Running out of entry numbers – use the ones below zero?

John_Reinhold Member Posts: 19
Issue: If a client has been using NAV 2009 on SQL for many years, there is a risk of running out of entry numbers, for example in the G/L Entry table. And imagine that it is imperative to keep using the same company instead of creating a new one.

Imagine this scenario: The first entry no. was set to 1 (many years ago) and has been incremented over time as posting happened, and continues to happen. At some point it will get close to the maximum possible value of an integer, 2,147,483,647. When it hits that limit, no more posting is possible, as NAV will throw a runtime error. But the integer's minimum value is not zero; it is in fact -2,147,483,647. So only half of the nearly 4.3 billion possible values are utilized. Brainstorming possible solutions, the idea of using the negative range from -2,147,483,647 to -1 has come up.
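To see how much headroom is actually left, a quick query against the SQL table will tell. This is just a sketch: the company prefix "CRONUS" is an example, and note that NAV's SQL naming turns "G/L Entry" into [CRONUS$G_L Entry] and "Entry No." into [Entry No_]:

```sql
-- How close is the ledger to the integer ceiling?
SELECT MAX([Entry No_])              AS CurrentMax,
       2147483647 - MAX([Entry No_]) AS NumbersLeft
FROM [CRONUS$G_L Entry];
```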

Of course, I know that if this is even possible, it is a major development task. The task includes finding and modifying every piece of C/AL code that reads, filters on, increments, and assigns entry numbers, just to mention some of the work. I would also expect certain other changes, such as field properties and relations. And I must decide on (and implement in the C/AL code) one of two principal approaches: start new entry numbers at -2,147,483,647 and increment, or start at -1 and decrement. All that is OK; let's say I have all the time in the world, and I do enjoy a challenge. I am very curious whether anyone has tried this approach and succeeded. I also don't know whether any minimum values are hard-coded to 1 inside the application itself, in the .exe and .dll files.
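To make the two approaches concrete, here is how the next entry no. in the negative series would be computed under each scheme, written as T-SQL against the example [CRONUS$G_L Entry] table (the real assignment would of course live in the C/AL posting code):

```sql
-- Approach 1: start at -2,147,483,647 and count upwards
SELECT COALESCE(MAX([Entry No_]) + 1, -2147483647) AS NextEntryNo
FROM [CRONUS$G_L Entry]
WHERE [Entry No_] < 0;

-- Approach 2: start at -1 and count downwards
SELECT COALESCE(MIN([Entry No_]) - 1, -1) AS NextEntryNo
FROM [CRONUS$G_L Entry]
WHERE [Entry No_] < 0;
```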

I know that if all this is done, any client using this modification will be running NAV on a completely unsupported platform. And I also know that asking for best practice to solve the problem results in only one correct answer: create a new company.

Additional information: Changing the datatype of the Entry No. field from integer to BigInteger has also been considered and rejected. There is a huge number of references to the field from fields in other tables, which would also need their datatype changed, some of them in fields that are part of keys. And the service window required for the downtime to convert the data and rebuild the keys is not very business friendly.

Any constructive thoughts on this issue are very much appreciated.
Regards,
John Reinhold

Answers

  • ThomasHagenmeyer Member Posts: 14
    Hello John,

    If I were to do such a task, I'd try not to change the code, but the data. That, too, is a massive task, I think: for each entry no., you have to identify the places (fields) where it is stored. Examples are the register records storing from and to entry no. information, or value entries where the item ledger entry no. is stored. All the fields have to be treated together in a consistent way (excluding fields where the entry no. is zero; that of course means "no entry" and has to be kept). Also analyze the constraints on database level (SQL Server constraints).
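    A starting point for that inventory could be the SQL Server catalog views. This is only a heuristic sketch, since NAV keeps its table relations in object metadata rather than in foreign keys, so a name-based scan will not catch everything:

    ```sql
    -- Candidate columns by name; [_] escapes the underscore wildcard in LIKE.
    SELECT t.name AS TableName, c.name AS ColumnName
    FROM sys.columns AS c
    JOIN sys.tables  AS t ON t.object_id = c.object_id
    WHERE c.name LIKE '%Entry No[_]%'
    ORDER BY t.name, c.name;
    ```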

    Having done that, I'd create a bunch of SQL statements to handle all these tables/fields. Every existing entry no. is decreased by 2,147,483,647 in all places where it is stored. Assuming you haven't reached the highest possible number in the positive range, you end up with negative numbers only.
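    A minimal sketch of such a shift (untested; the company prefix "CRONUS" and the choice of G/L Register as the referencing example are placeholders, and every identified field would need the same treatment in the same transaction):

    ```sql
    BEGIN TRANSACTION;

    -- The ledger itself: every entry no. moves into the negative range.
    UPDATE [CRONUS$G_L Entry]
    SET [Entry No_] = [Entry No_] - 2147483647;

    -- A referencing table: the register stores from/to entry no. ranges.
    -- A stored 0 means "no entry" and must be left untouched.
    UPDATE [CRONUS$G_L Register]
    SET [From Entry No_] = CASE WHEN [From Entry No_] <> 0
                                THEN [From Entry No_] - 2147483647
                                ELSE 0 END,
        [To Entry No_]   = CASE WHEN [To Entry No_] <> 0
                                THEN [To Entry No_] - 2147483647
                                ELSE 0 END;

    COMMIT TRANSACTION;
    ```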

    Then the 0 gets in the way of our logic: we cannot let the normal code just keep incrementing the highest existing entry no. by 1 for future postings starting from below zero, because eventually a new entry would get entry no. 0, which means "no entry" in the registers and maybe other places. If you are allowed to change the posting codeunits, you could change just the incrementing part of the entry-creating routine to check for that and increment by 2 instead of 1 when the highest existing entry no. is -1.
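    The skip-zero rule, written out in T-SQL just to show the logic (the actual change belongs in the C/AL routine that assigns the next entry no.):

    ```sql
    SELECT CASE WHEN MAX([Entry No_]) = -1
                THEN MAX([Entry No_]) + 2  -- jump over 0, straight to 1
                ELSE MAX([Entry No_]) + 1
           END AS NextEntryNo
    FROM [CRONUS$G_L Entry];
    ```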

    An additional caveat: I don't know whether there are consistency checks that expect consecutive entry numbers; if so, the zero gap has to be handled there as well.

    That would be my approach – though I'd do everything I could to avoid ever needing it.

    Best Regards,

    Thomas
  • John_Reinhold Member Posts: 19
    Hello Thomas,

    (Maybe I should have stated that I am a deeply technical developer (C/AL, AL, SQL, .NET, etc.), started out in the 90s, and have access to change just about every piece of code.)

    That said, your suggestion is interesting. Your point about not changing any code (except for skipping Entry No. = 0) is very good. In fact, I did think about subtracting 2.1 billion from all the entry numbers, and from their references. So far (I'm not done), I've spotted thirteen tables (using OMA) that reference Entry No. in G/L Entry. Some of these also have the field in one or more keys, some primary. And those fields are referenced by yet other fields. All of these must be recalculated along with it, which will also trigger updates of the keys. But I still can't tell whether the application itself (.exe/.dll) works with negative entry numbers, and it would be bad to use up many hours discovering that it does not. I did consult a couple of engineers at Microsoft, and their response was 'if you want to be sure, create a new company'. But how could they say anything else? I know that I'm far out with this issue, but I was surprised that they haven't even considered it in BC.
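    If I prototype this, one thing worth measuring is whether it pays to disable the secondary indexes before the mass updates and rebuild them afterwards, instead of letting SQL Server maintain them row by row. Roughly like this (the index name $1 is only an example of NAV's usual naming of secondary indexes, and the clustered index must stay enabled or the table goes offline):

    ```sql
    -- Disable one secondary index; repeat for each nonclustered index.
    ALTER INDEX [$1] ON [CRONUS$G_L Entry] DISABLE;

    -- ... run the UPDATE statements here ...

    -- REBUILD re-enables and rebuilds all indexes on the table.
    ALTER INDEX ALL ON [CRONUS$G_L Entry] REBUILD;
    ```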

    I'll give your suggestion a renewed consideration, and then maybe do some prototyping and measuring of processing time. So thanks 😊

    /John
    Regards,
    John Reinhold
  • vaprog Member Posts: 1,140
    I can't imagine how the platform ("the application itself") would expect "G/L Entry"."Entry No." to have any particular property. After all, it is a regular field in a regular table. It really should affect only the application (the code contained in the objects).

    I personally would feel more comfortable changing the type of "Entry No." to BigInteger. More places than the 13 tables you mention above would need to be changed, including variables. But negative entry no.s sound more like trouble to me.