We use Navision 3.70A SQL Server Option.
On the G/L Corresp. General Ledger form we have a checkbox called Calculate Detailed Balance. This calculation runs very slowly - too slowly for us. It is standard code and we haven't made any customizations there.
I tried playing with the SIFTLevelsToMaintain property of the keys in the G/L Entry table, but with no significant result.
So what is the cure? How can we boost the performance there?
Any suggestions would be very much appreciated.
Thanks in advance.
Nil desperandum
Comments
You can monitor the HW to see if there are any bottlenecks.
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
As for the hardware monitoring, we have some performance counters - which of them should we pay close attention to in this case?
Turning off some SIFT levels of some SIFT keys gave a bit of a performance boost, but it is not enough. Maybe there is some kind of SIFT level tuning for this task?
I think I should say again that it is standard code, so it should work fast, but it doesn't.
The first step is to watch CPU, HDD (read and write queues), network...
Being standard code is not enough to make it fast. Navision standard code is not optimal.
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
I collect the data through the standard client monitor and then run the Client Monitor form (from the Client Monitor.fob), which transforms the client monitor data into tables. Then I run the Client Monitor for Key and Cache usage forms - they show no records, so I conclude that everything is fine with key and cache usage there.
This code mainly calculates flowfields - maybe there is a way to tune the SIFT tables for better CALCFIELDS performance somehow?
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
Ok, here is the scope:
This code runs from form 12403 G/L Corresp. General Ledger when we check the Calculate Detailed Balance checkbox (it works very slowly - the issue):
CalculateBeginPeriodAmount()
These flowfields have the appropriate SumIndexFields in the G/L Entry table and a Table Filter: "G/L Account No.=FIELD(No.),G/L Account No.=FIELD(FILTER(Totaling)),Business Unit Code=FIELD(Business Unit Filter),Global Dimension 1 Code=FIELD(Global Dimension 1 Filter),Global Dimension 2 Code=FIELD(Global Dimension 2 Filter),Posting Date=FIELD(UPPERLIMIT(Date Filter)),Agreement No.=FIELD(Agreement Filter),Source Type=FIELD(Source Type Filter),Source No.=FIELD(Source No. Filter)".
These SumIndexFields are attached to the key in the G/L Entry table: "Source Type,Source No.,G/L Account No.,Global Dimension 1 Code,Global Dimension 2 Code,Business Unit Code,Agreement No.,Posting Date".
MaintainSQLIndex = MaintainSIFTIndex = Yes.
SIFTLevelsToMaintain has the default values.
So what should I do, given your last reply?
If you look at this key, it begins with an Option field (which has at most 5 values) - meaning that if you filter on it, you still get approximately 1/5 of all the data in the table (and this table can have millions of entries...). Try not maintaining this index (and SIFT) on SQL, and instead create a new one beginning with a field that has more distinct values (better selectivity). For example:
Use the same fields in SumIndexFields and try it now...
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
For SIFT fields you may need bigger keys, but maintain only the levels you really use.
For the key, better create an index: check which field (Source No. or G/L Account No.) has the most distinct values and use that as the first field.
Try to put the fields with a high number of distinct values at the beginning (so options and booleans go at the end). Dates are an exception: they are best kept as the last field.
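The ordering rule above can be sketched in a few lines. This is only an illustration in Python (not NAV code), and the sample field values are invented; it sorts key fields by their distinct-value counts, keeping dates last as advised:

```python
# Illustrative sketch: order key fields by selectivity (distinct-value
# count, descending), with date fields forced to the end.
# Field names mirror the G/L Entry key discussed in this thread;
# the sample data below is invented.
def order_key_fields(columns, date_fields=()):
    """columns: dict of field name -> list of sampled values."""
    non_dates = [f for f in columns if f not in date_fields]
    # Most distinct values first = best selectivity for the leading field.
    ordered = sorted(non_dates, key=lambda f: len(set(columns[f])), reverse=True)
    return ordered + [f for f in date_fields if f in columns]

sample = {
    "Source Type": ["Customer"] * 6 + ["Vendor"] * 4,            # 2 distinct (Option-like)
    "Source No.": ["C0001", "C0002", "C0003", "C0004", "C0005",
                   "C0006", "V0001", "V0002", "V0003", "V0004"],  # 10 distinct
    "G/L Account No.": ["2310", "2310", "2320", "2320", "2330",
                        "2330", "5410", "5410", "5420", "5420"],  # 5 distinct
    "Posting Date": ["2004-01-31"] * 10,                          # date: keep last
}
print(order_key_fields(sample, date_fields=("Posting Date",)))
# → ['Source No.', 'G/L Account No.', 'Source Type', 'Posting Date']
```

The Option-like Source Type ends up at the back of the non-date fields, which matches the advice to demote low-cardinality fields.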
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
I've redesigned some SIFT keys in the G/L Entry table according to your advice - it seems to work a bit faster now.
So is that all we can do to optimize performance here? Maybe there are some other tricks? In our case we can't upgrade the HW any more, and it doesn't look like we have any bottlenecks there, so the only way is to make the code run faster.
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
For example:
In my Warehouse Entry table, Variant Code alone has more distinct values than Lot No. does, BUT when we look at it on a per-item basis, Lot No. has more distinct values.
In that case, which key has better selectivity: "Item No.,Lot No.,Variant Code" or "Item No.,Variant Code,Lot No."?
It means that in your case "Lot No.,Item No.,Variant Code..." is the best key...
(By default, SQL Server uses just the first field of an index to calculate selectivity - this can be changed through settings...)
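The table-wide vs. per-item distinction above can be made concrete with a small sketch. This is Python used purely for illustration (not NAV or SQL code), and the Warehouse Entry rows are invented so that Variant Code wins table-wide while Lot No. wins per item:

```python
# Illustrative sketch: a field can have MORE distinct values table-wide
# yet FEWER distinct values per item - and it is the per-item count that
# decides how well the second key field narrows the result after an
# Item No. filter. All rows below are invented.
from collections import defaultdict

rows = [
    # (Item No., Variant Code, Lot No.) - lot names reused across items
    ("ITEM1", "V-A", "LOT1"), ("ITEM1", "V-A", "LOT2"), ("ITEM1", "V-B", "LOT3"),
    ("ITEM2", "V-C", "LOT1"), ("ITEM2", "V-C", "LOT2"), ("ITEM2", "V-D", "LOT3"),
]

def distinct(rows, col):
    """Distinct values of one column over the whole table."""
    return len({r[col] for r in rows})

def avg_distinct_per_item(rows, col):
    """Average number of distinct values of a column within each item."""
    per_item = defaultdict(set)
    for r in rows:
        per_item[r[0]].add(r[col])
    return sum(len(v) for v in per_item.values()) / len(per_item)

print(distinct(rows, 1), distinct(rows, 2))      # → 4 3  (Variant wins table-wide)
print(avg_distinct_per_item(rows, 1))            # → 2.0  (variants per item)
print(avg_distinct_per_item(rows, 2))            # → 3.0  (lots per item: Lot wins)
```

Because lots outnumber variants within each item, "Item No.,Lot No.,Variant Code" narrows the set faster once the item filter is applied, even though Variant Code looks more selective table-wide.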
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
1) How many records do you have in the table?
2) If you filter ONLY on one "Item No.", what is the average number of records selected?
3) If you filter ONLY on one "Lot No.", what is the average number of records selected?
4) If you filter ONLY on one "Variant Code", what is the average number of records selected?
5) Does it happen that the same "Variant Code" is used on multiple items?
If, for example, you have respectively (for questions 2, 3, 4) 1000, 100 and 300 records, the best would be to create a key on "Lot No." alone - you don't even need to add the other fields. Scanning 100 records is nothing for SQL.
And as Kine said, starting with "Lot No." would be a good idea, because if you know the lot, you know the item; but if you know the item, you don't know the lot.
A problem with the "Lot No." key: if you filter ONLY on "Item No.", you don't have a key starting with "Item No.", so you would need to create a separate key for that.
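The "average records selected" figures asked about above are just total rows divided by distinct values of the filter field. A small sketch, in Python for illustration only and with invented data sized like the 1000/100 example:

```python
# Illustrative sketch: estimate the average number of rows selected when
# filtering on a single value of one field. A low average means a
# one-field key is selective enough on its own. Data is invented.
from collections import Counter

def avg_rows_per_value(values):
    """Average rows returned when filtering on one value of this field."""
    counts = Counter(values)
    return sum(counts.values()) / len(counts)   # total rows / distinct values

# 2000 rows: 2 items x 1000 rows each, 20 lots x 100 rows each
item_col = ["ITEM1"] * 1000 + ["ITEM2"] * 1000
lot_col = [f"LOT{i}" for i in range(20) for _ in range(100)]

print(avg_rows_per_value(item_col))  # → 1000.0 rows per Item No. filter
print(avg_rows_per_value(lot_col))   # → 100.0 rows per Lot No. filter
```

With numbers like these, a key on "Lot No." alone leaves only ~100 rows to scan, which matches the reasoning above that scanning so few records costs SQL almost nothing.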
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!