SalesLine2.RESET;
SalesLine2.SETCURRENTKEY(
  "Document Type",Status,Type,"No.","Variant Code","Logistic Status",
  "Location Code",DateCreated,TimeCreated,"Outstanding Qty. (Base)");
SalesLine2.SETRANGE("Document Type",SalesLine2."Document Type"::Order);
SalesLine2.SETRANGE(Status,SalesLine2.Status::"Waiting for Payment");
SalesLine2.SETRANGE(Type,SalesLine2.Type::Item);
SalesLine2.SETRANGE("No.",Item."No.");
SalesLine2.SETRANGE("Variant Code",SalesLine."Variant Code");
SalesLine2.SETRANGE("Logistic Status",SalesLine2."Logistic Status"::" ");
SalesLine2.SETRANGE("Location Code",SalesLine."Location Code");
SalesLine2.CALCSUMS("Outstanding Qty. (Base)");
SalesLine2.SETRANGE("Variant Code", SalesLine."Variant Code");
IF SalesLine."Variant Code" <> '' THEN SalesLine2.SETRANGE("Variant Code", SalesLine."Variant Code");
Comments
2) SIFT tables on the Sales Line table are very sensitive, and it is worth considering not maintaining them at all. In most cases calculating the sum without SIFT is fast enough (if you want to sum 50 lines of a selected document, you do not need a maintained SIFT table for that). If you do have SIFT on this table and you are not maintaining the SIFT tables on SQL (by optimizing the table or through some maintenance script), you can find that the SIFT tables are HUGE and full of zero-value records for documents that are no longer in the Sales Line table. And when the SIFT table is full of such records, the sum can be slow because of that.
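For example, without any SIFT at all the same total can be computed with a simple loop. A minimal sketch only (OutstandingQtyBase is a new local decimal variable, and the filters are the ones from the snippet above):

// Sketch: summing the filtered lines manually instead of relying on a
// maintained SIFT index. Fine when the filtered set is small.
OutstandingQtyBase := 0;
SalesLine2.RESET;
SalesLine2.SETRANGE("Document Type",SalesLine2."Document Type"::Order);
SalesLine2.SETRANGE(Status,SalesLine2.Status::"Waiting for Payment");
SalesLine2.SETRANGE(Type,SalesLine2.Type::Item);
SalesLine2.SETRANGE("No.",Item."No.");
IF SalesLine."Variant Code" <> '' THEN
  SalesLine2.SETRANGE("Variant Code",SalesLine."Variant Code");
SalesLine2.SETRANGE("Logistic Status",SalesLine2."Logistic Status"::" ");
SalesLine2.SETRANGE("Location Code",SalesLine."Location Code");
IF SalesLine2.FIND('-') THEN
  REPEAT
    OutstandingQtyBase := OutstandingQtyBase + SalesLine2."Outstanding Qty. (Base)";
  UNTIL SalesLine2.NEXT = 0;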
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
I'm just moving to SQL for the first time and have only just restored the database, so I can't imagine the SIFT tables have many entries from old documents that are no longer there.
I was curious why it would be calculating the SUM of all sumindexfields that are in ALL my keys. I would understand if it was just doing it on the fields that were in ONE of the keys.
So I have two test systems, one on Native and one on SQL, both newly restored and both exactly the same. But the code I showed takes 1 second on Native and 30 to 60 seconds on SQL.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
I never fully understood why Navision calculates all the values, and I'm not convinced there is no performance cost to that.
If you calculate only one value with some filters, you can easily create a covering index for it; you cannot do that if it calculates them all.
Try to remove the setcurrentkey. It is nonsense for SQL.
Also eliminate the filters you do not actually need, like the Variant Code, and maybe more.
Look at the definition of the SIFT table. Are the fields you filter on at the front? Optimising SIFT is not easy and is very customer-specific.
Good luck.
That is the SQL that was generated by CALCSUMS on one field.
Removing the SETCURRENTKEY was no good: it gave me the error that the current key does not contain all the fields that have been filtered, so the CALCSUMS could not execute. It must have fallen back to the primary key, and as you can see above I filter on more than that.
Try to enable "MaintainSIFTINdex" on the proper key.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
No please.
This SIFT level is off by default.
Archive your sales lines like Kamil said.
Always try to minimise the number of sales lines. This also goes for Native databases.
But I think it's a better idea to first test one simple flag before changing business rules like archiving data and such, especially if it's only for temporary reasons.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
"No.","Location Code","Document Type",Status,Type,"Logistic Status","Variant Code"
might also help.
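Purely as a sketch, the CALCSUMS block would then look something like this, assuming a key in that order exists on the Sales Line table with "Outstanding Qty. (Base)" defined as its SumIndexField rather than as a key field:

// Hypothetical sketch: same filters, but sorted on the suggested key order so
// the most selective fields come first. The key itself still has to be defined
// on the Sales Line table with "Outstanding Qty. (Base)" as a SumIndexField.
SalesLine2.RESET;
SalesLine2.SETCURRENTKEY(
  "No.","Location Code","Document Type",Status,Type,"Logistic Status","Variant Code");
SalesLine2.SETRANGE("No.",Item."No.");
SalesLine2.SETRANGE("Location Code",SalesLine."Location Code");
SalesLine2.SETRANGE("Document Type",SalesLine2."Document Type"::Order);
SalesLine2.SETRANGE(Status,SalesLine2.Status::"Waiting for Payment");
SalesLine2.SETRANGE(Type,SalesLine2.Type::Item);
SalesLine2.SETRANGE("Logistic Status",SalesLine2."Logistic Status"::" ");
IF SalesLine."Variant Code" <> '' THEN
  SalesLine2.SETRANGE("Variant Code",SalesLine."Variant Code");
SalesLine2.CALCSUMS("Outstanding Qty. (Base)");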
BTW: it is a VERY bad idea to put decimals in the key, especially if you use that field as a SIFT field.
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
I don't know. This looks like an index defined especially to make this FlowField/SumIndexField work. It might be disabled on SQL.
If you redesign indexes, make them as short as possible but keep sorting in mind. If sorting becomes too high a cost, you might have to redesign part of the app.
If you have a large result set in SQL, the sorting takes place in TempDB. AFAIK the query optimiser does not take that cost into account, but maybe that is old or wrong information.
I know that you know but does Mauddib know that you know...
duh...
2) We didn't talk about HW: can you send us the HW specification of the client and server?
3) Which version of MS SQL and which SP?
4) Version of client?
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
I don't know if he knows ... but he should know. You're right that I should have elaborated it more.
Nobody talked about "Included Columns" either.
Maybe it's worth trying to go that route - in a test phase? If you do, know that the included columns won't be backed up with a NAV backup...
Something else I wanted to know:
How critical is this query? Is it something that is executed many times a day, or isn't it that "hot"?
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
I may be hijacking the thread but this is an interesting subject Mark brought up.
At some point I started thinking like Mark: unless you really need your data sorted a certain way (say for a grouping report or some similar concept), it is better not to use SETCURRENTKEY, since Navision then adds an extra load by sending an ORDER BY along with the SELECT statement.
But then I started doing that and I saw a drop in performance. Assuming you have already designed a SQL-viable key in Navision, it is better to use SETCURRENTKEY than not to, especially for large result sets.
In one example I was filtering the Sales Line by Customer and Customer PO No. The result set could be anywhere between 1K and 40K lines. The problem is that, if you do not specify a key, Navision sends an ORDER BY on the primary key, and that sometimes makes SQL decide to do a clustered index scan rather than using my good Customer PO No., Customer No. key.
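Roughly like this, as a sketch; "Customer PO No." is the custom field from this example (not a standard Sales Line field) and CustomerNo/CustomerPONo are just placeholder variables:

// Without SETCURRENTKEY: NAV still adds an ORDER BY on the primary key
// (Document Type, Document No., Line No.), which can push SQL towards a
// clustered index scan.
SalesLine.RESET;
SalesLine.SETRANGE("Sell-to Customer No.",CustomerNo);
SalesLine.SETRANGE("Customer PO No.",CustomerPONo);

// With SETCURRENTKEY on the matching key: the ORDER BY matches the index that
// covers the filters, so SQL is far more likely to seek on it.
SalesLine.RESET;
SalesLine.SETCURRENTKEY("Customer PO No.","Sell-to Customer No.");
SalesLine.SETRANGE("Sell-to Customer No.",CustomerNo);
SalesLine.SETRANGE("Customer PO No.",CustomerPONo);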
Sooo ... I am still torn now as to whether to use it or not ... (and for now I am still using it :oops: )
P.S.1. Since MS will take about 1000 years to optimize standard Navision for SQL, they could make our lives so much easier by adding a boolean parameter to SETCURRENTKEY: FALSE to suppress the ORDER BY on SQL and TRUE to send it. If the default were TRUE (and it were a "silent" parameter like with LOCKTABLE), all the standard code would behave the same on both Native and SQL, but we would get so much more control over performance tuning.
P.S.2. Also, Mark, while covering indexes are nice, I cannot remember how many times I created them on the test db while tuning and then forgot to add them to the live one, or had them overwritten simply by adding a new field to the key in Navision.
I ran into the order by clause issue a few days ago, and had to fix code.
Independent Consultant/Developer
blog: https://dynamicsuser.net/nav/b/ara3n
"Item No.,Open,Variant Code,Positive,Location Code,Posting Date,Expiration Date,Lot No.,Serial No.".
and filters on
"Item No.",Open,"Variant Code",Positive,"Location Code".
My colleague had added "Document No." to it and when posting transfers, sometimes SQL decided to use a key starting with "Document No." for some report. Even after a rebuildindex!
I disabled the key in SQL, added again the original key BEFORE the changed one and used "Item No.,Open,Variant Code,Positive,Location Code" as SQLIndex and that fixed the problem.
The weirdest part was that it happened on SQL 2000 and not on SQL 2005. It seems that SQL sometimes puts too much weight on the ORDER BY clause when selecting a key.
There should be an option (boolean) on SETCURRENTKEY to tell Navision to send an ORDER BY exactly as stated in the SETCURRENTKEY, rather than the key it finds in Navision with the primary key fields appended!
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
To follow up with some further advice, since the original poster also said that he is a SQL newbie, he should be changing the SQLIndex property of the key from within Navision.
Also, regarding Mark's suggestion to drop the SETCURRENTKEY: I know a lot of new information has been learned since, but in the "Tuning Navision for Better Performance Rev 5" document published by Microsoft, they explicitly said to always use SETCURRENTKEY, AND to SETRANGE/SETFILTER in key order without leaving holes, which is exactly what the OP has done.
I certainly trust the newer information that you have brought to the table, Mark, but MS needs to catch up!
This is true. I should have mentioned that. Sorry.
Navision does that to simulate Native behaviour.
SQL 2005 has a tendency to go for the clustered index much faster than you would like it to.
If you use SETCURRENTKEY on SQL, then try to make your NAV key short as well. This might sound strange, but just yesterday I noticed issues at a client where making an index shorter and more selective (and thereby destroying the original index used for the ORDER BY) ended up causing too many reads.
Split that up into two pieces, by the way. If you read uncommitted data, then reading 1000 records while wanting 10 is something SQL can handle beautifully without issues. If you read with UPDLOCK, you might introduce locking in the database even when working with small record sets.
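As a sketch of that difference in C/AL terms (on the SQL option, LOCKTABLE makes the following reads on that record variable use UPDLOCK; without it NAV reads uncommitted data):

// 1) Plain read: NAV reads uncommitted data, so scanning more rows than you
//    strictly need is cheap and does not block other users.
SalesLine2.RESET;
SalesLine2.SETRANGE("Document Type",SalesLine2."Document Type"::Order);
SalesLine2.SETRANGE("No.",Item."No.");
IF SalesLine2.FIND('-') THEN; // dirty read, no locks taken

// 2) Locked read: after LOCKTABLE the same FIND runs with UPDLOCK, so even a
//    small result set can start blocking other sessions.
SalesLine2.RESET;
SalesLine2.LOCKTABLE;
SalesLine2.SETRANGE("Document Type",SalesLine2."Document Type"::Order);
SalesLine2.SETRANGE("No.",Item."No.");
IF SalesLine2.FIND('-') THEN; // UPDLOCK read, can block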
Confused? I am... :?
I guess I just need to read a LOT more about SIFT and SIFT levels and so on. I've programmed a lot in Navision, both on and off SQL, but this is my first move from one to the other.
I can recommend the SQLPerform workshops to get you up to speed... .
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
Senior SDE
Microsoft Dynamics Nav | Server & Tools