I have a report based on the Item Ledger Entry table that does a lot of calculations.
I monitored it and saw these SQL statements taking about 10 minutes:
exec sp_cursorfetch 180150747,2,0,35
and
FETCH API_CURSOR0000000000002E57
I also detected in SQL Profiler a statement with this execution plan:
CPU - 0
Reads - 241
Writes - 0
Duration - 300
Clustered Index Insert - Cost 19%
Compute Scalar - Cost 0%
Nested Loop (Inner Join) - Cost 0%
Index Seek (Non Clustered) - Index $3 - Cost
Key Lookup (Clustered) - Cost 75%
What can I do?
Comments
According to Microsoft (http://msdn.microsoft.com/en-us/library/dd355379.aspx), the general rules for using FINDSET are the following:
• FINDSET(FALSE,FALSE) - Read-only. This uses no server cursors and the record set is read with a single server call.
I have replaced the C/AL code in the REPEAT-UNTIL loops from FIND('-') and FINDFIRST to FINDSET, and the report runs much faster.
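For illustration, a minimal sketch of the pattern change (ItemLedgerEntry, ItemNo and TotalQty are made-up variable names, not from the actual report):

// Before: FIND('-') opens a server-side cursor and fetches rows batch by batch
ItemLedgerEntry.SETRANGE("Item No.", ItemNo);
IF ItemLedgerEntry.FIND('-') THEN
  REPEAT
    TotalQty := TotalQty + ItemLedgerEntry.Quantity;
  UNTIL ItemLedgerEntry.NEXT = 0;

// After: FINDSET(FALSE,FALSE) is read-only and reads the record set
// with a single server call, as the MSDN rule above describes
ItemLedgerEntry.SETRANGE("Item No.", ItemNo);
IF ItemLedgerEntry.FINDSET(FALSE,FALSE) THEN
  REPEAT
    TotalQty := TotalQty + ItemLedgerEntry.Quantity;
  UNTIL ItemLedgerEntry.NEXT = 0;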
In SQL Profiler I see differences in the query (like Stryk mentions in the PowerPoint), where the value 2001 shows up because of the Caching Record Set setting.
241 Reads also isn't that much.
2000 is an unusually big value for Caching Record Set. The maximum value you can use in NAV 2009 R2 is 1000. The default is 50.
Tobias
I don't know why, but the Caching Record Set value was already set to 2000.
If I try to change it, the following message appears: maximum value allowed is 1000.
I also detected in SQL Profiler a statement with these parameters:
CPU - 62
Reads - 5514
Writes - 0
Duration - 1728
Don't you think (in this case) that the number of reads is very high?
Thanks in advance.
If there was only 1 record, then it is very high.
If there were 1,000,000 records, then it is very low.
So: how many records were in the filter?
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
So there is not much room for optimization.
If you have thousands or millions of Item Ledger Entries for one single item, then 5000 reads isn't much.
Most of the time it isn't very useful to just look at a trace file in Profiler and pick a single statement.
What you want to know is which statements in total produced the most Reads, Writes, Duration, etc.
There are some free SQL trace analysis tools, like ClearTrace or Qure Analyzer, that can parse trace files and show you aggregates.
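If you want a quick aggregate without extra tools, a T-SQL sketch like this over the trace file should also work (the file path is hypothetical; on SQL Server 2005 and later the Duration column in a trace is in microseconds):

-- Aggregate a Profiler trace file by statement text (path is an example)
SELECT
    CAST(TextData AS NVARCHAR(200)) AS Stmt,
    COUNT(*)                        AS Execs,
    SUM(Reads)                      AS TotalReads,
    SUM(Duration) / 1000            AS TotalDurationMs  -- Duration is in microseconds
FROM sys.fn_trace_gettable(N'C:\Traces\nav_trace.trc', DEFAULT)
WHERE TextData IS NOT NULL
GROUP BY CAST(TextData AS NVARCHAR(200))
ORDER BY TotalReads DESC;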
Tobias
The report speeds up.
When the FETCH API_CURSOR appears, I measure with Qure Analyzer:
Duration: 8.22 min
CPU: 34.02 sec
Reads: 760.57K
Writes: 0
Row Count: 147.79K
Event Count: 4.19K
Thanks for your answers.
Best Regards.
Another consideration:
T32 "Item Ledger Entry" has - as many other LE tables - a general problem. By default the "Clustered Index" is set to the "Primary Key", which is "Entry No.". Hence, the records are physically sorted by "Entry No.".
But this means, that records belonging to the same "Item No." may be spread over wide areas of that table! Thus, if you query filtering on "Item No." SQL Server might need to read the data over wide areas. The more records there are and the more they are "spread out", the higher is the probabilty that SQL Server starts scanning the table, causing huge I/O etc..
In some cases it is feasible to change this "Clustered Index", e.g. by adding a new key: "Item No.","Entry No." - no SQLIndex property, marked as "Clustered".
In this case the records will be sorted by "Item No." first, hence, records belonging to the same Item are physically next to each other! So the range to read those records is dramatically smaller, reducing I/O, avoiding "scan" issues ...
Keep in mind that changing the CI is a huge workload on the system (all records have to be re-arranged)! And of course: this needs to be tested thoroughly first!
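On the SQL Server side the result would look roughly like this - a sketch only, with example table/index names; normally you let NAV create it by marking the new key as "Clustered", and the existing clustered PK on "Entry No." has to become nonclustered first:

-- Sketch: new clustered index on T32 (names are examples, not NAV's generated names)
CREATE UNIQUE CLUSTERED INDEX [ILE$ItemNo_EntryNo]
    ON dbo.[CRONUS$Item Ledger Entry] ([Item No_], [Entry No_])
    WITH (FILLFACTOR = 90);  -- leave room for page splits; 90 is just an example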
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
1. If you change the CI to "Item No.","Entry No.", both fields have to be duplicated in every secondary index (the CI is part of every secondary index). So every secondary index will use more disk/RAM space and produce more I/O when reading/writing that index.
2. Page splits in the CI will occur. With just "Entry No." as the CI we have an ever-increasing field as the CI, so we won't see page splits and we don't need to reindex the CI (Item Ledger Entries will rarely be deleted). With "Item No.","Entry No." there will be many page splits and we will have to reindex a lot (one way to measure this is sketched below). Item Ledger Entry is often one of the biggest tables in our DBs, and a reindex takes a lot of time. And if you don't have SQL Server Enterprise Edition, you have to reindex offline.
3. I don't see the scanning problem. We have secondary indexes that can be seeked on "Item No.".
We only avoid bookmark lookups when filtering for "Item No.".
If we filter for "Item No." plus some other fields, chances are good that SQL Server will use a secondary index plus bookmark lookups.
There may be situations where all this overhead is justified, but those are very rare.
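To put numbers on point 2, a sketch for watching how fragmented the indexes actually get (assuming SQL Server 2005 or later; the table name is an example):

-- Check fragmentation of all indexes on Item Ledger Entry (example names)
SELECT i.name, s.index_id, s.avg_fragmentation_in_percent, s.page_count
FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID(N'dbo.[CRONUS$Item Ledger Entry]'),
        NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
ORDER BY s.avg_fragmentation_in_percent DESC;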
Just my opinion ...
Tobias
Indeed it IS a problem which affects most of the LE tables! Item Ledger Entry, G/L Entry, Customer Ledger Entry, etc. - they are all affected by the same problem, and from a physical perspective it is a problem which is responsible for many index scans (or other stupid QEPs). "Entry No." is actually the worst CI you could have - true, it has the highest selectivity, but data is almost never queried on that field. The primary criterion for querying data is - I daresay in 99% of all cases - something with "Item No.", thus arranging the data on the basis of this "primary query criterion" is a smart idea! And besides solving some I/O issues, this also has a tremendous impact on some blocking situations, as in SERIALIZABLE transactions (LOCKTABLE) usually range locks are established. The wider that range, the more blocking potential ...
Generally: as you said, having the CI on the PK is mostly a smart idea, but in some cases other considerations may apply ... as so often: it depends ...
True. But actually we are talking about "peanuts" here ... In NAV there are far worse indexes ...
Again, also "Page Splitting" is daily business. Actually that's just a matter of adjusting the approriate Index-Fillfactor (if you have tools that could calculate this optimizerd FF 8) ) and the method you use to defragment/reindex (IMHO: the standard features suck, that's why I use optimized utilities).
While a "Entry No." CI needs a FF of 100%, with changing you need something below, hence, the index will grow. But then again: size does not matter. Technically larger indexes require more effort in updating and maintaining them, but we're talking about microseconds here - something which might be acceptable compared to the potential benefit regarding read-performance and blocking.
So if indeed the write-performance/"experience" is affected strongly depends on the underlying hardware resources.
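For what it's worth, applying such a fill factor during a reindex is a one-liner - a sketch with example names, and the 90 is an example value, not a recommendation:

-- Rebuild the CI with headroom for page splits (names and FF are examples)
ALTER INDEX [ILE$ItemNo_EntryNo] ON dbo.[CRONUS$Item Ledger Entry]
    REBUILD WITH (FILLFACTOR = 90);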
This also depends. The problem actually occurs with Items that are posted dramatically more often than others - if the share of a certain "Item No." is remarkably greater than the others', this actually decreases the "selectivity" of that field ...
If you have a problematic query on "Item No.", you'll indeed mostly see the QEP performing "Index Seeks" (rarely a "Scan") but causing thousands of "Reads", as the "Key Lookup" operation causes most of the effort ...
How do you avoid "Bookmark Lookups"? If NAV is firing a SELECT *, you'll always have a key lookup, except if your index is covering - which means in this case: it INCLUDES all (!) other fields ... Creating such an index on T32 is something I absolutely would NOT recommend ...
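Just to illustrate what "covering" would mean - a sketch with an example field list; a truly covering index for SELECT * would have to INCLUDE every remaining column of the table, which is exactly why I would not do this on T32:

-- Sketch of a covering index (field list is an example; NOT recommended on T32)
CREATE NONCLUSTERED INDEX [ILE$Covering_Example]
    ON dbo.[CRONUS$Item Ledger Entry] ([Item No_])
    INCLUDE ([Posting Date], [Quantity], [Location Code]);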
Not so rare in my experience - and I deal with this every day.
To sum up:
Changing a Clustered Index is something which needs to be discussed within the context of the current problem. There are advantages and disadvantages which have to be weighed (I guess Tobias and I gave an example of such a discussion) - what's a solution for one system might be a problem for another; with NAV/SQL performance there's hardly something like "IF THIS HAPPENS THEN DO THAT", it's mostly "IT DEPENDS" ...
Thus, having all this in mind: just test to find out if this could help you!
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
I see your points.
If I find the time, I will do some tests to find out if they overrule my points :-)
What I would say is:
We avoid "Bookmark Lookups" when changing CI to "Item No.", "Entry No.".
So that's the one big advantage of changing the CI.
Tobias