sp_cursorfetch, FETCH API CURSOR - how to handle this?

nav_student
Member Posts: 175
I have a Report based on the Item Ledger Entry table that does a lot of calculation.
I monitored it and saw these SQL statements taking about 10 minutes:
exec sp_cursorfetch 180150747,2,0,35
and
FETCH API_CURSOR0000000000002E57
I also detected on SQL Profiler an instruction that has this execution plan:
CPU - 0
Reads - 241
Writes - 0
Duration - 300
Clustered Index Insert - Cost 19%
Compute Scalar - Cost 0%
Nested Loop (Inner Join) - Cost 0%
Index Seek (Non Clustered) - Index $3 - Cost
Key Lookup (Clustered) - Cost 75%
What can I do?
Comments
-
Take a look at Jörg Stryk's recent session at Techdays - Mibuso frontpage.
-
Thanks rhpnt. I saw Stryk's PowerPoint where he explains the workload of FINDSET, FINDFIRST, and FIND('-') on SQL (through cursors).
According to Microsoft http://msdn.microsoft.com/en-us/library/dd355379.aspx
The general rules for using FINDSET are the following:
• FINDSET(FALSE,FALSE) - Read-only. This uses no server cursors and the record set is read with a single server call.
I have replaced the C/AL code in the REPEAT - UNTIL loops from
FIND('-')
FINDFIRST
to
FINDSET
and the report runs much faster.
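To illustrate, here is a minimal C/AL sketch of that change (the record variable, the filter, and the loop body are assumptions for illustration, not the actual report code):

// Before: FIND('-') (or FINDFIRST used to start a loop) makes the client
// open a server-side cursor, which shows up in SQL Profiler as the
// sp_cursorfetch / FETCH API_CURSOR calls above.
ItemLedgerEntry.SETRANGE("Item No.", Item."No.");
IF ItemLedgerEntry.FIND('-') THEN
  REPEAT
    // ... calculations per record ...
  UNTIL ItemLedgerEntry.NEXT = 0;

// After: FINDSET(FALSE,FALSE) is read-only and fetches the record set in a
// single server call - the SELECT TOP ... statement seen in Profiler.
ItemLedgerEntry.SETRANGE("Item No.", Item."No.");
IF ItemLedgerEntry.FINDSET(FALSE,FALSE) THEN
  REPEAT
    // ... calculations per record ...
  UNTIL ItemLedgerEntry.NEXT = 0;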
I see in SQL Profiler differences in the query (like Stryk mentions in the PowerPoint):
SELECT TOP * FROM "databasename"."dbo"."CompanyName$Item Ledger Entry" WITH (READUNCOMMITTED) WHERE (("Item No_"=@P1)) ORDER BY "Item No_","Entry Type","Variant Code","Drop Shipment","Location Code","Posting Date","Entry No_"
SELECT TOP 2001 * FROM "databasename"."dbo"."CompanyName$Item Ledger Entry" WITH (READUNCOMMITTED) WHERE (("Item No_"=@P1)) ORDER BY "Item No_","Entry Type","Variant Code","Drop Shipment","Location Code","Posting Date","Entry No_"
where the value 2001 corresponds to the "Record Set" caching size (2000) plus one.
-
Thanks ndbcs.
I don't know why, but the 2000 value for the Record Set cache was already set up.
If I try to change it, the following message appears: maximum value allowed is 1000.
I also detected in SQL Profiler an instruction with these parameters:
CPU - 62
Reads - 5514
Writes - 0
Duration - 1728
SELECT TOP 2001 * FROM "databasename"."dbo"."companyname$Item Ledger Entry" WITH (READUNCOMMITTED) WHERE (("Item No_"=@P1)) ORDER BY "Item No_","Entry Type","Variant Code","Drop Shipment","Location Code","Posting Date","Entry No_"
Don't you think (in this case) that the number of reads is very high?
Thanks in advance.
-
I'll give you THE SQL Server answer: it depends.
If there was only 1 record, then it is very high.
If there were 1.000.000 records, then it is very low.
So: how many records were in the filter?

Regards, Alain Krikilion
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
-
You filter for "Item No." and you have an index starting with "Item No." (i guess).
So, there is not so much space for optimizations.
If you have thousands or millions of Item Ledger Entries for one single item than 5000 reads isn't much.
Most times it isn't very usefull, to just look in trace file with Profiler and pick a single statement.
What you want to know is, wich statements have in sum produced most Reads, Writes, Duration etc.
There are some free SQL trace analyze tools like ClearTrace or Qure Analyzer that can parse trace files and show you aggregates.
Tobias0 -
I solved the problem by using FINDSET.
The report speeds up.
For the FETCH API_CURSOR statement I measured with Qure Analyzer:
Duration: 8,22 min
CPU: 34,02 sec
Reads: 760,57K
Writes: 0
Row Count: 147,79K
Event Count: 4,19K
Thanks for your answers.
Best Regards.
-
Great to have another example of how FINDSET can improve things by avoiding these "cursors"!
Another consideration:
T32 "Item Ledger Entry" has - as many other LE tables - a general problem. By default the "Clustered Index" is set to the "Primary Key", which is "Entry No.". Hence, the records are physically sorted by "Entry No.".
But this means, that records belonging to the same "Item No." may be spread over wide areas of that table! Thus, if you query filtering on "Item No." SQL Server might need to read the data over wide areas. The more records there are and the more they are "spread out", the higher is the probabilty that SQL Server starts scanning the table, causing huge I/O etc..
In some cases it is feasible to change this "Clustered Index", e.g. by adding a new Key: Item No., Entry No. - no SQLIndex property, mark as "Clustered".
In this case the records will be sorted by "Item No." first, hence, records belonging to the same Item are physically next to each other! So the range to read those records is dramatically smaller, reducing I/O, avoiding "scan" issues ...
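To illustrate, a rough sketch of the key change Jörg describes, as it would appear in the NAV Table Designer for T32 (an outline only; the exact property layout depends on the NAV version):

Key: "Entry No."                      // remains the primary key
  Clustered = No
Key: "Item No.","Entry No."           // new key
  Clustered = Yes                     // becomes the SQL Server clustered index
  SQLIndex  = <empty>                 // no SQLIndex property, as suggested

Only one key per table can have Clustered = Yes, so setting it on the new key moves the physical sort order of the table to "Item No.","Entry No." while "Entry No." stays the logical primary key.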
Bear in mind that changing the CI is a huge workload on the system (all records have to be rearranged)! And of course, this needs to be tested thoroughly first!

Jörg A. Stryk (MVP - Dynamics NAV)
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
-
I don't think that it's a problem to have Entry No. as the CI; in fact, I think it's in general the best CI you can have (unique, narrow, static, ever-increasing). (If they changed it to auto-increment it would be even better, but that's another story.)
1. If you change to Item No., Entry No., both fields have to be duplicated in every secondary index (the CI is part of every secondary index). So every secondary index will use more disk/RAM space and produce more I/O when reading/writing that index.
2. Page splits in the CI will occur. With just Entry No. as the CI we have an ever-increasing field, so we won't see page splits and we don't need to reindex the CI (Item Ledger Entries are rarely deleted). With Item No., Entry No. there will be many page splits and we will have to reindex a lot. Item Ledger Entry is often one of the biggest tables in our DBs, and reindexing takes a lot of time. And if you don't have SQL Server Enterprise Edition, you have to reindex offline.
3. I don't see the scanning problem. We have secondary indexes that can be seeked on Item No.
We only avoid bookmark lookups when filtering for Item No.
If we filter on Item No. plus some other fields, chances are good that SQL Server will use a secondary index plus bookmark lookups.
There may be situations where all this overhead is justified, but those are very rare.
Just my opinion ...
Tobias
-
You're absolutely right! It's all a matter of balance and "if this, then that, or maybe not" - that's why I said "test thoroughly". But bear in mind that your concerns basically affect 99% of all indexes in NAV, thus changing the CI on T32 is not such an unusual thing.

ndbcs wrote: I don't think that it's a problem to have Entry No. as the CI [...]
Generally: as you said, having the CI on the PK is mostly a smart idea, but in some cases other considerations may be made ... as so often: it depends ...

ndbcs wrote: 1. If you change to Item No., Entry No., both fields have to be duplicated in every secondary index [...]
ndbcs wrote: 2. Page splits in the CI will occur. [...]
While an "Entry No." CI can use a fill factor of 100%, after the change you need something below that, hence the index will grow. But then again: size does not matter much. Technically, larger indexes require more effort in updating and maintaining them, but we're talking about microseconds here - something which might be acceptable compared to the potential benefit regarding read performance and blocking.
Whether the write performance/"experience" is indeed affected strongly depends on the underlying hardware resources.

ndbcs wrote: 3. I don't see the scanning problem. We have secondary indexes that can be seeked on Item No. [...]
If you have a problematic query on "Item No.", you'll indeed mostly see the QEP performing "Index Seeks" (rarely a "Scan") but causing thousands of "Reads", as the "Key Lookup" operation causes most of the effort ...

ndbcs wrote: We only avoid bookmark lookups when filtering for Item No. [...]
ndbcs wrote: There may be situations where all this overhead is justified, but those are very rare.
To sum up:
Changing a Clustered Index is something which needs to be discussed within the context of the current problem. There are advantages and disadvantages which have to be compared (I guess Tobias and I gave an example of such a discussion) - what's a solution for one system might be a problem for another; with NAV/SQL performance there's hardly anything like "IF THIS HAPPENS THEN DO THAT", it's mostly "IT DEPENDS" ...
Thus, having all this in mind: just test to find out if this could help you!

Jörg A. Stryk (MVP - Dynamics NAV)
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
-
Jörg, thanks for your constructive reply.
I see your points.
If I find the time, I will do some tests to find out if they overrule my points :-)

Stryk wrote: How do you avoid "Bookmark Lookups"? If NAV is firing a SELECT *, you'll always have a key lookup, except if your index is covering - meaning, in this case, that it INCLUDES all (!) other fields ... Creating such an index on T32 is something I absolutely would NOT recommend ...
What I would say is:
We avoid "Bookmark Lookups" when changing CI to "Item No.", "Entry No.".
So that's the one big advantage of changing the CI.
Tobias
-
Ah, now I understand! Yep, I agree ...

Jörg A. Stryk (MVP - Dynamics NAV)
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool