sp_cursorfetch, FETCH API CURSOR - how to handle this?

nav_student Member Posts: 175
edited 2011-10-09 in SQL Performance
I have a report on the Item Ledger Entry table that does a lot of calculation.

Monitoring it, I see these SQL statements taking about 10 minutes:
exec sp_cursorfetch 180150747,2,0,35
and
FETCH API_CURSOR0000000000002E57
I also detected in SQL Profiler a statement with this execution plan:
CPU - 0
Reads - 241
Writes - 0
Duration - 300

Clustered Index Insert - Cost 19%
Compute Scalar - Cost 0%
Nested Loop (Inner Join) - Cost 0%
Index Seek (Non Clustered) - Index $3 - Cost
Key Lookup (Clustered) - Cost 75%

What can I do?

Comments

  • rhpnt Member Posts: 688
    Take a look at Jörg Stryk's recent session at Techdays - Mibuso frontpage.
  • nav_student Member Posts: 175
    Thanks rhpnt. I saw Stryk's PowerPoint where he explains the workload of FINDSET, FINDFIRST and FIND(-) on SQL Server (through cursors).

    According to Microsoft (http://msdn.microsoft.com/en-us/library/dd355379.aspx), the general rules for using FINDSET are the following:
    • FINDSET(FALSE,FALSE) - Read-only. This uses no server cursors and the record set is read with a single server call.


    I have replaced the C/AL code in the REPEAT - UNTIL loops from FIND(-) and FINDFIRST to FINDSET, and the report runs much faster.

    I see in SQL Profiler the difference in the queries (like Stryk mentions in the PowerPoint):
    SELECT * FROM "databasename"."dbo"."CompanyName$Item Ledger Entry" WITH (READUNCOMMITTED)   WHERE (("Item No_"=@P1)) ORDER BY "Item No_","Entry Type","Variant Code","Drop Shipment","Location Code","Posting Date","Entry No_"
    
    SELECT TOP 2001 * FROM "databasename"."dbo"."CompanyName$Item Ledger Entry" WITH (READUNCOMMITTED)   WHERE (("Item No_"=@P1)) ORDER BY "Item No_","Entry Type","Variant Code","Drop Shipment","Location Code","Posting Date","Entry No_"
    
    where the value 2001 is the Record Set cache size (2000) plus one row.
  • ndbcs Member Posts: 31
    Duration is not in seconds but in micro- or milliseconds (see MSDN).
    241 reads also isn't that much.
    2000 is an unusually big value for the Record Set cache. The maximum value you can use in NAV 2009 R2 is 1000; the default is 50.

    Tobias
  • nav_student Member Posts: 175
    Thanks ndbcs.

    I don't know why, but the value 2000 was already set for the Record Set cache.

    If I try to change it, the following message appears: the maximum value allowed is 1000.

    I also detected in SQL Profiler a statement with these parameters:
    CPU - 62
    Reads - 5514
    Writes - 0
    Duration - 1728
    SELECT TOP 2001 * FROM "databasename"."dbo"."companyname$Item Ledger Entry" WITH (READUNCOMMITTED)   WHERE (("Item No_"=@P1)) ORDER BY "Item No_","Entry Type","Variant Code","Drop Shipment","Location Code","Posting Date","Entry No_"
    

    Don't you think (in this case) that the number of reads is very high?

    Thanks in advance.
  • kriki Member, Moderator Posts: 9,094
    I'll give you THE SQL Server answer: it depends.

    If there was only 1 record, then it is very high.
    If there were 1,000,000 records, then it is very low.

    So: how many records were in the filter?
    Regards,Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • ndbcs Member Posts: 31
    You filter on "Item No." and you have an index starting with "Item No." (I guess).
    So there is not much room for optimization.

    If you have thousands or millions of Item Ledger Entries for one single item, then 5,000 reads isn't much.

    Most of the time it isn't very useful to just look at the trace file in Profiler and pick a single statement.
    What you want to know is which statements have in sum produced the most reads, writes, duration etc.

    There are some free SQL trace analysis tools, like ClearTrace or Qure Analyzer, that can parse trace files and show you aggregates.
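    For illustration, a self-made aggregation along those lines (a sketch: the trace file path is a placeholder, and unlike ClearTrace it does not normalize parameter values):

    -- Sketch: aggregating a Profiler trace with the standard
    -- sys.fn_trace_gettable function; the path is a placeholder.
    SELECT CONVERT(NVARCHAR(4000), TextData) AS StatementText,
           COUNT(*) AS Executions,
           SUM(Reads) AS TotalReads,
           SUM(Duration) AS TotalDuration
    FROM sys.fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT)
    GROUP BY CONVERT(NVARCHAR(4000), TextData)
    ORDER BY TotalReads DESC;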


    Tobias
  • nav_student Member Posts: 175
    I solved the problem by using FINDSET.

    The report speeds up.


    When the FETCH API_CURSOR was still appearing, I measured with Qure Analyzer:
    Duration: 8.22 min
    CPU: 34.02 sec
    Reads: 760.57K
    Writes: 0
    Row Count: 147.79K
    Event Count: 4.19K


    Thanks for your answers.

    Best Regards.
  • stryk Member Posts: 645
    Great to have another example of how FINDSET can improve things by avoiding these "cursors" :wink:

    Another consideration:
    T32 "Item Ledger Entry" has - as many other ledger entry tables - a general problem. By default the "Clustered Index" is set to the "Primary Key", which is "Entry No.". Hence, the records are physically sorted by "Entry No.".
    But this means that records belonging to the same "Item No." may be spread over wide areas of that table! Thus, if you query filtering on "Item No.", SQL Server might need to read data from all over the table. The more records there are and the more they are "spread out", the higher the probability that SQL Server starts scanning the table, causing huge I/O etc.
    In some cases it is feasible to change this "Clustered Index", e.g. by adding a new key: Item No., Entry No. - no SQLIndex property, marked as "Clustered".
    In this case the records will be sorted by "Item No." first, hence records belonging to the same item are physically next to each other! So the range to read those records is dramatically smaller, reducing I/O and avoiding "scan" issues ...
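    For illustration, a minimal T-SQL sketch of what that clustered key amounts to on the SQL side (the index name is hypothetical; in NAV you would create the key in the table designer rather than by hand):

    -- Hypothetical sketch: the physical index behind a key
    -- "Item No.,Entry No." marked as Clustered. It assumes the old
    -- clustered PK on "Entry No_" has been recreated as nonclustered,
    -- since a table can only have one clustered index.
    CREATE UNIQUE CLUSTERED INDEX [$1]
    ON [dbo].[CompanyName$Item Ledger Entry] ([Item No_], [Entry No_]);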

    Have in mind that changing the CI is a huge workload on the system (all records have to be re-arranged)! And of course: this needs to be tested thoroughly first!
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • ndbcs Member Posts: 31
    I don't think it's a problem to have "Entry No." as the CI; on the contrary, I think it's in general the best CI you can have (unique, narrow, static, ever-increasing). (If they changed it to auto-increment it would be even better, but that's another story.)

    1. If you change to Item No., Entry No., both fields have to be duplicated in every secondary index (the CI is part of every secondary index). So every secondary index will use more disk/RAM space and produce more I/O when reading/writing that index.

    2. Page splits in the CI will occur. With just Entry No. as the CI we have an ever-increasing field, so we won't see page splits and we don't need to reindex the CI (Item Ledger Entries will rarely be deleted). With Item No., Entry No. there will be many page splits and we have to reindex a lot (see the fragmentation check sketched below). Item Ledger Entry is often one of the biggest tables in our DBs and a reindex takes a lot of time. And if you don't have a SQL Server Enterprise Edition, you have to reindex offline.

    3. I don't see the scanning problem. We have secondary indexes that can be seeked on "Item No.".

    We only avoid bookmark lookups when filtering on Item No.
    If we filter on Item No. plus some other fields, chances are good that SQL Server will use a secondary index plus bookmark lookups.

    There may be situations where all that overhead is justified, but they are very rare.
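    As an aside on point 2: the fragmentation those page splits cause can be watched with a standard DMV. A quick sketch (the table name just follows the example used earlier in this thread):

    -- Sketch: checking CI fragmentation caused by page splits
    -- (sys.dm_db_index_physical_stats is available from SQL Server 2005 on).
    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(
        DB_ID(),
        OBJECT_ID('dbo.CompanyName$Item Ledger Entry'),
        NULL, NULL, 'LIMITED');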


    Just my opinion ...

    Tobias
  • stryk Member Posts: 645
    You're absolutely right! It's all a matter of balance and "if this, then that, or maybe not" - that's why I said "test thoroughly". But keep in mind that your concerns basically affect 99% of all indexes in NAV, thus changing the CI on T32 is not such an unusual thing.
    ndbcs wrote:
    I don't think it's a problem to have "Entry No." as the CI; on the contrary, I think it's in general the best CI you can have (unique, narrow, static, ever-increasing). (If they changed it to auto-increment it would be even better, but that's another story.)
    Indeed it IS a problem which affects most of the ledger entry tables! Item Ledger Entry, G/L Entry, Customer Ledger Entry, etc. are all affected by the same problem - and from a physical perspective it is a problem which is responsible for many index scans (or other stupid QEPs). "Entry No." is actually the worst CI you could have - true, it has the highest selectivity - but data is almost never queried on that field. The primary criterion for querying data is - I daresay in 99% of all cases - something with "Item No.", thus arranging the data on the basis of this "primary query criterion" is a smart idea! And besides solving some I/O issues, this also has tremendous impact on some blocking situations: in SERIALIZABLE transactions (LOCKTABLE) range locks are usually established, and the wider that range, the more blocking potential ...
    Generally: as you said, having the CI on the PK is mostly a smart idea, but in some cases other considerations apply ... as so often: it depends ...
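    To illustrate the range-lock point with a hedged sketch (the exact hints NAV uses under LOCKTABLE are an assumption here, not taken from this thread):

    -- Assumed illustration: with LOCKTABLE, NAV's reads behave roughly like
    -- a serializable read, so SQL Server takes key-range locks covering the
    -- whole filtered range - the wider the range, the more potential
    -- blocking for other sessions.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
    SELECT * FROM [dbo].[CompanyName$Item Ledger Entry] WITH (UPDLOCK)
    WHERE [Item No_] = 'ITEM0001';  -- hypothetical item
    -- ... posting work would happen here ...
    COMMIT TRANSACTION;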
    ndbcs wrote:
    1. If you change to Item No., Entry No., both fields have to be duplicated in every secondary index (the CI is part of every secondary index). So every secondary index will use more disk/RAM space and produce more I/O when reading/writing that index.
    True. But actually we are talking about "peanuts" here ... In NAV there are far worse indexes ...
    ndbcs wrote:
    2. Page splits in the CI will occur. With just Entry No. as the CI we have an ever-increasing field, so we won't see page splits and we don't need to reindex the CI (Item Ledger Entries will rarely be deleted). With Item No., Entry No. there will be many page splits and we have to reindex a lot. Item Ledger Entry is often one of the biggest tables in our DBs and a reindex takes a lot of time. And if you don't have a SQL Server Enterprise Edition, you have to reindex offline.
    Again, also "Page Splitting" is daily business. Actually that's just a matter of adjusting the approriate Index-Fillfactor (if you have tools that could calculate this optimizerd FF 8) ) and the method you use to defragment/reindex (IMHO: the standard features suck, that's why I use optimized utilities).
    While a "Entry No." CI needs a FF of 100%, with changing you need something below, hence, the index will grow. But then again: size does not matter. Technically larger indexes require more effort in updating and maintaining them, but we're talking about microseconds here - something which might be acceptable compared to the potential benefit regarding read-performance and blocking.
    So whether the write performance/"experience" is indeed affected strongly depends on the underlying hardware resources.
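    For illustration, a minimal sketch of such a fill-factor adjustment (index name and value are hypothetical, not from this thread):

    -- Hypothetical sketch: rebuilding the changed CI with free space per
    -- page, leaving headroom for the mid-table inserts discussed above.
    ALTER INDEX [$1] ON [dbo].[CompanyName$Item Ledger Entry]
    REBUILD WITH (FILLFACTOR = 90);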
    ndbcs wrote:
    3. I don't see the scanning problem. We have secondary indexes that can be seeked on "Item No.".
    This also depends. The problem actually occurs with items that are posted dramatically more often than others: if the share of a certain "Item No." is remarkably greater than the others', this actually decreases the "selectivity" of that field ...
    If you have a problematic query on "Item No.", you'll indeed mostly see the QEP performing "Index Seeks" (rarely a "Scan") but causing thousands of "Reads", as the "Key Lookup" operation causes most of the effort ...
    ndbcs wrote:
    We only avoid bookmark lookups when filtering on Item No.
    If we filter on Item No. plus some other fields, chances are good that SQL Server will use a secondary index plus bookmark lookups.
    How do you avoid "Bookmark Lookups"? If NAV is firing a SELECT * you'll always have a key lookup, except if your index is covering - meaning in this case that it INCLUDES all (!) other fields ... Creating such an index on T32 is something I absolutely would NOT recommend ...
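    Just to show what "covering" would mean here - a hypothetical sketch, explicitly not a recommendation:

    -- Hypothetical sketch of a "covering" index for SELECT * queries
    -- filtered on "Item No_": it would have to INCLUDE every remaining
    -- column of T32, which is exactly why it is NOT recommended.
    CREATE NONCLUSTERED INDEX [ix_ILE_ItemNo_covering]
    ON [dbo].[CompanyName$Item Ledger Entry] ([Item No_])
    INCLUDE ([Posting Date], [Quantity] /* ...and all other columns... */);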
    ndbcs wrote:
    There may be situations where all that overhead is justified, but they are very rare.
    Not so rare in my experience - and I deal with this every day.

    To sum up:
    Changing a Clustered Index is something which needs to be discussed within the context of the current problem. There are advantages and disadvantages which have to be weighed against each other (I guess Tobias and I gave an example of such a discussion) - what's a solution for one system might be a problem for another; with NAV/SQL performance there's hardly anything like "IF THIS HAPPENS THEN DO THAT", it's mostly "IT DEPENDS" ...
    Thus, having all this in mind: just test to find out if this could help you!
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • ndbcs Member Posts: 31
    Jörg, thanks for your constructive reply.

    I see your points.
    If I find the time, I will do some tests to find out if they overrule my points :-)
    How do you avoid "Bookmark Lookups"? If NAV is firing a SELECT * you'll always have a key lookup, except if your index is covering - meaning in this case that it INCLUDES all (!) other fields ... Creating such an index on T32 is something I absolutely would NOT recommend ...

    What I would say is:
    We avoid "Bookmark Lookups" when changing CI to "Item No.", "Entry No.".
    So that's the one big advantage of changing the CI.


    Tobias
  • stryk Member Posts: 645
    Ah, now I understand! Yep, I agree ...
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool