Activity Log Entry

Angelo Member Posts: 180
edited 2008-02-26 in SQL Performance
I'm testing duration using the Activity Log entry. As a sample, I use the Sales Header table.
Total records = 52418 rows.

My code:
ActivityLog.Description := 'Loop 5';
ActivityLog.Start;

SH.SETFILTER(SH."No.",'<%1',xxxx);
IF SH.FINDSET THEN
  REPEAT
    ActivityLog."No. of Operations" := ActivityLog."No. of Operations" + 1;
  UNTIL SH.NEXT = 0;

ActivityLog.Stop;

Testing 1:
Keys = Doc Type,No *** clustered = Yes
Duration = 8900 ms

Testing 2:
Keys = Doc Type,No *** clustered = No
New keys = No,Doc Type *** clustered = Yes
Duration = 9300 ms

Testing 3:
Keys = Doc Type,No *** clustered = Yes *** SQLIndex = No,Doc Type
Duration = 17000 ms

The theory says that testing 2 should be the best: the statistics for that key show low density (high selectivity), and even without SETCURRENTKEY, SQL should pick that key itself as the most selective one. So why is the duration in my testing 2 higher than in testing 1?
Another theory says that using SQLIndex will boost performance, so why is my testing 3 the worst? ](*,)

Any explanation for this experiment?

Comments

  • kinekine Member Posts: 12,562
    It is because "<" is used. Since it means "take everything from the beginning up to my value", SQL will sometimes use a scan instead of a seek because it will be faster... You need to calculate the "cost" not just for your value but for everything that is less than the entered value... and SQL can do that in many ways...
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
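The scan-vs-seek point above can be illustrated with a toy cost model in Python (a sketch only: the 52418-row count comes from the test above, but the per-row costs and the decision rule are invented round numbers, not SQL Server's actual costing):

```python
# Toy cost comparison between an index seek and a clustered scan.
# Assumption: a non-clustered index seek pays one random key lookup per
# matching row, while a scan pays a cheaper sequential read per row.

TOTAL_ROWS = 52418        # Sales Header row count from the test above
SEQ_READ_COST = 1.0       # relative cost of reading one row during a scan
KEY_LOOKUP_COST = 4.0     # relative cost of one random lookup via the index

def cheaper_plan(matching_rows):
    """Return the plan with the lower estimated cost for this filter."""
    seek_cost = matching_rows * KEY_LOOKUP_COST
    scan_cost = TOTAL_ROWS * SEQ_READ_COST
    return 'index seek' if seek_cost < scan_cost else 'clustered scan'

print(cheaper_plan(1000))    # narrow filter -> index seek
print(cheaper_plan(40000))   # wide "<" filter -> clustered scan
```

With a "<" filter that covers most of the table, the estimated seek cost (one lookup per matching row) exceeds the cost of a single sequential scan, which is why the optimizer may ignore the non-clustered index.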
  • Angelo Member Posts: 180
    So how can I prove all the indexing theories that I've read in the Navision tuning paper (Patrice Dupont), webcasts (Hynek), the SQL Performance forum, etc.? I have used Client Monitor, SQL Profiler, and the Activity Log entry, but the results are not satisfying and cannot confirm the theory about index tuning for best performance.
    I have run the tests many times, changing the indexes by trial and error, and the results are not what the theory predicts. :-k

    I need advice from a Navision master experienced in index tuning... [-o<
  • kriki Member, Moderator Posts: 9,086
    [Topic moved from Navision forum to SQL Performance forum]
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • kinekine Member Posts: 12,562
    Angelo wrote:
    So how can I prove all the indexing theories that I've read in the Navision tuning paper (Patrice Dupont), webcasts (Hynek), the SQL Performance forum, etc.? I have used Client Monitor, SQL Profiler, and the Activity Log entry, but the results are not satisfying and cannot confirm the theory about index tuning for best performance.
    I have run the tests many times, changing the indexes by trial and error, and the results are not what the theory predicts. :-k

    I need advice from a Navision master experienced in index tuning... [-o<

    For testing you selected the wrong example. The effect is easiest to see when filtering for one exact value, not for a range as you did. With a range there is much more to consider, and the SQL planning engine will weigh many factors with different priorities to create the execution plan.
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • Angelo Member Posts: 180
    Wrong example? :shock:
    Which example should I use? Can you give me some code for an example and a test? My example is meant to test large data volumes: my company's data is big enough to slow things down, so I want to find out which setup gives the best performance.

    rgds,
    angelo
  • kinekine Member Posts: 12,562
    I mean that if you want to play with the indexes, measure the performance, and make decisions based on that, you need a simple example - like SH.SETFILTER(SH."No.",'%1',xxxx) without the "<". This will show you whether SQL has a good index for reading the data. If you use "<", MS SQL will behave in a different way... but to understand this you need good knowledge of how it all works; you need to see behind the curtain.

    Example: suppose the "No." field has these values (a "compressed" description - the number of codes beginning with each character):

    125x A*
    542x B*
    10x C*
    988x D*
    451x E*
    etc.

    and there are 2500 records in the table. If you filter for something like "<EXXXXX", SQL may calculate that it is better to scan the table, because there is an 84% probability that each record falls inside the filter. It will not use the non-clustered index beginning with the "No." field. But if you filter "<BXXXXX", the probability is different, and SQL may well use the non-clustered index...

    This is just an example...

    And do not forget to repeat the measurement several times and calculate the average, because a single test means nothing. You also need the same conditions for all tests ("warming up" the cache, etc.).
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
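The letter-distribution example above can be sketched as a small selectivity calculation (the counts and the 2500-row total are from the post; the 30% tipping point is a made-up threshold, and rows in the partial "E" range are ignored, so the percentages are approximate):

```python
# Hypothetical selectivity estimate for a "<" filter on the "No." field.
# counts: how many codes begin with each character (from the example above).
counts = {'A': 125, 'B': 542, 'C': 10, 'D': 988, 'E': 451}
TOTAL_ROWS = 2500  # table size from the example ("etc." rows included)

def selectivity(bound_char):
    """Fraction of rows whose "No." starts with a character below the bound."""
    matching = sum(n for ch, n in counts.items() if ch < bound_char)
    return matching / TOTAL_ROWS

# Made-up tipping point: above roughly 30% selectivity, one lookup per
# matching row via the non-clustered index is assumed to cost more than
# a single scan of the table.
SCAN_THRESHOLD = 0.30

for bound in ('B', 'E'):
    sel = selectivity(bound)
    plan = 'clustered scan' if sel > SCAN_THRESHOLD else 'index seek'
    print(f"filter < '{bound}XXXXX': selectivity {sel:.0%} -> {plan}")
```

The "<BXXXXX" filter matches only a few percent of the rows, so the seek wins; the "<EXXXXX" filter matches most of the table, so the optimizer prefers the scan, exactly the asymmetry described above.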