SETCURRENTKEY - better performance

Angelo Member Posts: 180
Hi Masters,

I want to SETRANGE/SETFILTER on a Navision table, and I want to set the current key first. Suppose the table has a secondary key A,B,C,D,E.

My code:
tbl.SETCURRENTKEY(A,B,C,D);
tbl.SETRANGE(A,'aa');
tbl.SETRANGE(B,'bb');
tbl.SETRANGE(D,'dd');

Will I still get good performance if I select the key like this?

In another case, my code is:
tbl.SETCURRENTKEY(A,B,C,D);
tbl.SETRANGE(B,'bb');
tbl.SETRANGE(D,'dd');

Will performance still be good in this case?


Please advise.

rgds,
Angelo

Comments

  • Alex_Chow Member Posts: 5,063
    Yep. You'll still get good performance in this case, although probably not as good as in your first scenario.
  • kriki Member, Moderator Posts: 9,110
    Performance for your code depends on the data.
    1)
    1.A) Let's say you have 10 records in your table: you will ALWAYS get good performance.
    1.B) Let's say you have 1,000,000 records in your table. If, with your filters on A and B,
    1.B.1) you remain with 10 records to scan, you will have perfect performance.
    1.B.2) you remain with 1,000 records to scan, you will have quite good performance.
    1.B.3) you still have 500,000 records to scan, you will have bad performance, but still better than using the primary key, which would probably scan all 1,000,000 records.

    2)
    2.A) Let's say you have 10 records in your table: you will ALWAYS get good performance.
    2.B) Let's say you have 1,000,000 records in your table. If, with your filters,
    2.B.1) you select 500,000 records, you will have good performance (you need half of the records, so even scanning the whole table doesn't change a lot).
    2.B.2) you select 1 record out of 10,000, you will have quite bad performance, but still better than using the primary key. I would think about creating a key on B and/or D if they are selective (meaning that after filtering on the field you remain with only a small part of the table, at most a few thousand records to scan for the next filter); in that case you have a quite good index.
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
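kriki's scan-count argument can be sketched numerically. The toy Python model below (all row counts and selectivities are illustrative assumptions, not NAV measurements) just multiplies the table size by the fraction of rows that survive the filters on the chosen key's leading fields:

```python
# Toy model of kriki's point: with a key whose LEADING fields match your
# filters, the server only scans the rows left after those filters; with a
# key that matches nothing, it scans (close to) the whole table.
# All numbers here are invented for illustration.

def rows_scanned(total_rows, leading_filter_selectivity):
    """Rows the server must examine.

    leading_filter_selectivity: fraction of rows surviving the filters on the
    key's leading fields (1.0 = no usable filter, i.e. a full scan).
    """
    return int(total_rows * leading_filter_selectivity)

total = 1_000_000
print(rows_scanned(total, 10 / total))    # case 1.B.1: ~10 rows left -> very fast
print(rows_scanned(total, 1000 / total))  # case 1.B.2: ~1,000 rows -> still fine
print(rows_scanned(total, 0.5))           # case 1.B.3 / 2.B.1: 500,000 rows -> slow
print(rows_scanned(total, 1.0))           # no matching prefix at all: full scan
```

The point of the sketch: the key only helps to the extent that its first fields absorb your filters; everything left over is scanned row by row.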


  • Angelo Member Posts: 180
    Kriki,

    thx a lot for your explanation. :)

    for no 2, do you mean filter on B and D, right? You wrote filter on A and B in no 2 as well. I think making a new key every time we want a matching filter is not a good idea, especially on ledger tables, but how about creating a new key on document tables (sales/purchase header and lines) and on master tables, which have a lot of records?


    rgds,
    Angelo
  • kriki Member, Moderator Posts: 9,110
    Angelo wrote:
    for no 2, do you mean filter on B and D, right? You wrote filter on A and B in no 2 as well. I think making a new key every time we want a matching filter is not a good idea, especially on ledger tables, but how about creating a new key on document tables (sales/purchase header and lines) and on master tables, which have a lot of records?
    The labels 1.A, 1.B, 2.A, 2.B do NOT refer to your fields; they are just numbered points under cases 1 and 2.

    -Creating new keys on entry tables is most of the time not a good idea, because they need to be updated often.
    -Creating a key on a master table (like Item) is generally not such a problem, because those tables do not change so often (for most customers at least).
    -Document headers change more often than master tables but a lot less than entry tables, so the cost of maintaining an extra key is not so big.
    -Document lines change often (also during posting), but less than entry tables. In general it is still better to avoid creating new indexes on them.

    Also: in general you don't need a new key for fast searching if you can find an existing key whose first fields are used by your query and are quite selective.
    If you need a key only for sorting (especially for reports), on a SQL database you can create the key and mark it as not maintained in SQL. On a native Navision database, it is better to write the report so that it first reads the data in a fast way, stores it in temptables, and then reads the temptables in the order you need to print.
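kriki's temptable trick can be sketched outside C/AL. Here is a minimal Python analogy (the field names and values are made up): read the records with whatever existing key is fast, buffer them, then sort the buffer in the order the report needs, so the table itself never needs a new maintained key just for printing order:

```python
# Python analogy of the temptable pattern for a native Navision DB:
# 1) read with a FAST existing key, 2) copy into an in-memory buffer
# (the "temptable"), 3) sort the buffer in the report's order.
# Field names and values are invented for illustration.

records = [  # pretend result of a fast read on an existing key
    {"no": "30000", "amount": 50},
    {"no": "10000", "amount": 200},
    {"no": "20000", "amount": 75},
]

# "temptable": a sorted in-memory copy, ordered by amount descending
temp = sorted(records, key=lambda r: r["amount"], reverse=True)

for r in temp:
    print(r["no"], r["amount"])
```

The design point is that sorting happens once, in memory, on the already-filtered result set, instead of forcing the database to maintain an index for every report ordering.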


  • Angelo Member Posts: 180
    Most often I need an appropriate key for searching. The data is quite large and performance is not very good. You are right that adding a new index on a ledger table is not good; that's why I try to find keys in standard NAV that match my filters. The problem is that the standard NAV keys do not all match my filters, as in my earlier example (fields A, B, C and D): only one or two of the filter fields match the key, as you saw. I want to understand the performance impact, because the system still feels slow to me. :-k
  • kriki Member, Moderator Posts: 9,110
    Angelo wrote:
    ...I want to know the performance because I feel the system still slow to run. :-k
    There can be a lot of reasons.
    -What DB are you using (native NAV or SQL)?
    -What is the size of the DB?
    -What RAID level?
    -How much memory?
    -...


  • Angelo Member Posts: 180
    The DB size is 28,327,104 KB (about 27 GB), on SQL Server 2000.
    RAID 1 for data, RAID 0+1 for the log. 6 GB RAM.
  • navisi0ne Member Posts: 36
    On SQL Server, the SETCURRENTKEY function does little to improve the performance of your FIND: it only translates to an ORDER BY clause, and it is SQL Server's optimizer that chooses which index to use, not your SETCURRENTKEY. As Kriki wrote, you can optimize your indexes and use the most selective fields as filters; there is a lot of discussion on that in this forum. Also take a look at these webcasts:
    http://www.mibuso.com/dlinfo.asp?FileID=808
    http://www.mibuso.com/dlinfo.asp?FileID=809
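navisi0ne's point can be illustrated with a rough sketch. The hypothetical helper below builds only the *shape* of the statement NAV sends to SQL Server (the real translation by NAV's SQL driver differs in detail): the SETRANGE filters end up in the WHERE clause, while SETCURRENTKEY becomes nothing more than the ORDER BY, and the optimizer still picks the index on its own:

```python
# Illustrative sketch (NOT the exact text NAV emits): how Angelo's C/AL calls
#   tbl.SETCURRENTKEY(A,B,C,D);
#   tbl.SETRANGE(A,'aa'); tbl.SETRANGE(B,'bb'); tbl.SETRANGE(D,'dd');
# map onto a SQL statement. SETRANGE -> WHERE, SETCURRENTKEY -> ORDER BY only.

def to_sql(table, filters, current_key):
    """Build the rough shape of the statement: filters is a list of
    (field, value) pairs from SETRANGE, current_key the SETCURRENTKEY fields."""
    where = " AND ".join(f"[{f}]='{v}'" for f, v in filters)
    order = ",".join(f"[{f}]" for f in current_key)
    return f"SELECT * FROM [{table}] WHERE {where} ORDER BY {order}"

sql = to_sql("MyTable", [("A", "aa"), ("B", "bb"), ("D", "dd")], ["A", "B", "C", "D"])
print(sql)
```

Nothing in the ORDER BY forces SQL Server to use the index on A,B,C,D; it costs the WHERE clause against all available indexes and chooses for itself, which is why index design and filter selectivity matter more than the SETCURRENTKEY call.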
  • kriki Member, Moderator Posts: 9,110
    Angelo wrote:
    The DB size is 28,327,104 KB (about 27 GB), on SQL Server 2000.
    RAID 1 for data, RAID 0+1 for the log. 6 GB RAM.

    RAID 0+1 for the log? That means at least 4 disks for the log.

    It would be better to have 4 disks in RAID 10 for the DB and 2 disks in RAID 1 for the log.

    Is SQL Server using all the memory?

