Large update routine performs well, then becomes extremely slow

HansSolo Member Posts: 15
hi all,
I have a large update routine that goes through the G/L Entry table and updates some global dimensions based on other fields in the table. The logic is relatively simple, but when I run the routine, it gets through about 300,000 records in 2 hours and then comes to a screeching halt. The next 200,000 records have taken about 18 hours so far. I have another 500,000 records to go, and I'm trying to figure out why it slows down so much. I've added some COMMIT commands to the code so that we don't use up all the empty space in the DB, but I'm at a loss as to why it basically stops. The machine's resources are nowhere near capacity.
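For context, a routine of the kind described usually looks something like this in C/AL (a minimal sketch; the source field names "Old Dimension 1/2 Code" and the commit interval of 1,000 records are illustrative assumptions, not the actual code):

```
// Sketch of a mass update over G/L Entry with periodic COMMITs.
// Field names and the commit interval are assumptions for illustration.
GLEntry.RESET;
Counter := 0;
IF GLEntry.FIND('-') THEN
  REPEAT
    // transfer the contents of 2 fields to 2 other fields...
    GLEntry."Global Dimension 1 Code" := GLEntry."Old Dimension 1 Code";
    GLEntry."Global Dimension 2 Code" := GLEntry."Old Dimension 2 Code";
    // ...and clear out the initial 2 fields
    GLEntry."Old Dimension 1 Code" := '';
    GLEntry."Old Dimension 2 Code" := '';
    GLEntry.MODIFY;
    Counter := Counter + 1;
    IF Counter MOD 1000 = 0 THEN
      COMMIT; // release the transaction periodically
  UNTIL GLEntry.NEXT = 0;
```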

Anyone have any ideas?

thanks.
Hans

Comments

  • DenSter Member Posts: 8,304
    I was going to suggest a COMMIT at strategic points in your code, but it looks like you are doing that already.

    Have you tried running the routine on the server machine itself, when there are no users? This may be related to a network problem. Hundreds of thousands of records is a lot of information to pump back and forth.
  • HansSolo Member Posts: 15
    Yes, it is running on a stand-alone machine, straight against the DB file. I'm extremely puzzled as to why it runs so fast and then stops.
  • DenSter Member Posts: 8,304
    Well, it may be as simple as the first 200K entries not having many dimension values to update, while from there on there are 10 of them for each entry.

    Make sure that your rec variables use the most efficient keys available. Think about maybe adding a key to speed up your code.

    Also, the order in which you set filter values makes a difference. Set the filters on the field that eliminates the most records first. If there is no key with the fields in that order, create one.
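    As an illustration of the key and filter-order advice (a sketch; "Source Type" and "Source No." are example fields, not taken from the actual routine):

    ```
    // Use a key whose fields match the filters, in the same order,
    // and filter on the most selective field first.
    // Add such a key in the table designer if it does not exist yet.
    GLEntry.RESET;
    GLEntry.SETCURRENTKEY("Source No.","Source Type");
    GLEntry.SETRANGE("Source No.",'C00010');  // eliminates the most records
    GLEntry.SETRANGE("Source Type",GLEntry."Source Type"::Customer);
    IF GLEntry.FIND('-') THEN
      REPEAT
        // ... process ...
      UNTIL GLEntry.NEXT = 0;
    ```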
  • HansSolo Member Posts: 15
    Yes, I'm aware of all that... it's a straightforward routine that runs through every entry without any filters and basically transfers the contents of 2 fields to 2 other fields and clears out the initial 2 fields. I was hoping there was something setup-related that I wasn't aware of, because the routine itself is very basic. :?

    thanks for the ideas though. :wink:
  • IngoRoth Member Posts: 6
    If you change the global dimensions, Navision must also update all the keys that contain the global dimension fields.
    If you can, try to disable all keys (except the primary key) for your operation.
    After the operation has finished successfully, enable the keys again.
    Ingo Roth
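    If the secondary keys have been assigned to a key group in the table designer (the name 'GLDIM' here is just a hypothetical example), they can also be switched off and on from C/AL around the run; a sketch:

    ```
    // Disable all keys in the (hypothetical) key group before the
    // mass update, run the update, then rebuild the keys afterwards.
    KEYGROUPDISABLE('GLDIM');
    RunTheUpdateRoutine; // placeholder for the actual update code
    KEYGROUPENABLE('GLDIM');
    ```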
  • DenSter Member Posts: 8,304
    I think I would try to do the routine in steps, and only do the entries for a certain date range, maybe do them one year at a time, or one quarter at a time.
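    Splitting the run by posting date could look like this (a sketch; the date range is an example):

    ```
    // Process one year at a time instead of the whole table in one run.
    GLEntry.RESET;
    GLEntry.SETRANGE("Posting Date",010104D,123104D); // e.g. only the year 2004
    IF GLEntry.FIND('-') THEN
      REPEAT
        // ... update the dimension fields as before ...
      UNTIL GLEntry.NEXT = 0;
    ```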
  • HansSolo Member Posts: 15
    That is our next approach... but we were hoping to just hit Run and come back in a few hours. One thing I notice is that slave.exe seems to be taking up a lot of memory... I'm wondering if it can't keep up with the routine for some reason.
  • kriki Member, Moderator Posts: 9,090
    You can try to increase the DB-cache.

    You are probably running out of commit cache.
    How does all of this work, and what is happening:
    At most 2/3 of your DB cache can be used as commit cache. In general, when you COMMIT, the transaction is committed in DB memory but NOT yet written to disk.
    When the DB has time, it writes the transactions in memory to disk. The problem is that you are committing faster than it can write to disk.
    Once a record has been in the commit cache for more than a minute, the server gives writing to disk a higher priority than before. (If a crash happens at this moment, all the committed transactions still in memory are lost!!!)
    If you run out of commit cache, then before a new transaction can be accepted, the DB first has to flush some transactions to disk to make room in memory. At this point the DB speed is roughly that of a DB without a commit cache.
    In table 2000000010 ("Database File"), the field "Writes in Queue" shows how many writes are waiting in the commit cache to be written to that database file.

    The following code can be used to keep the DB fully occupied, but without filling the commit cache and blocking other users.

    Call the function "PerformanceSleep()" just after each COMMIT in your code. The function can also be used on SQL Server without raising an error message, but there it does nothing.

    PerformanceIndication() : Integer
    // PerformanceIndication
    // gives an indication of the performance of the server
    // IMPORTANT : NOT usable with SQL-server
    // PARAMETERS
    //   RETURN-VALUE : indication of performance of the server
    //                  0 : the server is doing (almost) nothing
    //                  1 : no performance problems, but can become bad very fast
    //                  2 : no performance problems, but system is very busy
    //                  3 : performance problems very likely
    //   ex  : writing of thousands of records in multiple transactions
    //         every few records : do this test
    //         FOR i := 1 TO 100000 DO BEGIN
    //           write record;
    //           IF i MOD 200 = 0 THEN BEGIN
    //             COMMIT;
    //             CASE PerformanceIndication() OF
    //               0 : BEGIN END;
    //               1 : SLEEP(1000); // 1 second
    //               2 : SLEEP(5000); // 5 seconds
    //               ELSE BEGIN
    //                 j := 0;
    //                 WHILE j < 3 DO BEGIN // wait until performance is ok, or until 1/2 minute has passed
    //                   SLEEP(10000);
    //                   j := j + 1;
    //                   IF PerformanceIndication() < 3 THEN
    //                     j := 3;
    //                 END;
    //               END;
    //             END; // CASE
    //           END;
    //         END;
    
    // Local variables (assumed): LrecDatabaseFile : Record 2000000010 "Database File"; LintTestDiskAccess : Integer
    LintTestDiskAccess := 0;
    
    LrecDatabaseFile.RESET;
    LrecDatabaseFile.SETCURRENTKEY("No.");
    IF LrecDatabaseFile.FIND('-') THEN
      REPEAT
        IF (LrecDatabaseFile."Writes in Queue" >= 5000) OR
           (LrecDatabaseFile."Reads in Queue" >= 16) THEN BEGIN
          LintTestDiskAccess := 3;
        END
        ELSE IF (LrecDatabaseFile."Writes in Queue" >= 2000) OR
                (LrecDatabaseFile."Reads in Queue" >= 7) THEN BEGIN
          IF LintTestDiskAccess < 2 THEN LintTestDiskAccess := 2;
        END
        ELSE IF (LrecDatabaseFile."Writes in Queue" >= 500) OR
                (LrecDatabaseFile."Reads in Queue" >= 3) THEN BEGIN
          IF LintTestDiskAccess < 1 THEN LintTestDiskAccess := 1;
        END;
      UNTIL LrecDatabaseFile.NEXT = 0;
    
    EXIT(LintTestDiskAccess);
    
    PerformanceSleep()
    // PerformanceSleep //*** 019
    // IMPORTANT : ONLY usable on the native server (on SQL Server it does nothing)
    // Tests the performance; if the performance indication is > 0, it stays in this function in a sleep loop.
    // Every 5 seconds it tests whether the performance indication is back to 0;
    // if it is, or after it has waited for 1 minute, it returns to the calling program.
    
    // RECORDLEVELLOCKING is TRUE when running on SQL Server, so exit immediately there
    IF lrecObject.RECORDLEVELLOCKING THEN
      EXIT;
    
    lint := 0;
    WHILE PerformanceIndication() > 0 DO BEGIN
      SLEEP(5000);
      lint := lint + 1;
      IF lint >= 12 THEN
        EXIT;
    END;
    
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • ajhvdb Member Posts: 672
    Does this code really work? Have you tried it?
  • kriki Member, Moderator Posts: 9,090
    Yes, it really works.
    I have already used it on different versions of Navision, starting with 2.01, and it works.
  • ajhvdb Member Posts: 672
    I'm impressed, good thinking. So how much faster should Hans' example run?
  • kriki Member, Moderator Posts: 9,090
    No, it won't run faster.
    To make it run faster, he needs to do one or more of the following:
    - increase the DB cache, so the commit cache reaches its limit later
    - install faster disks, so the DB server can write the data to disk faster
    - add more disks, split the database over more database files, and put one file per disk, so the DB server can write the data to disk faster and the commit cache reaches its limit later
    - use RAID 1+0 (the striping especially is important), so the DB server can write the data to disk faster and the commit cache reaches its limit later

    This code is meant for running a long procedure that writes to the DB. That way you can run it whenever you want, without blocking the other clients when they start something.
  • ajhvdb Member Posts: 672
    So Hans' batch can run for 18 hours (and more) and NOT lock any table for more than a few seconds.
  • kriki Member, Moderator Posts: 9,090
    Exactly, but it largely depends on how many records he changes per COMMIT/PerformanceSleep loop and how long changing one record takes.

    If I understood correctly, he just takes a record, moves 2 fields to 2 other fields, clears the first 2, and saves the record.
    If he takes 20 records at a time, the users won't even notice a block, thanks to the commit cache (I did something like this on the Sales Line table and no user noticed it). If he does 10,000 records at a time, they will notice it.

    Something to consider: the fewer records you take at a time, the less the users will be blocked, BUT the longer the procedure will take, because each COMMIT has some overhead. So changing 1,000 records in 1,000 transactions is slower than changing 1,000 records in 1 transaction; on the other hand, the table is only ever locked for the processing of 1 record instead of 1,000.
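    Putting the pieces together, a commit-cache-friendly version of the loop could look like this (a sketch; the batch size of 20 is just the example from above):

    ```
    // Commit a small batch of records, then let the server catch up
    // before taking the table lock again.
    Counter := 0;
    IF GLEntry.FIND('-') THEN
      REPEAT
        // ... move the 2 fields, clear the originals, MODIFY ...
        Counter := Counter + 1;
        IF Counter MOD 20 = 0 THEN BEGIN
          COMMIT;            // releases the table lock
          PerformanceSleep;  // wait while the commit cache drains
        END;
      UNTIL GLEntry.NEXT = 0;
    COMMIT; // commit the final partial batch
    ```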