hi all,
I have a large update routine that goes through the GL Entry table and updates some global dimensions based on other fields in the table. The logic is relatively simple, but when I run the routine, I get through about 300,000 records in 2 hours and then it comes to a screeching halt. The next 200,000 records have taken about 18 hours so far. I have another 500,000 records to go through and I'm trying to figure out why it slows down so much. I've added some COMMIT commands to the code so that we don't use up all the empty space in the database, but I'm at a loss as to why it basically stops. The machine's resources are nowhere near capacity.
Anyone have any ideas?
thanks.
Hans
Comments
Have you tried running the routine on the server machine itself, when there are no users? This may be related to a network problem. Hundreds of thousands of records is a lot of information to pump back and forth.
RIS Plus, LLC
Make sure that your record variables use the most efficient keys available, and think about adding a key to speed up your code.
Also, the order in which you set the filters makes a difference: filter first on the field that eliminates the most records. If there is no key with the fields in that order, create one.
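For illustration, here is a rough sketch of what that could look like in C/AL. The key, the filters and the variable names are assumptions for the example, not Hans's actual code:

// Sketch only - the key and the filters are assumptions.
// Variables (declared in C/SIDE): GLEntry : Record 17;
//   AccountNoFilter : Code[20]; StartDate, EndDate : Date;
GLEntry.RESET;
// Loop along an existing key whose leading field carries the most
// selective filter, instead of scanning along the primary key.
GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date");
GLEntry.SETRANGE("G/L Account No.",AccountNoFilter);
GLEntry.SETRANGE("Posting Date",StartDate,EndDate);
IF GLEntry.FIND('-') THEN
  REPEAT
    // ... update the global dimension fields here ...
    GLEntry.MODIFY;
  UNTIL GLEntry.NEXT = 0;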
RIS Plus, LLC
thanks for the ideas though.
The keys are the ones the Global Dimensions are in.
If you can, try to disable all keys (except the primary key) for your operation.
After the operation succeeds, enable the keys again.
RIS Plus, LLC
Probably you are running out of COMMIT-cache.
How all of this works and what is happening:
At most 2/3 of your DB-cache can be used as COMMIT-cache. In general, when you COMMIT, the transaction is committed in DB memory but NOT yet on disk.
When the DB has time, it writes the transactions in memory to disk. The problem is that you are committing faster than it can write to disk.
In general, if a record has been in the COMMIT-cache for more than a minute, the server gives more priority to writing to disk than before. (If a crash happens at this moment, all the committed transactions still in memory are lost!)
If you run out of COMMIT-cache, then before a new transaction can be written, the DB first has to write some transactions to disk to make room in memory. At this point the DB speed is roughly the speed of a DB without COMMIT-cache.
In table 2000000010, the field "Writes in Queue" shows how many writes are in the COMMIT-cache waiting to be written to this database file.
The following code can be used to keep the DB fully busy with your routine, but without filling the COMMIT-cache and blocking other users.
Use the function "PerformanceSleep()" just after each COMMIT in your code. The function can also be used on SQL without getting an error message, but there it will not do anything.
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
I have already used it on different versions of Navision, starting with 2.01, and it works.
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
To make it run faster, he needs to do one or more of the following:
- increase the DB-cache, so the COMMIT-cache reaches its limit later
- use faster disks, so the DB-server can write the data to disk faster
- add more disks, divide the database into more database files, and put one file per disk, so the DB-server can write the data to disk faster and the COMMIT-cache reaches its limit later
- use RAID 1+0 (the striping in particular is important), so the DB-server can write the data to disk faster and the COMMIT-cache reaches its limit later
This code is meant for running a long procedure that writes to the DB. That way, you can run it whenever you want without blocking the other clients when they start something.
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
If I understood correctly, he just takes a record, moves 2 fields to other fields, clears the first 2, and saves the record.
If he takes 20 records at a time, the users won't even notice a block, thanks to the COMMIT-cache (I did something like this on the Sales Line table and no user noticed it). If he does 10,000 records at a time, they will notice it.
Something to consider: the fewer records you take per transaction, the less the users will be blocked, BUT the longer the procedure will take, because each COMMIT has some overhead. So changing 1,000 records in 1,000 transactions is slower than changing 1,000 records in 1 transaction, but the table is only locked for the processing of 1 record at a time instead of 1,000.
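As an illustration of that batching, here is a rough sketch, not Hans's actual routine; the dimension mapping, the batch size of 20 and the variable names are assumptions:

// Sketch only - the dimension mapping and the batch size are assumptions.
// Variables (declared in C/SIDE): GLEntry : Record 17; Counter : Integer;
Counter := 0;
IF GLEntry.FIND('-') THEN
  REPEAT
    // ... move/clear the global dimension fields, for example: ...
    GLEntry."Global Dimension 1 Code" := GLEntry."Global Dimension 2 Code";
    GLEntry."Global Dimension 2 Code" := '';
    GLEntry.MODIFY;
    Counter := Counter + 1;
    IF Counter MOD 20 = 0 THEN BEGIN
      COMMIT;            // small batches keep the lock on G/L Entry short
      PerformanceSleep;  // wait if the COMMIT-cache is filling up
    END;
  UNTIL GLEntry.NEXT = 0;
COMMIT;  // commit the final partial batch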
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!