Report coding to avoid long-running table locks - use COMMIT?

rsaritzky Member Posts: 469
Hi:

I have a couple of processing-only reports. One of them creates purchase orders (both Purchase Header and Purchase Line records) by reading a table of imported records. The volume is high (500+ purchase orders created at a time), so the report runs for a while.

While the report is running, other users cannot post because the "Purchase Line table is in use."

The report is simple. It reads another table and creates Purchase Header/Line records based on the contents of the record just read, then repeats.

Am I correct that if I insert a COMMIT statement at the end of the OnAfterGetRecord trigger (where the INSERTs into the Purchase Header and Purchase Line tables occur), NAV will "release" the table lock after each insert? Or, as I have read in other threads, must I move the INSERT code to a separate codeunit that is called each time?

Similarly, I have a report that goes through Vendor Ledger Entry records and fixes 1099-related data. While the report is running, users cannot post.

Thanks

Ron

Comments

  • kapamarou Member Posts: 1,152
    Yes, a COMMIT would release the resources. I have done this many times in combination with SLEEP. I have a report that creates and posts thousands of sales orders and runs at the same time as users are posting.

    Just remember that a COMMIT must enclose a complete logical transaction.

    So, for example: create the header, create the lines, create the dimensions, etc., then modify your source table to note that the data was processed, and then COMMIT.
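
    A minimal C/AL sketch of that pattern, assuming a hypothetical staging table with "Vendor No.", "Item No.", Quantity and Processed fields (adjust the names to your actual import table):

        // OnAfterGetRecord() of the staging-table data item
        // PurchHeader : Record 38 (Purchase Header); PurchLine : Record 39 (Purchase Line)
        PurchHeader.INIT;
        PurchHeader."Document Type" := PurchHeader."Document Type"::Order;
        PurchHeader."No." := '';  // blank No. lets the number series assign one on INSERT(TRUE)
        PurchHeader.INSERT(TRUE);
        PurchHeader.VALIDATE("Buy-from Vendor No.","Vendor No.");
        PurchHeader.MODIFY(TRUE);

        PurchLine.INIT;
        PurchLine."Document Type" := PurchHeader."Document Type";
        PurchLine."Document No." := PurchHeader."No.";
        PurchLine."Line No." := 10000;
        PurchLine.INSERT(TRUE);
        PurchLine.VALIDATE(Type,PurchLine.Type::Item);
        PurchLine.VALIDATE("No.","Item No.");
        PurchLine.VALIDATE(Quantity,Quantity);
        PurchLine.MODIFY(TRUE);

        Processed := TRUE;  // flag the staging record as done
        MODIFY;

        COMMIT;  // the logical transaction is complete, so it is safe to release the locks here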
  • rsaritzky Member Posts: 469
    kapamarou wrote:
    Yes, a COMMIT would release the resources ... then modify your source table to note that the data was processed, and then COMMIT.

    Exactly the answer I was looking for. For me that's easy - it's the last statement in the trigger now. Thanks for the reply.


    Ron
  • DenSter Member Posts: 8,305
    It's not a panacea though; it can be a tricky balance. If you're posting 500 orders and doing 500 COMMITs, that might actually cause problems. Sure, it releases the lock, but the COMMIT itself uses resources, and in a process that posts 500 orders it locks the same tables again right away for the next order. If it doesn't give you the performance you're looking for, try to COMMIT only every 25 or 50 orders instead of after every single one.
  • rsaritzky Member Posts: 469
    DenSter wrote:
    ... If it doesn't give you the performance you're looking for, try to COMMIT only every 25 or 50 orders instead of after every single one.

    Good suggestion! My primary issue was with letting users post - they were getting "table locked by another user..." My idea is that if I COMMIT after each transaction, it gives other users time to get their own lock. COMMITting every 50 transactions, then SLEEPing for a few seconds (to allow other users to get in), sounds like the best combination of techniques.
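
    A rough sketch of that combination, assuming a Counter integer variable (the batch size and sleep time are just illustrative; tune them to your environment):

        // inside the processing loop, after each complete logical transaction
        Counter := Counter + 1;
        IF Counter MOD 50 = 0 THEN BEGIN
          COMMIT;       // release the locks accumulated so far
          SLEEP(2000);  // wait 2 seconds so waiting sessions can acquire their locks
        END;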

    Thanks to all!

    Ron
  • DenSter Member Posts: 8,305
    Locking issues can't be completely eliminated. The best defence against locking is not programming in COMMIT points; it's speeding up your transactions. Your first order of business should be to put proper database maintenance in place (daily statistics updates, weekly reindexing) - you would be surprised how much effect reindexing can have. Then do a full index tuning project and get rid of unused indexes (turn off MaintainSQLIndex; don't disable the keys).

    Oh, and that is assuming that your server is configured properly and your database files are separated the right way. You might already know all about this; I don't know how well you know this kind of thing :)