Dear Experts,
In codeunit 1013 Job Jnl.-Post Batch I want to put some code after COMMIT to make sure that the previous posting can never be rolled back.
The COMMIT documentation says that it splits a write transaction:
COMMIT Function (Database)
BeginWriteTransactions
(C/AL Statements) // Transaction 1
COMMIT
(C/AL Statements) // Transaction 2
EndWriteTransactions
What am I missing?
JobLedgEntry.LOCKTABLE;
IF JobLedgEntry.FIND('+') THEN;
.
.
COMMIT;
// I want to put my code here, followed by another COMMIT. But COMMIT releases
// the lock, so when another batch is waiting to post, it locks this table and
// I receive a table-lock error here.
// To reproduce: run a batch with 50+ lines and start a second batch right
// after the first one; it will receive a table lock. Then run the second
// batch again before the first batch finishes, so that the timeout is long
// enough for the second batch to start posting. The second batch then runs,
// and the first batch receives a table-lock error with a rollback to the
// first COMMIT.
JobLedgEntry.Anything := TRUE;
JobLedgEntry.MODIFY;
COMMIT;
Comments
COMMIT releases all locks, yes. If you need to retain ownership of the resources you previously locked after COMMIT, you would need to do this with a semaphore variable... a record, for example. You must make sure that all code that wants to lock your resources does this, though.
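A minimal sketch of such a semaphore record, assuming a custom table (here called "Global Lock", with an empty Code primary key and a GUID field "Owner ID"; all names are illustrative, not part of the standard application):

```
// Sketch only: "Global Lock" is an assumed custom table.
GlobalLock.INIT;
GlobalLock."Primary Key" := '';          // single well-known key value
GlobalLock."Owner ID" := CREATEGUID;
IF GlobalLock.INSERT THEN BEGIN          // INSERT fails if another process holds the lock
  COMMIT;                                // the semaphore record survives the commit,
                                         // unlike a LOCKTABLE lock
  // ... do the work that must not be interfered with ...
  GlobalLock.DELETE;                     // release the semaphore
  COMMIT;
END;
```

This only works if every participant that wants to touch the protected resources checks the semaphore first.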
with best regards
Jens
We already found a workaround for our issue, but I still want to find a proper solution for this problem.
I have a use case now, so here it is. Retaining ownership after a commit is of course a matter of good conduct by all participants, so there is no magic bullet here.
My use case: The NAV2009 to NAV2015 upgrade code (part 2, "Data Upgrade" in the NAV2015 environment) allows for concurrent processing of several companies. That almost never works, because there is code that marks some objects for later deletion and modifies them. To avoid this, you need a record in the database that signals the "owned" state to all concurrent processes trying to run MarkTablesForDeletion(), so that it is executed in only one of the concurrent processes.
What we need:
1. A table. The primary key is an empty code, so "record exists" is enough to deny ownership to others. To make sure that our own process created it (and I have no idea what session ID a data upgrade has, or whether it is a full-featured headless C/SIDE session - I doubt it), we add a unique identifier: a GUID which we create in our process and which we can verify, because no other process should have it.
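Such a table could look like this in C/SIDE export format (the object number and all names are assumptions for illustration):

```
OBJECT Table 50000 Global Lock
{
  FIELDS
  {
    { 1 ; ; Primary Key ; Code10 }  // always '', so "record exists" denies ownership
    { 2 ; ; Owner ID    ; GUID   }  // verifiable marker of the creating process
  }
  KEYS
  {
    { ; Primary Key ; Clustered=Yes }
  }
}
```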
Additionally, we need a function that does the acquisition of the lock for us:
This is code that was written with the expectation that it can be interrupted at any time. Hence the checking stages:
- insert must work
- retrieve can get an empty GUID (somebody else has acquired and released the lock in between)
- our own GUID must be written and read back again
- repetitive tries, but not endlessly
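A hedged sketch of such an acquisition function, following the stages above (the "Global Lock" table, its fields, and the retry count are assumptions; the function deliberately uses no COMMIT):

```
LOCAL PROCEDURE AcquireGlobalLock() : Boolean
VAR
  GlobalLock : Record "Global Lock";
  MyGuid : GUID;
  Try : Integer;
BEGIN
  MyGuid := CREATEGUID;
  FOR Try := 1 TO 10 DO BEGIN            // repetitive tries, but not endlessly
    GlobalLock.INIT;
    GlobalLock."Primary Key" := '';
    GlobalLock."Owner ID" := MyGuid;
    IF GlobalLock.INSERT THEN;           // insert must work; failure is tolerated
    IF GlobalLock.GET('') THEN           // retrieve; the record may be missing or
                                         // carry another process's GUID
      IF GlobalLock."Owner ID" = MyGuid THEN
        EXIT(TRUE);                      // our own GUID read back: lock acquired
    SLEEP(1000);                         // back off before the next attempt
  END;
  EXIT(FALSE);                           // give up after the last try
END;
```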
How is this used:
Please note that this code does not use commits at all. You could also write a (not very different) version with commits, but that has the side effect of committing other data manipulations too, which is usually not desired.
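Usage could then look like this (a sketch; `AcquireGlobalLock` and `MarkTablesForDeletion` stand in for the routines described above):

```
// Only the process that wins the semaphore executes the one-time work.
IF AcquireGlobalLock THEN
  MarkTablesForDeletion;   // runs in exactly one of the concurrent processes
// No COMMIT anywhere, so no unrelated data manipulations are committed.
```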
with best regards
Jens
[edit] Removed a bug that explains some strangeness I've experienced: the GlobalLock record was left in the table even though it should have been deleted afterwards. It's better to leave it in there anyway, because of concurrency: you don't know which company has won the race. If it was a small one, it could be finished before the last thread even started, and you would end up doing the modifications twice. [/edit]
I had given up on parallel execution because of the errors, but had not analysed why.
I hope Microsoft will test a fix for these problems more exhaustively, since for very large databases the time savings could be significant.
I have seen other time duration complaints on dimension conversion which should be addressed as well.
So far my upgrades have been with small databases - 5GB or less, and time duration has not been a problem.
http://mibuso.com/blogs/davidmachanick/
I had the misfortune of having to do the upgrade run again... and found out that the proposed use case is effectively useless in this context.
Here's what I found out:
1. In the "Schema Synchronization / Upgrade" context, any insert/modify/delete/rename that leads to a concurrent change of the same record results in a runtime error. That happens regardless of using "IF Record.MODIFY THEN" statements; the process will stop.
2. Also, LOCKTABLE results in a runtime error if it fails to lock the table.
If the process must stay alive, these commands are out. The end result is that no locking control is available in this context.
But there's a simple solution in this case: for a parallel upgrade, only one process (company) should do the modifications on global table data and on the tables themselves. It can be checked whether the running process is the first company in the Company table... so, grant the right to modify global tables only to the first company, and it will work. As it finally did in my re-runs of the upgrade routine.
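The first-company check can be sketched like this (`MarkTablesForDeletion` stands in for the global-modification work described earlier; `Company` is the system table and COMPANYNAME returns the current company):

```
// Sketch: let only the first company perform global modifications.
Company.FINDFIRST;
IF Company.Name = COMPANYNAME THEN
  MarkTablesForDeletion;   // executed by exactly one of the parallel processes
```

The design choice here is that the "winner" is deterministic (the first company by sort order) rather than whoever wins a race, which sidesteps the locking restrictions of the upgrade context entirely.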
with best regards
Jens