I have not seen this discussed and am wondering if it is even possible to do.
I have a table that I import data into several times a day. The import routine takes a few minutes to complete and, during this time, I don't want the table accessed by anyone, for any reason. This is because the table is used to run a report and, during the dataport import, the table is cleared and rebuilt from the imported data. Since this takes many minutes (due to the volume of data), anyone running a report against the incomplete table will not get good data. Also, I don't want anyone to start the import process if it is already in progress (multiple people have the ability to run the dataport).
I've tried to use the LOCKTABLE function, but it does not work: users can still run the report while the table is "locked".
I've also tried updating a record in a second table to indicate that the dataport is in progress, then checking the status of this record at the start of both the dataport and the report. This doesn't work either, because the first dataport, which is responsible for updating the status record, can be cancelled or hit an error, and then the status record never gets set back (i.e. it stays in "in process" status forever, blocking all further attempts to rerun the dataport).
What I need is a lock that is tied to the first dataport, is automatically released when that dataport finishes (for whatever reason), and is visible to other copies of the dataport and to the appropriate reports.
Any ideas?
Comments
I really don't like this type of logic, though. I'd take a close look at the business logic and see whether this is being done properly.
RIS Plus, LLC
MVP - Business Apps
2) You can change the transaction type on the reports to prevent dirty reads, so they do not read data that is not yet committed. By default, reports read over uncommitted data.
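In C/AL this can be set per report via the TransactionType property, or at runtime with CURRENTTRANSACTIONTYPE. A minimal sketch (which option is right depends on your NAV version and database server):

```
// Hedged sketch: move the report's read transaction away from the default,
// which permits dirty reads on SQL Server, to a consistent snapshot read.
// Place this early in the report, e.g. in the OnPreReport trigger.
CURRENTTRANSACTIONTYPE(TRANSACTIONTYPE::Snapshot);
```

With a snapshot-style transaction the report sees only committed data, so a half-rebuilt table from a running dataport is no longer visible to it.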
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
I already tried this and found that it does not work when the dataport ends unexpectedly because the status record never gets set back to "dataport is not running".
Without knowing the actual business requirement, though, we're all just spewing ideas, and most of them will have one problem or another.
RIS Plus, LLC
MVP - Business Apps
First you need a lock table:
User ID, Code20
Session ID, Integer
Login Date, Date
Login Time, Time
Plus a primary key.
To lock the resource, you first check the Session virtual table for the record stored in the lock table. If exactly the same session record exists (i.e. all four fields match), the resource is locked; if that session is your own, it is locked by you. Otherwise the holding session is dead, and you can take the lock yourself by copying the fields from your own session record.
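That check can be sketched in C/AL roughly as follows. "Import Lock" (and its table number) is a hypothetical single-record table holding the four fields above; Session is the standard virtual table (2000000009), whose "User ID", "Connection ID", "Login Date", "Login Time" and "My Session" fields are used to decide whether the recorded holder is still alive:

```
// Sketch only. "Import Lock" is a hypothetical table with a "Primary Key"
// code field (always '') plus the four fields listed above.
PROCEDURE TryAcquireLock() : Boolean
VAR
  ImportLock : Record "Import Lock";
  ActiveSession : Record Session;  // virtual table 2000000009
BEGIN
  ImportLock.LOCKTABLE;  // serialize the lock checks themselves
  IF NOT ImportLock.GET('') THEN BEGIN
    ImportLock.INIT;
    ImportLock.INSERT;
  END;

  // Is the recorded holder still a live session? All four fields must match.
  ActiveSession.SETRANGE("User ID",ImportLock."User ID");
  ActiveSession.SETRANGE("Connection ID",ImportLock."Session ID");
  ActiveSession.SETRANGE("Login Date",ImportLock."Login Date");
  ActiveSession.SETRANGE("Login Time",ImportLock."Login Time");
  IF ActiveSession.FINDFIRST THEN
    EXIT(ActiveSession."My Session");  // locked; TRUE only if we hold it

  // Holder is dead (or the lock was never taken): copy in our own session.
  ActiveSession.RESET;
  ActiveSession.SETRANGE("My Session",TRUE);
  ActiveSession.FINDFIRST;
  ImportLock."User ID" := ActiveSession."User ID";
  ImportLock."Session ID" := ActiveSession."Connection ID";
  ImportLock."Login Date" := ActiveSession."Login Date";
  ImportLock."Login Time" := ActiveSession."Login Time";
  ImportLock.MODIFY;
  EXIT(TRUE);
END;
```

The dataport calls this at the start and aborts if it returns FALSE; the reports can run the same check. Because the lock is keyed to a live session, a cancelled or crashed dataport releases it as soon as its session disappears.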
For the normal usage (a NAS that restarts or dies when it has an error) this is enough. For a NAV client you need an addition to trap the case where the job dies but the session doesn't.
Probably the best way to do this is to put a CONSISTENT(FALSE) against the lock table while it's locked; this way any commit will fail and roll back the lock. But if it were that easy you wouldn't need a manual lock.
If you need to do a commit but keep the lock, you can set another flag to say that the data is currently inconsistent but the user who has the lock is building it. I.e. set the flag, CONSISTENT(TRUE), commit, CONSISTENT(FALSE), clear the flag. The only problem with that is that a user who has the lock will keep it until their session is killed. (Perhaps by an idle timeout?)
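That commit-while-keeping-the-lock sequence might look like this in C/AL, where "Rebuild In Progress" is a hypothetical flag field on the lock table:

```
// Sketch of the sequence above, around a COMMIT that must happen while
// the rebuild (and therefore the lock) is still active.
ImportLock."Rebuild In Progress" := TRUE;  // hypothetical flag field
ImportLock.MODIFY;
ImportLock.CONSISTENT(TRUE);   // let this one commit through
COMMIT;
ImportLock.CONSISTENT(FALSE);  // from here on, any COMMIT fails and rolls back
ImportLock."Rebuild In Progress" := FALSE;
ImportLock.MODIFY;             // committed only when the job ends cleanly
```

If the job dies before the final clean commit, the CONSISTENT(FALSE) mark makes the client roll everything back, including the lock, which is exactly the self-clearing behaviour asked for.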
To do the shared read locks you just use the normal technique of grabbing the exclusive lock while you add to or check the list of shared locks; you have to check for dead sessions just as with the exclusive lock. You need the CONSISTENT trick with the read locks too, so if you must commit you have to clear your lock while running the commit, and add yet another field somewhere to give the data a generation number. Whenever the data is updated, the generation number is incremented; if your recreation of the read lock spots a new generation number, you must abort or face inconsistent data.
All in all manual locking is a pain and should be avoided if at all possible!
The simplest way of avoiding this is the CONSISTENT(FALSE) operation: that way you can prevent the commit of inconsistent data, and this should be sufficient on the Native database because of the "versioning".
TVision Technology Ltd
My idea would be to import and process the data into a temporary table WITHOUT locking anything.
Once that is finished, you can lock the real table, throw away all its records, and flush the temporary table into it.
But this is only useful when it is NOT the number of records that makes it so slow.
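A sketch of that two-phase import, assuming the reports read from a hypothetical "Report Data" table and that ImportIntoBuffer is a placeholder for the actual dataport/parsing work:

```
PROCEDURE RunImport()
VAR
  ReportData : Record "Report Data";
  TempReportData : TEMPORARY Record "Report Data";
BEGIN
  // Phase 1 (slow): build everything in the temp table; no locks are held,
  // so reports and other users are completely unaffected.
  ImportIntoBuffer(TempReportData);  // placeholder for the import work

  // Phase 2 (short): swap the contents inside one locked transaction.
  ReportData.LOCKTABLE;
  ReportData.DELETEALL;
  IF TempReportData.FINDSET THEN
    REPEAT
      ReportData := TempReportData;
      ReportData.INSERT;
    UNTIL TempReportData.NEXT = 0;
  COMMIT;  // readers are blocked only for the duration of phase 2
END;
```

The slow parsing happens against the in-memory copy, so the real table is empty-or-inconsistent only for the short insert loop at the end, and the database lock alone is enough to keep reports out during that window.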
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!