Blocking all access during dataport import

FlowerBiz
Member Posts: 34
I have not seen this discussed and am wondering if it is even possible to do.
I have a table that I import data into several times during the day. The import routine takes a few minutes to complete and, during this time, I don't want the table accessed by anyone, for any reason. This is because the table is used to run a report and, during the dataport import, the table is cleared and rebuilt from the imported data. Since this takes many minutes (due to the volume of data), anyone running a report on an incomplete table will not get good data. Also, I don't want anyone to start the import process if it is already in progress (multiple people have the ability to run the dataport).
I've tried to use the LOCKTABLE function but it does not work. Users can still run the report while the table is "locked".
I've tried updating a record in a second table to indicate that the dataport was in progress, and then checking the status of this record at the start of the dataport and the report. This doesn't work either, because the first dataport, which is responsible for updating the status record, can be cancelled or hit an error, and then the status record never gets set back (i.e. it stays in "in progress" status forever, thereby blocking all further attempts to rerun the dataport).
What I need is a function that is tied to the first dataport that automatically clears when the dataport finishes (for whatever reason) and communicates to other copies of the dataport and appropriate reports.
Any ideas?
Comments
You could put a couple of flags in some setup table that you check at the start of the dataport/report. One flag is called "report is running" and the other one "dataport is running". If the "dataport is running" flag is checked, you can't run the report, and vice versa. If the blocking flag is not checked, you check your own one and then continue processing, unchecking it at the end of the dataport/report. That way you can never run the report while the dataport is running, or run the dataport while someone is running the report. You could then even build logic to prevent the dataport from running more than once at the same time.
I really don't like this type of logic though; I think I'd try to take a close look at the business logic and see if this is done properly.
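A rough C/AL sketch of the flag check described above, assuming a hypothetical single-record "Import Setup" table with two boolean fields; every table and field name here is a placeholder:

  // At the start of the dataport (the report does the mirror-image check).
  ImportSetup.LOCKTABLE;
  ImportSetup.GET;
  IF ImportSetup."Dataport Is Running" OR ImportSetup."Report Is Running" THEN
    ERROR('The import or the report is already running. Please try again later.');
  ImportSetup."Dataport Is Running" := TRUE;
  ImportSetup.MODIFY;
  COMMIT;  // make the flag visible to other sessions

  // ... import the data ...

  // At the end of the dataport:
  ImportSetup.GET;
  ImportSetup."Dataport Is Running" := FALSE;
  ImportSetup.MODIFY;

As FlowerBiz notes further down, the weak point is that the flag stays set if the dataport is cancelled or hits an error before it reaches the last step.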
1) NAV DB or SQL?
2) You can change the transaction type on the reports to prevent dirty reads and thus avoid reading data that is not yet committed. By default, reports work over uncommitted data.
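For option 2, a minimal sketch (this only matters on the SQL Server option; Snapshot is just one transaction type that avoids dirty reads, and setting the report's TransactionType property achieves the same without code):

  // Report - OnPreReport() trigger
  // Read only committed data instead of the default dirty reads.
  CURRENTTRANSACTIONTYPE(TRANSACTIONTYPE::Snapshot);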
kine wrote:
1) NAV DB or SQL?
2) You can change the transaction type on the reports to prevent dirty reads [...]
DenSter wrote:
You could put a couple of flags in some setup table that you check at the start of the dataport/report. [...]
I already tried this and found that it does not work when the dataport ends unexpectedly, because the status record never gets set back to "dataport is not running".
FlowerBiz wrote:
I already tried this and found that it does not work when the dataport ends unexpectedly because the status record never gets set back to "dataport is not running".
DenSter wrote:
I really don't like this type of logic though, I think I'd try to take a close look at the business logic and see if this is done properly.
Without knowing the actual business requirement, though, we're all just spewing ideas, and most of them will have one problem or another.
I don't work with the native database much these days, but shouldn't the versioning handle this? If User B starts the report while User A is running the dataport, wouldn't User B get the last committed version of the data, i.e. the data prior to User A beginning the dataport?
There are no bugs - only undocumented features.
Sounds like what's wanted here is a manual exclusive lock, not too difficult but laborious.
First you need a lock table:
User ID, Code20
Session ID, Integer
Login Date, Date
Login Time, Time
Plus a primary key.
To lock the resource you first read the existing lock record and check it against the Session table. If exactly the same session still exists (i.e. all four fields match), the resource is locked; if that session is your own, it is locked by you. Otherwise you can take the lock yourself by copying the fields from your own session record into the lock record.
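A sketch of what that check could look like, assuming a hypothetical single-record "Dataport Lock" table with the four fields above, and the classic client's Session virtual table ("Connection ID", "User ID", "Login Date", "Login Time", "My Session"):

  // Returns TRUE if this session now holds the lock,
  // FALSE if another live session already holds it.
  PROCEDURE TryGetLock() : Boolean
  VAR
    DataportLock : Record "Dataport Lock";
    Session : Record Session;
  BEGIN
    DataportLock.LOCKTABLE;
    IF DataportLock.GET THEN BEGIN
      // Does the session stored in the lock record still exist?
      Session.SETRANGE("Connection ID",DataportLock."Session ID");
      Session.SETRANGE("User ID",DataportLock."User ID");
      Session.SETRANGE("Login Date",DataportLock."Login Date");
      Session.SETRANGE("Login Time",DataportLock."Login Time");
      IF Session.FINDFIRST THEN
        EXIT(Session."My Session");  // TRUE only if the lock is already ours
    END ELSE BEGIN
      DataportLock.INIT;
      DataportLock.INSERT;
    END;
    // No live session holds the lock: copy our own session details into it.
    Session.RESET;
    Session.SETRANGE("My Session",TRUE);
    Session.FINDFIRST;
    DataportLock."Session ID" := Session."Connection ID";
    DataportLock."User ID" := Session."User ID";
    DataportLock."Login Date" := Session."Login Date";
    DataportLock."Login Time" := Session."Login Time";
    DataportLock.MODIFY;
    EXIT(TRUE);
  END;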
For the normal usage (a NAS that restarts or dies when it has an error) this is enough. For a NAV client you need an addition to trap the case where the job dies but the session doesn't.
Probably the best way to do this is to put a CONSISTENT(FALSE) against the lock table while it's locked; that way any commit will fail and roll back the lock. But if it were that easy you wouldn't need a manual lock.
If you need to do a commit but keep the lock, you can set another flag to say that the data is currently inconsistent but the user who has the lock is building it. I.e. set the flag, CONSISTENT(TRUE), commit, CONSISTENT(FALSE), clear the flag. The only problem with that is that a user who has the lock will keep it until their session is killed. (Perhaps by an idle timeout?)
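In code, that commit-but-keep-the-lock sequence could look roughly like this ("Data Inconsistent" is an assumed flag field on the same hypothetical lock table):

  DataportLock."Data Inconsistent" := TRUE;
  DataportLock.MODIFY;
  DataportLock.CONSISTENT(TRUE);   // allow this one COMMIT to succeed
  COMMIT;
  DataportLock.CONSISTENT(FALSE);  // the new transaction is marked inconsistent again
  DataportLock."Data Inconsistent" := FALSE;
  DataportLock.MODIFY;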
To do the shared read locks you just use the normal technique of grabbing the exclusive lock while you add or check the list of shared locks; you have to check for dead sessions just like with the exclusive lock. You need to do the CONSISTENT trick with the read locks too, so if you must commit you have to clear your lock while running the commit, and add yet another field somewhere to give the data a generation number. Whenever the data is updated the generation number is incremented; if your recreation of the read lock spots a new generation number you must abort or face inconsistent data.
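The generation-number check at re-lock time could be as small as this ("Generation No." is an assumed field on the hypothetical lock table, and SavedGenerationNo a local variable holding the value seen when the read lock was first taken):

  DataportLock.GET;
  IF DataportLock."Generation No." <> SavedGenerationNo THEN
    ERROR('The data was rebuilt while the read lock was released; please rerun the report.');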
All in all manual locking is a pain and should be avoided if at all possible!
The simplest way of avoiding this is the CONSISTENT(FALSE) operation; that way you can prevent the commit of inconsistent data, and this should be sufficient on the native database because of the "versioning".
Robert de Bath
TVision Technology Ltd
Do you need minutes because there are so many records to import or because there is a lot of processing to do?
My idea would be to import and process the data and then write it to a temptable WITHOUT locking anything.
Once this is finished, you can lock the table, throw away all its records and flush the temptable into the real table.
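A rough two-phase sketch of that idea (the record names, local variables and the ReadNextLine helper are only placeholders for whatever the dataport actually reads and processes):

  // Phase 1: read and process everything into a temporary record; no locks held.
  // TempImportLine is a record variable with the Temporary property set to Yes.
  EntryNo := 0;
  WHILE ReadNextLine(Line) DO BEGIN
    EntryNo := EntryNo + 1;
    TempImportLine.INIT;
    TempImportLine."Entry No." := EntryNo;
    TempImportLine.Description := Line;
    TempImportLine.INSERT;
  END;

  // Phase 2: only now lock the real table, clear it and flush the buffer into it.
  ImportLine.LOCKTABLE;
  ImportLine.DELETEALL;
  IF TempImportLine.FINDSET THEN
    REPEAT
      ImportLine := TempImportLine;
      ImportLine.INSERT;
    UNTIL TempImportLine.NEXT = 0;
  // Readers are only blocked for the duration of phase 2.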
But this is only useful when it is NOT the number of records that makes it so slow.
Regards,
Alain Krikilion
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!