Blocking all access during dataport import

FlowerBiz Member Posts: 34
I have not seen this discussed and am wondering if it is even possible to do.

I have a table that I import data into several times during the day. The import routine takes a few minutes to complete and, during this time, I don't want the table accessed by anyone, for any reason. This is because the table is used to run a report and, during the dataport import, the table is cleared and rebuilt from the imported data. Since this takes many minutes (due to the volume of data), anyone running a report on an incomplete table will not get good data. Also, I don't want anyone to start the import process if it is already in progress (multiple people have the ability to run the dataport).

I've tried to use the LOCKTABLE function but it does not work. Users can still run the report while the table is "locked".

I've tried updating a record in a second table to indicate that the dataport was in process, then checking the status of this record at the start of the dataport and the report. This doesn't work either, because the first dataport, which is responsible for updating the status record, can be cancelled or hit an error, and the status record never gets set back (i.e. it stays at "in process" forever, blocking all further attempts to rerun the dataport).

What I need is a mechanism tied to the first dataport that clears automatically when the dataport ends (for whatever reason) and signals its state to other copies of the dataport and to the relevant reports.

Any ideas?

Comments

  • DenSter Member Posts: 8,304
    You could put a couple of flags in some setup table that you check at the start of the dataport/report. One flag is called "report is running" and the other one "dataport is running". If the "dataport is running" flag is checked, you can't run the report, and vice versa. If either flag is not checked, you check the right one and then continue processing, unchecking it at the end of the dataport/report. That way you can never run the report while the dataport is running, or run the dataport while someone is running the report. You could then even build logic to prevent the dataport from running more than once at the same time.
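    A minimal C/AL sketch of that flag check (the "Import Setup" table and its field names are invented here, not standard NAV objects):

      PROCEDURE TryStartDataport() : Boolean;
      VAR
        ImportSetup : Record "Import Setup";  // invented one-record setup table
      BEGIN
        ImportSetup.LOCKTABLE;                // serialize access to the flags themselves
        ImportSetup.GET;
        IF ImportSetup."Dataport Is Running" OR ImportSetup."Report Is Running" THEN
          EXIT(FALSE);                        // someone else is busy; don't start
        ImportSetup."Dataport Is Running" := TRUE;
        ImportSetup.MODIFY;
        COMMIT;                               // publish the flag to other sessions
        EXIT(TRUE);
      END;

      // At the end of the dataport (e.g. OnPostDataport), reverse it:
      //   ImportSetup.GET;
      //   ImportSetup."Dataport Is Running" := FALSE;
      //   ImportSetup.MODIFY;
      //   COMMIT;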

    I really don't like this type of logic though; I think I'd take a close look at the business logic and see if this is done properly.
  • kine Member Posts: 12,562
    1) NAV DB or SQL?
    2) You can change the transaction type on the reports to prevent dirty reads, so that they do not read data that has not been committed. Reports work over uncommitted data by default.
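    For illustration, something along these lines should do it (a sketch, assuming the classic TransactionType report property; on SQL Server, Snapshot reads only committed data):

      // Either set the report property TransactionType = Snapshot in the
      // report designer, or set it from code, e.g. in OnInitReport:
      CurrReport.TRANSACTIONTYPE := TRANSACTIONTYPE::Snapshot;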
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • FlowerBiz Member Posts: 34
    kine wrote:
    1) NAV DB or SQL?
    2) You can change the transaction type on the reports to prevent dirty reads, so that they do not read data that has not been committed. Reports work over uncommitted data by default.
    It is the native Navision database. I don't know how to prevent "dirty reads" in a report, but that will not prevent someone else from starting another instance of the dataport while the first one is still running. I would have thought that the LOCKTABLE at the beginning of the dataport would prevent more than one instance of the dataport, but it does not.
    DenSter wrote:
    You could put a couple of flags in some setup table that you check at the start of the dataport/report. One flag is called "report is running" and the other one "dataport is running". If the "dataport is running" flag is checked, you can't run the report, and vice versa. If either flag is not checked, you check the right one and then continue processing, unchecking it at the end of the dataport/report. That way you can never run the report while the dataport is running, or run the dataport while someone is running the report. You could then even build logic to prevent the dataport from running more than once at the same time.
    I already tried this and found that it does not work when the dataport ends unexpectedly because the status record never gets set back to "dataport is not running".
  • DenSter Member Posts: 8,304
    FlowerBiz wrote:
    I already tried this and found that it does not work when the dataport ends unexpectedly because the status record never gets set back to "dataport is not running".
    Hence my comment:
    DenSter wrote:
    I really don't like this type of logic though; I think I'd take a close look at the business logic and see if this is done properly.
    Logic like that is very artificial and usually presents more problems than it solves. I'd personally take a closer look at the process itself and question the way those dataports are run. It sounds to me like this dataport should only be run by one person or process, perhaps an instance of NAS, which would prevent people from stepping on each other's processes.

    Without knowing the actual business requirement, though, we're all just spewing ideas, and most of them will have one problem or another :mrgreen:
  • bbrown Member Posts: 3,268
    I don't work with the native database much these days, but shouldn't the versioning handle this? If User B starts the report while User A is running the dataport, wouldn't User B get the last committed version of the data, i.e. the data prior to User A beginning the dataport?
    There are no bugs - only undocumented features.
  • rdebath Member Posts: 383
    Sounds like what's wanted here is a manual exclusive lock, not too difficult but laborious.

    First you need a lock table:
    User ID, Code20
    Session ID, Integer
    Login Date, Date
    Login Time, Time
    Plus a primary key.

    To take the lock, you first check the existing lock record against the Session virtual table. If exactly the same record exists there (i.e. all four fields match), the resource is locked; if that session is your own session, it's locked by you. Otherwise the owning session is dead and you can take the lock yourself by copying the fields from your own session record.
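    A hedged C/AL sketch of that acquisition step against the Session virtual table (the "Import Lock" table, its "Primary Key" field, and the 'IMPORT' key value are invented names):

      PROCEDURE TryGetLock() : Boolean;
      VAR
        ImportLock : Record "Import Lock";    // the custom lock table described above
        Session : Record Session;             // virtual table of active sessions
      BEGIN
        ImportLock.LOCKTABLE;                 // serialize the check-and-take itself
        IF ImportLock.GET('IMPORT') THEN BEGIN
          // Does the recorded owner still correspond to a live session?
          Session.SETRANGE("User ID", ImportLock."User ID");
          Session.SETRANGE("Connection ID", ImportLock."Session ID");
          Session.SETRANGE("Login Date", ImportLock."Login Date");
          Session.SETRANGE("Login Time", ImportLock."Login Time");
          IF Session.FINDFIRST THEN
            EXIT(Session."My Session");       // TRUE only if we already hold it
          ImportLock.DELETE;                  // stale lock left by a dead session
        END;
        // Take the lock by copying the fields from our own session record.
        Session.RESET;
        Session.SETRANGE("My Session", TRUE);
        Session.FINDFIRST;
        ImportLock.INIT;
        ImportLock."Primary Key" := 'IMPORT';
        ImportLock."User ID" := Session."User ID";
        ImportLock."Session ID" := Session."Connection ID";
        ImportLock."Login Date" := Session."Login Date";
        ImportLock."Login Time" := Session."Login Time";
        ImportLock.INSERT;
        COMMIT;                               // publish the lock to other sessions
        EXIT(TRUE);
      END;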

    For the normal usage (a NAS that restarts or dies when it has an error) this is enough. For a NAV client you need an addition to trap the case where the job dies but the session doesn't.

    Probably the best way to do this is to put a CONSISTENT(FALSE) against the lock table while it's locked; this way any commit will fail and roll back the lock. But if it were that easy, you wouldn't need a manual lock.

    If you need to do a commit but keep the lock, you can set another flag to say that the data is currently inconsistent but the user who has the lock is building it. I.e. set the flag, CONSISTENT(TRUE), commit, CONSISTENT(FALSE), clear the flag. The only problem with that is that a user who has the lock will keep it until their session is killed (perhaps by an idle timeout?).
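    Spelled out in C/AL, that sequence might look like this (ImportLock is the lock record from the sketch above; "Building Data" is an invented Boolean flag):

      ImportLock."Building Data" := TRUE;     // 1. set the flag: data is mid-rebuild
      ImportLock.MODIFY;
      ImportLock.CONSISTENT(TRUE);            // 2. declare the lock table consistent...
      COMMIT;                                 // 3. ...so this one commit is allowed
      ImportLock.CONSISTENT(FALSE);           // 4. re-arm: later commits fail and roll back
      ImportLock."Building Data" := FALSE;    // 5. clear the flag again
      ImportLock.MODIFY;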

    To do shared read locks, you use the normal technique of grabbing the exclusive lock while you add to or check the list of shared locks; you have to check for dead sessions just as with the exclusive lock. You need to do the CONSISTENT trick with the read locks too, so if you must commit, you have to clear your lock while running the commit. That also means adding yet another field somewhere to give the data a generation number: whenever the data is updated, the generation number is incremented, and if the recreation of your read lock spots a new generation number, you must abort or face inconsistent data.
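    As a rough sketch of that generation check around a commit (the "Data Generation" field and the two read-lock helpers are invented here):

      SavedGeneration := ImportLock."Data Generation";
      ClearMyReadLock;                        // invented helper: drop our shared lock
      COMMIT;
      AddMyReadLock;                          // invented helper: re-register it
      ImportLock.GET('IMPORT');
      IF ImportLock."Data Generation" <> SavedGeneration THEN
        // A writer rebuilt the data while we were unlocked: abort rather
        // than mix rows from two different generations.
        ERROR('Data changed during the commit; please rerun the report.');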

    All in all manual locking is a pain and should be avoided if at all possible!

    The simplest way of avoiding all this is the CONSISTENT(FALSE) operation: that way you can prevent the commit of inconsistent data, and on the native database this should be sufficient because of the "versioning".
  • kriki Member, Moderator Posts: 9,110
    Do you need minutes because there are so many records to import, or because there is a lot of processing to do?
    My idea would be to import and process the data and then write it to a temptable WITHOUT locking anything.
    Once this is finished, you can lock the real table, throw away all its records, and flush the temptable into the real table.

    But this is only useful when it is NOT the number of records that makes it so slow.
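    A short sketch of that flush step ("Import Data" stands in for the real table; TmpImportData is the same record type declared with Temporary = Yes):

      // Phase 1 (slow, no locks held): parse and process everything into TmpImportData.
      // Phase 2 (fast, locked): swap the contents in one short transaction.
      ImportData.LOCKTABLE;
      ImportData.DELETEALL;
      IF TmpImportData.FIND('-') THEN
        REPEAT
          ImportData := TmpImportData;        // copy all fields at once
          ImportData.INSERT;
        UNTIL TmpImportData.NEXT = 0;
      COMMIT;                                 // readers only ever see a complete table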
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!

