
NAV2013R2 import options for large amounts of data

spider1269 Member Posts: 68
edited 2015-01-06 in NAV Three Tier
Here's my situation. I've tried upgrading a 5.0 SP1 database to NAV 2013 R2, but the conversion fails due to the size of some of the tables. I've discussed this in other posts, so I won't go into detail about it here. Also, I've learned in the last few weeks that upper management limits me to 12 hours of downtime, and there's no way the conversion would finish in that amount of time.

Option 2 is to start with a clean database and re-implement. There are many reasons I would prefer to do this anyway. The main one is that it could be done in 12 hours, since we could have most of the database ready to go and import PTD data to get everything in sync. The issue I'm looking at is that upper management would like 2 years of history, and I need a way to import that into the database, so I'm looking at options for importing large amounts of data. 2 years of history for us ends up being hundreds of millions of records once I count G/L transactions, inventory transactions, sales invoices, purchase invoices, etc. Of course, posting the transactions once they're in NAV is another issue (100M+ G/L transactions alone)! :shock:

I've looked at text file imports, but there is a 2GB limit on text files, so that won't be an option (I estimated it would take 125 files just to import the G/L transactions). I'm trying an XML import, but I'm wondering if there is a better way to do this when dealing with large amounts of data. Time won't be an issue here, since we can import the history ahead of going live.

I already have XMLports for all the master data (customers, vendors, items, etc.), so those aren't an issue since we only have hundreds of thousands of those records.

Comments

  • davmac1 Member Posts: 1,283
    Microsoft uses SQL statements to speed up the dimension upgrade.
    Have you looked to see if any of the upgrade processes can be replaced with SQL statements?
    If any of the slow upgrade processes are modifying or inserting keys on an individual record basis, you could disable the keys (except for the primary) and then re-enable them when the upgrade is finished (a rough T-SQL sketch follows at the end of this comment).

    I saw one installation where G/L ledger entries were loaded directly using SQL, which is much faster, but creates a mess. I would not recommend this method.
    But you could use individual field updates in some cases.
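
    Something along these lines could do it (an untested sketch: the table name is a placeholder for your own company prefix and table, and the index names are read from sys.indexes, so verify everything against your own database first):

    -- Disable every non-clustered index on one big table before the slow step,
    -- then rebuild them afterwards. The clustered primary-key index stays active.
    DECLARE @tbl sysname = N'[dbo].[CRONUS$Ledger Entry Dimension]';
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON ' + @tbl + N' DISABLE;' + CHAR(10)
    FROM sys.indexes i
    WHERE i.object_id = OBJECT_ID(@tbl) AND i.type_desc = 'NONCLUSTERED';
    EXEC sp_executesql @sql;

    -- ... run the slow upgrade step here ...

    -- Rebuild (re-enable) the same indexes when the step is done.
    SET @sql = N'';
    SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON ' + @tbl + N' REBUILD;' + CHAR(10)
    FROM sys.indexes i
    WHERE i.object_id = OBJECT_ID(@tbl) AND i.type_desc = 'NONCLUSTERED';
    EXEC sp_executesql @sql;

    Inside NAV the supported way to get the same effect is to set MaintainSQLIndex to No on the secondary keys and switch it back on afterwards.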
  • spider1269 Member Posts: 68
    David,

    We've tried disabling all the keys (except the primary), adding RAM, and running on SSDs. We always fail at the Ledger Entry Dimension table, which has over 300M records. Microsoft said there wasn't much that could be done except to let the conversion run to completion.

    I'm looking at importing G/L entries into the SQL journal tables directly and then posting from there. Given the large number of records, I'm seeing it as my only option at this point.
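
    For what it's worth, a rough sketch of that direction (everything here is an assumption to be checked against the real system: the staging table dbo.GLHistoryStaging, the file path, the CRONUS company prefix, the GENERAL/HISTORY template and batch names, and the abbreviated column list of the Gen. Journal Line table):

    -- Load the flat file into a SQL staging table first, then fill a journal batch that
    -- NAV can post. NAV's SQL column naming turns "." into "_" ("Account No." -> [Account No_]).
    BULK INSERT dbo.GLHistoryStaging
    FROM 'D:\Import\GLHistory.csv'
    WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n', FIRSTROW = 2);

    -- Only a handful of columns are shown; the real table has many more NOT NULL
    -- columns that also have to be supplied.
    INSERT INTO [CRONUS$Gen_ Journal Line]
           ([Journal Template Name], [Journal Batch Name], [Line No_],
            [Account Type], [Account No_], [Posting Date], [Document No_],
            [Description], [Amount])
    SELECT 'GENERAL', 'HISTORY',
           10000 * ROW_NUMBER() OVER (ORDER BY s.PostingDate, s.DocumentNo),
           0,                                      -- 0 = Account Type "G/L Account"
           s.GLAccountNo, s.PostingDate, s.DocumentNo, s.Description, s.Amount
    FROM   dbo.GLHistoryStaging s;                 -- in practice, slice this into batches you can post

    The posting itself would still be done from NAV afterwards, journal batch by journal batch.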
  • bbrown Member Posts: 3,268
    What sort of failure did you have? Any error messages?

    Your problem has grabbed my interest as I will have a similar situation in the coming months. The database currently has over 1.9 Billion (yes with a B) entries in the Ledger Entry Dimension table.
    There are no bugs - only undocumented features.
  • ppavuk Member Posts: 334
    1.9 Billion? Mate, I'll just leave this comment to subscribe to this.
  • spider1269 Member Posts: 68
    We're getting a "locked by another user" error, even though the database is in single-user mode and the only user logged in is Administrator. We've tried running a trace, but the process fails while it's inserting records into the temp Ledger Entry Dimension table, and the trace shows nothing of another Admin instance accessing the database. It's been very frustrating, to say the least.

    I've read about several other instances of the same error with large databases and from what I've seen there isn't a resolution yet.

    NOTE: If we delete most of the records in the Ledger Entry Dimension table (leaving 10-20M) it then completes successfully.
  • bbrown Member Posts: 3,268
    That 1.9 billion works out to an average of about 3 per ledger entry. The most any entry has is 5, so no, this is not over-dimensioned. The new approach of dimension sets should be a big help, as there are probably only around 50 unique combinations in the mix.

    Unfortunately I've not done one of these conversions, so I don't have any advice for you. But it does sound like your database is not really in single-user mode.
    There are no bugs - only undocumented features.
  • ppavuk Member Posts: 334
    NOTE: If we delete most of the records in the Ledger Entry Dimension table (leaving 10-20M) it then completes successfully.

    What if you just comment out the dimension conversion code, complete the upgrade, and then run the dimension conversion in chunks of, say, 1M records or so? Of course, you'd have to write your own process based on the standard tool, but it may resolve your issue.
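
    Whether you drive that from C/AL or straight from SQL, the batching pattern is the same. Purely as an illustration (the company prefix and the per-chunk procedure ConvertDimensionChunk are invented; the conversion logic itself would still be your own code):

    -- Walk the old Ledger Entry Dimension table in 1M-entry slices so the custom
    -- conversion only ever touches a manageable range at a time.
    DECLARE @From BIGINT = 0, @To BIGINT, @ChunkSize BIGINT = 1000000, @Max BIGINT;

    SELECT @Max = MAX([Entry No_]) FROM [CRONUS$Ledger Entry Dimension];

    WHILE @From <= @Max
    BEGIN
        SET @To = @From + @ChunkSize - 1;

        -- EXEC dbo.ConvertDimensionChunk @From, @To;   -- your own conversion for this slice

        SET @From = @To + 1;
    END;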
  • jglathe Member Posts: 639
    Hi there,

    sounds interesting, and like a bug... probably a race condition or a deadlock. I had strange effects while doing some testing with a table that had a GUID as its primary key. I fired up several instances of the client (from different PCs) that all inserted new records into the table at top speed. Before inserting, the GUID was checked for uniqueness with a FlowField on the same table, and a COMMIT was done after every 100,000 or so (later 10,000) inserted records. This ran for quite a while, but after half an hour or so all of the clients terminated one after another (the code was "if not record.insert then exit" or something like it). These could have been GUID collisions (unlikely, and I didn't find one, but that's what the test was about) or some other hidden issue in NAV. A single client doing this on its own also died after a few 100,000 new records, and the intervals shortened the more records were in the table.
    Results: no failures below around 100M records, ramping up to about 1 failure every 20,000 records at around 140M records.
    Now, a GUID is probably an expensive primary key, so it might produce issues at lower record counts than usual. But it does look like a real issue.
    With that in mind, did you try to make the insert fail-safer? For example like this:

    i := 0;
    WHILE (NOT Rec.INSERT) AND (i < MaxRetries) DO BEGIN
      i := i + 1;
      COMMIT;   // if allowable - but hey, you're alone and there is nothing to lose, right?
      SLEEP(1); // alternatively
    END;

    If this is an issue that is buried somewhere in the database access logic and that only occurs when doing these large inserts, it might help. At least it looks like it's worth a try.

    with best regards

    Jens
  • kriki Member, Moderator Posts: 9,090
    ppavuk wrote:
    NOTE: If we delete most of the records in the Ledger Entry Dimension table (leaving 10-20M) it then completes successfully.

    What if you just comment out the dimension conversion code, complete the upgrade, and then run the dimension conversion in chunks of, say, 1M records or so? Of course, you'd have to write your own process based on the standard tool, but it may resolve your issue.
    I would do something like this.

    Or already start doing it BEFORE the upgrade:
    You can already add an extra field to the G/L Entries and start converting the older G/L Entries to the new dimension system. You can write your own procedure for it, because it is a lot easier when you already know how many and which dimensions you use. This is something you can do ahead of time to save time during the upgrade.
    When doing the real conversion, you can create the dimension combinations you already have and let the remaining tables be converted using those combinations.
    The G/L Entries you have already converted, you must handle yourself: with a T-SQL command, you can update the real "Dimension Set ID" field from your new field in which you already calculated the Dimension Set to use (a sketch follows below the table layout).

    The table you could create might look something like this:
    "Dimension Set ID" : int (primary key)
    "Dimension 1"
    "Dimension 2"
    "Dimension 3"
    "Dimension 4"
    "Dimension 5"
    "Dimension 6"
    "Dimension 7"
    "Dimension 8"
    "Dimension ABC"
    "Dimension DEF"
    "Dimension GHI"
    ...
    With an extra index on "Dimension 1", "Dimension 2", ...

    This structure makes it easy and fast to find an existing record for a dimension set in your G/L Entry (you can include the whole thing for other tables as well to gain some speed during the upgrade). By doing everything before the real upgrade, you can completely skip the dimension upgrade of standard NAV and do your own, which is faster because it is NOT generic and everything is already calculated.
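
    If it helps, the final update could look roughly like this (assuming the pre-calculated value sits in a custom field I'll call "New Dimension Set ID" on the G/L Entry table of company CRONUS; both names are placeholders). Doing it in slices keeps the transaction log manageable:

    -- Copy the pre-calculated set ID into the real field, 1M rows per transaction.
    DECLARE @Rows INT = 1;

    WHILE @Rows > 0
    BEGIN
        UPDATE TOP (1000000) g
        SET    g.[Dimension Set ID] = g.[New Dimension Set ID]
        FROM   [CRONUS$G_L Entry] g
        WHERE  g.[Dimension Set ID] = 0
          AND  g.[New Dimension Set ID] <> 0;

        SET @Rows = @@ROWCOUNT;
    END;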
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • davmac1 Member Posts: 1,283
    NAV 2015 seems to have some performance improvements in the conversion and adds the option of running it synchronously or asynchronously.

    Also, since the dimension upgrade uses SQL, have you tried an SSD for the log file and maximizing RAM?
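
    To make both suggestions concrete, something like this (the memory figure, database name, logical log file name and paths are only examples to be replaced with your own values):

    -- Give SQL Server as much RAM as the box can spare (value is in MB).
    EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 122880;  RECONFIGURE;

    -- Move the transaction log to the SSD: repoint the file, take the database offline,
    -- copy the .ldf to the new path, then bring the database back online.
    ALTER DATABASE [NAV2013R2_Upgrade]
        MODIFY FILE (NAME = NAV2013R2_Upgrade_Log, FILENAME = 'S:\SQLLogs\NAV2013R2_Upgrade_Log.ldf');
    ALTER DATABASE [NAV2013R2_Upgrade] SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- (copy the physical .ldf file to S:\SQLLogs\ here)
    ALTER DATABASE [NAV2013R2_Upgrade] SET ONLINE;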