Upgrade 2.5/2.6 to 4.0

ondina Member Posts: 9
Hi

I need some help.

I have a database in 2.5/2.6 and I have upgraded the objects to 4.0.

Now I need to move the data from the old database to the new database.

I need to move 20,000,000 records that are in table 17.
So I created a temp table just like table 17 in the new database, then with a DTS I loaded some of the records into the temp table, and then with a codeunit I read the records from the temp table and inserted them into table 17.
I have spent 5 days inserting 4,000,000 records into the new database, and it is still running.

What I want to know is if there is anything that I can do to make the process run faster.

Thanks,

Dina

Comments

  • 2tje Member Posts: 80
    If you are upgrading the Navision way, then you (almost) don't have to transfer any data. You open the 2.6 database in a 4.0 client and import the 4.0 objects into this database.
    On the installation CD there should be a directory named \UPGTK\Doc with information about additional tasks, e.g. filling the dimension tables and other new features in 4.0.
  • Marije_Brummel Member, Moderators, Design Patterns Posts: 4,262
  • ondina Member Posts: 9
    Hi,

    No, I'm not upgrading the Navision way. I talked with someone at Navision, and for this special case they said not to do it.
  • 2tje Member Posts: 80
    Maybe you can try disabling all secondary keys and enabling them again after the import.
  • DenSter Member Posts: 8,307
    It's an all-out BAD idea to use DTS for your upgrade; that spells all kinds of disaster, especially if you're doing it for your G/L Entries. If someone at Microsoft is suggesting this then I guess they know what they are talking about, but you will have to get them directly involved onsite.

    What I can think of, though, is that you may have to streamline your code a bit and put some COMMIT statements here and there to free up memory. Are you running the process on the server directly or on a client machine? If you are running it on a client, you are pulling 20,000,000 records through the network to your client computer (I doubt you have enough RAM for that, so it's probably paging like crazy) and pushing 20,000,000 other ones back to the server.
  • ondina Member Posts: 9
    What I am doing is this:

    The DTS puts some of the records (not all 20,000,000) into table Temp17 (a table identical to table 17).

    Then I run a codeunit:

    // Copy every record from the staging table Temp17 into table 17
    Temp17.RESET;
    IF Temp17.FIND('-') THEN
      REPEAT
        T17 := Temp17;
        T17.INSERT(TRUE); // run the insert trigger
        COMMIT;           // commits after every single record
      UNTIL Temp17.NEXT = 0;

    When it finishes, I TRUNCATE Temp17 in SQL to clear out all the records, and then I load more records (4,000,000) with the DTS and run the codeunit again.


    I am going to try disabling the secondary keys and loading fewer records to see if it improves performance.
  • DenSter Member Posts: 8,307
    Are you running that process from your own computer or on the server directly?

    You may want to code commit points around that, so that you don't COMMIT after every insert but after, say, every 500 or every 1,000 records. COMMIT itself takes up processing power, and by only doing it every so many records you can streamline the process (see the sketch below).

    Disabling secondary keys may speed things up as well. Don't forget to turn them back on when you are done though :).
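
    A rough sketch of what that batched COMMIT could look like, reusing the loop from your codeunit (Counter is an assumed Integer variable and the batch size of 1,000 is arbitrary; untested):

    Counter := 0;
    Temp17.RESET;
    IF Temp17.FIND('-') THEN
      REPEAT
        T17 := Temp17;
        T17.INSERT(TRUE);
        Counter := Counter + 1;
        IF Counter MOD 1000 = 0 THEN
          COMMIT; // commit once per 1,000 inserts instead of after every record
      UNTIL Temp17.NEXT = 0;
    COMMIT; // commit whatever remains from the last batch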
  • ondina Member Posts: 9
    Thanks

    Disabling the secondary keys works perfectly. One week to insert 4,000,000 records before, one hour to insert 3,000,000 now.
    Let's see what happens when I turn them back on again. :D
  • 2tje Member Posts: 80
    Turning them back on shouldn't take that much time. The keys are created only once for all the records, instead of being updated for every record during the import. Tip: do the same when deleting a lot of records.
  • ondina Member Posts: 9
    I turned the keys in table 17 back on.
    Now I can't insert a record in table 17. It hangs; I have been waiting half an hour to insert one record. What is happening????????

    ](*,) ](*,) ](*,)