Better for SQL ???

nverma Member Posts: 396
We created a dataport to import some data into Job Journal Lines. However, we are having major performance issues: importing about 50k lines takes about 2-3 hours. Our client recently bought a top-of-the-line new server.

I have made sure that we are using the correct keys and got rid of unnecessary VALIDATE calls.
One of the fields that gets validated is Posting Date, which in turn validates Quantity, which calls a whole bunch of codeunits. In one of the function calls I noticed lots of nested IF statements, like the syntax below.
For example:
IF XXXX THEN BEGIN
  IF YYYY THEN BEGIN
    IF ZZZ THEN BEGIN
      MESSAGE(....);
    END;
  END;
END;

If I were to change this to
IF XXXX AND YYYY AND ZZZ THEN
Would this make a major difference in performance?

Answers

  • ara3n Member Posts: 9,256
    I don't think it will make a difference.

    I think something you can look at is how many times the quantity field is validated.

    It should be validated only once.

    But as you mentioned, Posting Date validation validates Quantity, and I'm sure you are validating Quantity as well.

    And there might be other fields that validate Quantity too. So find out how many times each field (or at least the fields that take the longest) gets validated, and make sure each one runs only once per record.
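
    A quick way to count this (a throwaway debug sketch, not necessarily how you would do it in production; QtyValidateCount is a hypothetical global Integer):

    // Put this at the top of the trigger you suspect, e.g. Quantity - OnValidate,
    // run the import for a while, and the message shows how often it really fires.
    QtyValidateCount := QtyValidateCount + 1;
    IF QtyValidateCount MOD 1000 = 0 THEN
      MESSAGE('Quantity has been validated %1 times so far.',QtyValidateCount);

    Remove the instrumentation again once you know which chains fire more than once per record.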
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
  • nverma Member Posts: 396
    I looked through the code and Quantity is one of the Dataport Fields, so it gets validated once from there.

    In my code, I validate "Posting Date", which validates:

    VALIDATE("Document Date","Posting Date");
    IF "Currency Code" <> '' THEN BEGIN
      UpdateCurrencyFactor;
      UpdateAllAmounts;
    END;

    and "Unit of Measure Code", which validates:

    GetGLSetup;
    CASE Type OF
      Type::Item:
        BEGIN
          Item.GET("No.");
          "Qty. per Unit of Measure" :=
            UOMMgt.GetQtyPerUnitOfMeasure(Item,"Unit of Measure Code");
        END;
      Type::Resource:
        BEGIN
          IF CurrFieldNo <> FIELDNO("Work Type Code") THEN
            IF "Work Type Code" <> '' THEN BEGIN
              WorkType.GET("Work Type Code");
              IF WorkType."Unit of Measure Code" <> '' THEN
                TESTFIELD("Unit of Measure Code",WorkType."Unit of Measure Code");
            END ELSE
              TESTFIELD("Work Type Code",'');
          IF "Unit of Measure Code" = '' THEN BEGIN
            Resource.GET("No.");
            "Unit of Measure Code" := Resource."Base Unit of Measure";
          END;
          ResUnitofMeasure.GET("No.","Unit of Measure Code");
          "Qty. per Unit of Measure" := ResUnitofMeasure."Qty. per Unit of Measure";
          "Quantity (Base)" := Quantity * "Qty. per Unit of Measure";
        END;
      Type::"G/L Account":
        BEGIN
          "Qty. per Unit of Measure" := 1;
        END;
    END;
    VALIDATE(Quantity);
    I cannot just remove the VALIDATE on these fields, since they validate other fields as well.

    So how can I get past this problem?
  • FDickschat Member Posts: 380
    A dataport is about 20 times slower than programming the same import in a report with simple FILE.OPEN / FILE.READ functions. Alternatively you could use an InStream, which gives the same speed.

    We measured this with a simple import of 1,000,000 records into the item ledger entries (all fields), once with a dataport and once with a report using the FILE functions. No validation of fields, of course.

    So if you do some validations you will not get the full 1:20 advantage, but it will still be huge.

    You could also split the task in two parts to measure what really eats up the time (see the sketch below):
    First, import the data without validating it (into a staging table).
    As a second step, create your journal lines from the staging table.
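
    A minimal sketch of the first step (untested; the table "Import Staging", its fields and the file path are made up for illustration):

    // Processing-only report, OnPreReport trigger.
    // Variables: ImportFile : File; Line : Text[1024];
    //            StagingLine : Record "Import Staging" (hypothetical table).
    ImportFile.TEXTMODE(TRUE);                // text mode: READ returns one line
    ImportFile.OPEN('C:\Import\journal.csv');
    WHILE ImportFile.POS < ImportFile.LEN DO BEGIN
      ImportFile.READ(Line);
      StagingLine.INIT;
      StagingLine."Entry No." := StagingLine."Entry No." + 1;
      StagingLine."Raw Line" := Line;         // split into fields in step 2
      StagingLine.INSERT;
    END;
    ImportFile.CLOSE;

    The second step then loops over the staging table and does the VALIDATE calls, so you can time both parts separately.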
    Frank Dickschat
    FD Consulting
  • jlesage Member Posts: 3
    I had a similar problem.
    I split my files into smaller ones.
    E.g. 5 files of 10K lines instead of one of 50K.
    It was a lot faster.
  • Miklos_Hollender Member Posts: 1,598
    How exactly would you do that? Suppose you have a CSV file: if you want to read every line into a variable, you hit the 1024-character limit. Then you have to sanitize the data, deal with field separators appearing inside the data, etc.

    I have seen some wacky solutions, like using a stream, reading 1024 characters in advance and then splitting them up, but it failed spectacularly when the 1024th character was exactly a separator. I actually rewrote an intracompany data exchange (item ledger entries and suchlike for analysis purposes) that imports about half a gig every night as an XMLport, because the old import was entirely unpredictable.

    I simply don't see how to do it reliably within the limitations of NAV variables.
  • FDickschat Member Posts: 380
    It works more or less as you say:
    We have built ourselves two nice little codeunits exposing read/write and data-conversion functions to the system. Writing is simple even if the length exceeds 1024 characters; reading is the problem.
    We created two read functions: one for lines of up to 1024 characters and one for variable lengths. The variable-length function reads one character at a time and checks what it is, which is of course slower than reading a 1024-character string in one go but still way faster than a dataport. For CSV content it already splits the line into separate fields, returned via an array (see the sketch below).
    The read1024charIn1Go function feeds the line into the same splitting logic as the variable-length function and then returns the separated fields.
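
    The variable-length reader looks roughly like this (a simplified sketch, not our production code; assumes the file is opened with TEXTMODE(FALSE) so READ returns single bytes, with Ch : Char, Field : Text[250], Fields : ARRAY[50] OF Text[250], and ProcessLine as a hypothetical callback):

    FieldCount := 1;
    WHILE ImportFile.POS < ImportFile.LEN DO BEGIN
      ImportFile.READ(Ch);                 // one character at a time - no 1024 limit
      CASE Ch OF
        9:                                 // TAB as field separator (an assumption)
          BEGIN
            Fields[FieldCount] := Field;
            FieldCount := FieldCount + 1;
            Field := '';
          END;
        13: ;                              // CR: ignore, wait for the LF
        10:                                // LF: end of line, hand the fields over
          BEGIN
            Fields[FieldCount] := Field;
            ProcessLine(Fields,FieldCount);
            FieldCount := 1;
            Field := '';
          END;
        ELSE
          Field := Field + FORMAT(Ch);
      END;
    END;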

    The solution has been in place since 2006 and has never failed on us. It can even import files with different decimal separators (. or ,) or thousand separators (. or , or space) and works out by itself which is which.

    I didn't say anything about XMLports. They also provide good performance; it's just that at the time (2006/7) we didn't care to use them.

    What was your problem with the separator at the last position exactly?
    Frank Dickschat
    FD Consulting
  • Miklos_Hollender Member Posts: 1,598
    I don't remember exactly; it was an "all our group-level reports are wrong, do something, even yesterday is too late, find a time machine" kind of thing. I figured out somehow that the wrongly imported lines were the ones that had a separator exactly there. I looked into the code, and it turned out I couldn't really read it; it just looked convoluted and very complex to me. It was that kind of stuff where STRPOS is used 20 times in 10 lines and my eyes went yo-yo from it. It was just faster to make some XMLports and pretend it's the 21st century already :)
  • nverma Member Posts: 396
    I was able to make it run a lot faster by controlling the number of times Quantity got validated.
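
    Roughly, the change amounted to this (a sketch; the exact code wasn't posted): with CallFieldValidate set to No for Quantity in the dataport fields, the remaining validations run in OnAfterImportRecord in an order where nothing cascades twice. "Unit of Measure Code" - OnValidate already ends with VALIDATE(Quantity) (see the code earlier in the thread), so that one call covers Quantity:

    // Dataport - OnAfterImportRecord; the field order is the point.
    VALIDATE("Posting Date");           // Document Date, currency amounts
    VALIDATE("Unit of Measure Code");   // ends with VALIDATE(Quantity),
                                        // so Quantity is validated exactly once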

    Thanks. :D
    ara3n wrote:
    I don't think it will make a difference.

    I think something you can look at is how many times the quantity field is validated.

    It should be validated only once.

    But as you mentioned, Posting Date validation validates Quantity, and I'm sure you are validating Quantity as well.

    And there might be other fields that validate Quantity too. So find out how many times each field (or at least the fields that take the longest) gets validated, and make sure each one runs only once per record.
  • ara3n Member Posts: 9,256
    You are welcome.
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n