Better for SQL ???

nverma
Member Posts: 396
We created a dataport to import some data into Job Journal Lines. However, we are having major performance issues: importing about 50,000 lines takes 2-3 hours. Our client recently bought a top-of-the-line new server.
I have made sure that we are using the correct keys and removed unnecessary VALIDATE calls.
One of the fields that gets validated is Posting Date, which in turn validates Quantity, which calls a whole bunch of codeunits. In one of the function calls, I noticed lots of nested IF statements like the example below.
For example:

IF XXXX THEN BEGIN
  IF YYYY THEN BEGIN
    IF ZZZ THEN BEGIN
      MESSAGE(....);
    END;
  END;
END;
If I were to change this to

IF XXXX AND YYYY AND ZZZ THEN
  MESSAGE(....);

would this make a major difference in performance?
Answers
-
I don't think it will make a difference.
Something you can look at is how many times the Quantity field is validated. It should be validated only once. But as you mentioned, the Posting Date validation validates Quantity, and I'm sure you are validating Quantity as well, and there might be other fields that validate Quantity too. So find out how many times each field gets validated (or at least the fields that take the longest) and make sure each one is run only once per record.
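For example, in the dataport's OnAfterImportRecord trigger you can assign most fields directly and call VALIDATE only where the trigger code is really needed. A rough sketch only; ImportedQty, NextLineNo and the field order are made up, adjust to your table:

  // Hypothetical OnAfterImportRecord trigger.
  "Line No." := NextLineNo;        // plain assignment, fires no trigger
  NextLineNo := NextLineNo + 10000;
  Quantity := ImportedQty;         // assign only, don't VALIDATE here

  VALIDATE("No.");                 // run the No. trigger once
  VALIDATE("Posting Date");        // Posting Date validation ends up
                                   // validating Quantity too, so a second
                                   // explicit VALIDATE(Quantity) is wasted work
-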
I looked through the code and Quantity is one of the Dataport Fields, so it gets validated once from there.
In my code I validate "Posting Date", which runs:

VALIDATE("Document Date","Posting Date");
IF "Currency Code" <> '' THEN BEGIN
  UpdateCurrencyFactor;
  UpdateAllAmounts;
END;

Further down the chain this code runs, ending in another VALIDATE(Quantity):

GetGLSetup;
CASE Type OF
  Type::Item:
    BEGIN
      Item.GET("No.");
      "Qty. per Unit of Measure" :=
        UOMMgt.GetQtyPerUnitOfMeasure(Item,"Unit of Measure Code");
    END;
  Type::Resource:
    BEGIN
      IF CurrFieldNo <> FIELDNO("Work Type Code") THEN
        IF "Work Type Code" <> '' THEN BEGIN
          WorkType.GET("Work Type Code");
          IF WorkType."Unit of Measure Code" <> '' THEN
            TESTFIELD("Unit of Measure Code",WorkType."Unit of Measure Code");
        END ELSE
          TESTFIELD("Work Type Code",'');
      IF "Unit of Measure Code" = '' THEN BEGIN
        Resource.GET("No.");
        "Unit of Measure Code" := Resource."Base Unit of Measure";
      END;
      ResUnitofMeasure.GET("No.","Unit of Measure Code");
      "Qty. per Unit of Measure" := ResUnitofMeasure."Qty. per Unit of Measure";
      "Quantity (Base)" := Quantity * "Qty. per Unit of Measure";
    END;
  Type::"G/L Account":
    BEGIN
      "Qty. per Unit of Measure" := 1;
    END;
END;
VALIDATE(Quantity);
So how can I get past this problem?
-
A dataport is about 20 times slower than programming the same import with a report using the simple FILE.OPEN and FILE.READ functions. Alternatively you could use an InStream, which has the same speed.
We measured this with a simple import of 1,000,000 records into the Item Ledger Entries (all fields), once with a dataport and once with a report using FILE functions. No validation of fields, of course.
So if you do some validations you will not get the full 1:20 advantage, but it will still be huge.
You could also split the task in 2 parts to measure what really eats up the time:
First import the data without validating it (into a staging table).
As a second step, create your journal from the staging table.
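A rough sketch of the first step with FILE functions; the staging table "Import Staging Line", its fields and the file path are made up here, adjust to your own:

  // Processing-only report: read the file line by line into a staging table.
  // ImportFile : File; Line : Text[1024]; StagingLine : Record "Import Staging Line".
  ImportFile.TEXTMODE(TRUE);
  ImportFile.OPEN('C:\Import\journal.csv');   // hypothetical path
  WHILE ImportFile.POS < ImportFile.LEN DO BEGIN
    ImportFile.READ(Line);
    EntryNo := EntryNo + 1;
    StagingLine.INIT;
    StagingLine."Entry No." := EntryNo;
    StagingLine."Raw Line" := Line;           // parse into fields in step 2
    StagingLine.INSERT;
  END;
  ImportFile.CLOSE;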
Frank Dickschat
FD Consulting
-
I had a similar problem.
I split my files into smaller ones, e.g. 5 files of 10K lines instead of one 50K file.
It was a lot faster.
-
How exactly would you do that? Suppose you have a CSV file: if you want to read every line into a variable, you hit the 1024-character limit. Then you have to sanitize the data, deal with field separators appearing inside the data, etc. etc.
I have seen some wacky solutions, like using a stream, reading 1000 chars in advance and then splitting them up, but it failed spectacularly when the 1000th char was exactly a separator. I actually rewrote an intracompany data exchange (item ledger entries and suchlike, for analysis purposes) that imports about half a gig every night as an XMLport, because the old version was entirely unpredictable.
I simply don't see how to do it reliably within the limitations of NAV variables.
-
It works more or less as you say:
We built ourselves 2 nice little codeunits exposing read, write and data conversion functions to the system. Writing is simple even if the length exceeds 1024; reading is the problem.
We created 2 read functions: one for lines up to 1024 characters, one for variable lengths. The variable-length function reads 1 char at a time and checks what it is, which is of course slower than reading a 1024-character string in one go, but still way faster than a dataport. For CSV format it splits the contents into separate fields right away, returning them via an array.
The read-1024-chars-in-one-go function feeds the line into the same splitting logic as the variable-length function and then returns the split fields.
The solution has been in place since 2006 and has never failed us. It can even import files with different decimal separators (. or ,) or thousands separators (. or , or space) and will work out by itself which is which.
I didn't say anything against XMLports. They also provide good performance; it's just that at the time (2006/7) we didn't care to use them.
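The variable-length read looks roughly like this. A sketch only: the semicolon separator, the array sizes and the ProcessRecord helper are assumptions, not our actual code:

  // Binary mode: read 1 byte at a time, so line length doesn't matter.
  // ImportFile : File; C : Char; Fields : ARRAY[50] OF Text[250]; FieldIdx : Integer.
  ImportFile.TEXTMODE(FALSE);
  ImportFile.OPEN(FileName);
  FieldIdx := 1;
  WHILE ImportFile.POS < ImportFile.LEN DO BEGIN
    ImportFile.READ(C);
    IF C = 13 THEN BEGIN                      // CR = end of record
      ProcessRecord(Fields,FieldIdx);         // hypothetical callback
      CLEAR(Fields);
      FieldIdx := 1;
    END ELSE
      IF FORMAT(C) = ';' THEN                 // field separator (assumed ;)
        FieldIdx := FieldIdx + 1
      ELSE
        IF C <> 10 THEN                       // skip LF
          Fields[FieldIdx] := Fields[FieldIdx] + FORMAT(C);
  END;
  ImportFile.CLOSE;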
What was your problem with the separator at the last position, exactly?

Frank Dickschat
FD Consulting
-
I don't remember exactly; it was an "all our group-level reports are wrong, do something, even yesterday is too late, find a time machine" kind of thing. I somehow figured out that the wrongly imported lines were the ones that had a separator exactly there. I looked into the code, and it turned out I couldn't really read it: it just looked convoluted and very complex to me. It was that kind of stuff where STRPOS is used 20 times in 10 lines and my eyes go yo-yo from it. It was just faster to make some XMLports and pretend it's the 21st century already.
-
I was able to make it run a lot faster by controlling the number of times Quantity got validated.
Thanks.

ara3n wrote:
I don't think it will make a difference.
Something you can look at is how many times the Quantity field is validated. It should be validated only once. But as you mentioned, the Posting Date validation validates Quantity, and I'm sure you are validating Quantity as well, and there might be other fields that validate Quantity too. So find out how many times each field gets validated (or at least the fields that take the longest) and make sure each one is run only once per record.
-
You are welcome.