We created a dataport to import some data into Journal Lines. However, we are having major performance issues: importing about 50k lines takes about 2-3 hours. Our client recently bought a top-of-the-line new server.
I have made sure that we are using the correct keys and got rid of unnecessary VALIDATE calls.
One of the fields that gets validated is Posting Date, which in turn validates Quantity, which calls a whole bunch of codeunits. In one of the function calls, I noticed lots of nested IF statements like the syntax below.
For example:

IF XXXX THEN BEGIN
  IF YYYY THEN BEGIN
    IF ZZZ THEN BEGIN
      MESSAGE(....);
    END;
  END;
END;
If I were to change this to:

IF XXXX AND YYYY AND ZZZ THEN
  MESSAGE(....);
Would this make a major difference in performance?
Answers
I think something you can look at is how many times the Quantity field is validated.
It should be validated only once.
But as you mentioned, Posting Date validation validates Quantity, and I'm sure you are validating Quantity as well.
And there might be other fields that validate Quantity too. So try to find out how many times each field (or at least the fields that take the longest) gets validated, and make sure each one is validated only once per record.
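As a minimal sketch of the idea (the record variable and imported values are hypothetical, and it assumes, as described above, that validating Posting Date re-runs the Quantity logic): assign a field directly when a later VALIDATE will fire its OnValidate trigger anyway.

// Hypothetical sketch: Quantity is assigned without its trigger,
// because VALIDATE("Posting Date") will run the Quantity logic anyway.
GenJnlLine.Quantity := ImportedQty;                // plain assignment, no OnValidate
GenJnlLine.VALIDATE("Posting Date", ImportedDate); // Quantity logic runs once, here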
Independent Consultant/Developer
blog: https://dynamicsuser.net/nav/b/ara3n
In my code, I validate 'Posting Date', which validates 'Quantity' and 'Unit of Measure Code'. I cannot just remove the VALIDATE on these fields since they validate other fields as well.
So how can I get past this problem?
We measured this with a simple import of 1,000,000 records into the item ledger entries (all fields), once with a dataport and once with a report using FILE functions. No validation of fields, of course. The dataport-to-FILE-functions ratio was about 1:20.
So if you do some validations you will not get the full 1:20 advantage, but it will still be huge.
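For reference, the reading side of such a report looks roughly like this in C/AL (a minimal sketch only; the file name, the variables ImpFile : File and Line : Text[1024], and the parsing are assumptions, not our measured code):

ImpFile.TEXTMODE(TRUE);                // text mode: each READ returns one line
ImpFile.OPEN('C:\import\journal.csv'); // hypothetical file name
WHILE ImpFile.READ(Line) > 0 DO BEGIN  // READ returns 0 at end of file
  // split Line into fields and INSERT the record here, no VALIDATE
END;
ImpFile.CLOSE;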
You could also split the task into 2 parts to measure what really eats up the time (see the sketch after this list):
First import the data without validating it (into a staging table).
As a second step, create your journal from the staging table.
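A minimal sketch of the second step (the staging table StagingLine, its fields, and the journal template/batch names are all hypothetical):

// Hypothetical step 2: build journal lines from the staging table,
// so the validation cost can be timed separately from the file import.
StagingLine.FINDSET;
REPEAT
  GenJnlLine.INIT;
  GenJnlLine."Journal Template Name" := 'GENERAL'; // assumed batch setup
  GenJnlLine."Journal Batch Name" := 'IMPORT';
  GenJnlLine."Line No." := StagingLine."Line No.";
  GenJnlLine.VALIDATE("Posting Date", StagingLine."Posting Date");
  GenJnlLine.VALIDATE("Account No.", StagingLine."Account No.");
  GenJnlLine.INSERT;
UNTIL StagingLine.NEXT = 0;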
FD Consulting
I split my files into smaller ones,
e.g. 5 files of 10K lines instead of one 50K file.
It was a lot faster.
I have seen some wacky solutions, like using a stream, reading 1000 chars in advance and then splitting them up, but it failed spectacularly when the 1000th char was exactly a separator. I actually rewrote an intracompany data exchange (item ledger entries and suchlike, for analysis purposes) that imports about half a gig every night as an XMLport, because the stream approach was entirely unpredictable.
I simply don't see how it is possible to do this reliably within the limitations of NAV variables.
We have built ourselves 2 nice little codeunits exposing read, write, and data conversion functions to the system. Writing is simple even if the length exceeds 1024 characters; reading is the problem.
We created 2 functions, one for lines up to 1024 characters and one for variable lengths. The variable-length function reads 1 char at a time and checks what it is, which is of course slower than reading a 1024-character string in one go, but still way faster than a dataport. For CSV format it already splits the contents into separate fields, returned via an array.
The read1024charIn1Go function feeds the record into the same splitting function as the variable-length read function and then returns the split fields.
The solution has been in place since 2006 and has never failed on us. It can even import files with different decimal separators (. or ,) or thousand separators (. or , or space) and will work out by itself which is correct.
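For readers who want to build something similar, here is a minimal sketch of the variable-length, one-char-at-a-time reading idea (this is not FD Consulting's actual code; the semicolon separator, the array sizes, and the missing overflow checks are simplifications):

// Hypothetical sketch: read a line of arbitrary length, one char at a
// time, from a file opened with ImpFile.TEXTMODE(FALSE) (binary mode).
PROCEDURE ReadCsvLine(VAR ImpFile : File; VAR Fields : ARRAY [50] OF Text[250]; VAR FieldCount : Integer) : Boolean
VAR
  c : Char;
BEGIN
  CLEAR(Fields);
  FieldCount := 1;
  WHILE ImpFile.READ(c) > 0 DO            // one byte per READ in binary mode
    CASE c OF
      13 : ;                              // CR: ignore, wait for the LF
      10 : EXIT(TRUE);                    // LF: the line is complete
      59 : FieldCount := FieldCount + 1;  // ';' starts the next field
      ELSE Fields[FieldCount] := Fields[FieldCount] + FORMAT(c);
    END;
  // end of file: the last line may have no terminating LF
  EXIT((FieldCount > 1) OR (Fields[1] <> ''));
END;

Real code would of course also guard against more than 50 fields per line and against a field growing past 250 characters.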
I didn't say anything about XMLports. They also provide good performance, it's just that at the time (2006/7) we didn't care to use them.
What exactly was your problem with the separator at the last position?
FD Consulting
Thanks.
Independent Consultant/Developer
blog: https://dynamicsuser.net/nav/b/ara3n