Custom interface skipping over records for no apparent reason

Miklos_Hollender
Member Posts: 1,598
Before I joined, a consulting company developed many interfaces for importing data from large (10-30MB) text files automatically through the night. To avoid being tripped up by CR/LFs in the data, they use special field and record separators like ~+~, which is clever as far as it goes. This is not a dataport but a codeunit, and it often skips records that are in the file and simply does not load them. I cannot understand why: if I test with the whole file, it skips a given record, but if I make a smaller file containing only that record plus the 10 before and 10 after, then it is OK. That's a really strange error.
An alternative would be rewriting it in a simpler way; however, I am not that familiar with streams, and I am not sure it could be made simpler and more debuggable.
To me this is rather confusing coding. Any suggestions on how to rewrite it in a simpler way, while still keeping it in a codeunit and not a dataport (because of the Application Server) and still using these ~+~-style separators? Or, alternatively, what could cause it to skip a record that is clearly in the file? It does not even evaluate the first field of the skipped record, so any suggestions would be warmly welcome. The code:
ImportFile.CREATEINSTREAM(IStream);
WHILE NOT IStream.EOS OR (IText <> '') DO BEGIN
  // refill the working buffer up to 1000 characters
  IF NOT IStream.EOS THEN BEGIN
    FieldValue := IText;
    IStream.READTEXT(IText, 1000 - STRLEN(IText));
    IText := FieldValue + IText;
  END;
  // consume fields while a field separator is in the buffer (or at end of stream)
  WHILE (STRPOS(IText, FieldSeperator) <> 0) OR ((IText <> '') AND IStream.EOS) DO BEGIN
    FieldValue := '';
    IF STRPOS(IText, FieldSeperator) > 1 THEN
      FieldValue := COPYSTR(IText, 1, STRPOS(IText, FieldSeperator) - 1);
    IText := COPYSTR(IText, STRPOS(IText, FieldSeperator) + STRLEN(FieldSeperator));
    FieldValue := DELCHR(FieldValue, '=', FillChar);
    CASE FieldCounter OF
      1: EVALUATE(Table.Field1, FieldValue);
      2: EVALUATE(Table.Field2, FieldValue);
      // ... and so on, roughly fifty times
    END;
    FieldCounter += 1;
    // if a record separator comes before the next field separator,
    // close off the current record and insert it
    IF ((STRPOS(IText, RecordSeperator) < STRPOS(IText, FieldSeperator)) AND
        (STRPOS(IText, RecordSeperator) <> 0)) OR
       ((STRPOS(IText, FieldSeperator) = 0) AND (STRPOS(IText, RecordSeperator) <> 0)) THEN BEGIN
      FieldValue := '';
      IF STRPOS(IText, RecordSeperator) > 1 THEN
        FieldValue := COPYSTR(IText, 1, STRPOS(IText, RecordSeperator) - 1);
      IText := COPYSTR(IText, STRPOS(IText, RecordSeperator) + STRLEN(RecordSeperator));
      EVALUATE(Table.LastField, FieldValue);
      FieldCounter := 1;
      Table.INSERT;
    END;
  END;
  FieldValue := IText;
END;
ImportFile.CLOSE;
Comments
-
Don't waste time with such "salto mortale" coding. Create a dataport and use this codeunit to call it - simple enough?
-
I'm guessing that when IText has a length exceeding 1000 it behaves erratically.
Find out whether it is the absolute position of the record in the file: remove the first record from the file and see if it then skips the next record instead of the same one.
But a more normalised way of importing the file would be the better choice.
-
@rht no dataports for the Application Server. This must be a 100% automatic synchronisation every night. On a native database, so no SQL synch either. And given that it mostly works, why rewrite? I just want to fix it.
@sog this is a 30MB file in one line (the record separator is something like ~+~, not a newline), so of course it is always over 1000.
@everybody another idea. I have seen many interfaces in my life, and if there was either a program error or an error in the data, we got an error message - never a silent skipping of records. I suspect more of a technical error, which is harder to solve, but it has to be solved; very important reports depend on it.
Could it happen in a native database that when importing some 80,000 records a cache - the commit cache, say - fills up and something gets skipped? Or memory, or something like that?
-
Why not use the debugger? Set a conditional breakpoint to stop one record before the bad one and debug from there on.
David Singleton
-
Miklos Hollender wrote: @sog this is a 30MB file in one line (the record separator is something like ~+~, not a newline), so of course it is always over 1000.
IStream.READTEXT(IText,1000-STRLEN(IText));
If IText exceeds 1000 at any point, you'll get odd behaviour. What kind of behaviour, I don't know - I'm not the one who has the codeunit.
-
@David Singleton conditional breakpoints in the native debugger, not Visual Studio? (I am still on a native database and the Classic client, due to character-set problems with SQL and general organizational inertia.) I tried a conditional Code Coverage log - turn it on one record before, turn it off one record after - and it showed absolutely no code running.
Again, my general instinct tells me that if there is a program or data error in an interface, there is usually an error message. Can such a silent skip be the result of something else - the commit cache filling up, memory filling up, and suchlike?
-
I have been using conditional breakpoints in Navision since the DOS version.
David Singleton
-
@Sog I am not sure how I can do that remove-a-record-and-see test, because in a file of 50,000 records the first one skipped is around record 38,000. Now the point is, this has something to do with the size of the file, because if I create a file of, say, records 37,900-38,100 it works perfectly. So a file of records 1-37,999 would probably work too. This is why I am guessing at a non-programmatic problem - something simply filling up, some cache or other. If I programmatically turn on the Code Coverage log before this record and turn it off after it, it says absolutely no code is running.
-
@David thanks - how? After an IF-THEN-ERROR I cannot make it go on in the debugger, can I? And F9 breakpoints are not conditional, are they? Insert a record into the Breakpoint table?
-
Miklos Hollender wrote: @David thanks - how? After an IF-THEN-ERROR I cannot make it go on in the debugger, can I? And F9 breakpoints are not conditional, are they? Insert a record into the Breakpoint table?
I assume you have some counter for each record, and you know which record is the bad one, so just add code like
If RecordCounter >= 37999 then a := a;
and put a breakpoint on the a := a line; the debugger will stop when RecordCounter reaches 37999.
{edit typo no -> know}
David Singleton
-
Miklos Hollender wrote: Could it happen in a native database that when importing like 80,000 records some cache like the commit cache fills up and skips something? Or memory or something?
From my point of view, the only system-related thing I can think of is a special character within that record that leads to different system behaviour. Maybe it isn't even visible.
Do you use any condition that would skip records? E.g. if the record (or at least its PK field values) appears twice in one file?
I would go the debugger way as well. You could do something like this:
IF Record = RecordBeforeSkipped THEN RecBefore := TRUE; // set breakpoint here
And from that point go on in single steps.
"Money is likewise the greatest chance and the greatest scourge of mankind."
-
einsTeIn.NET wrote: From my point of view, the only system-related thing I can think of is a special character within that record that leads to different system behaviour. Maybe it isn't even visible.
I was just about to post the exact same thing. :thumbsup:
David Singleton
-
@David @Einstein thanks - a simple idea that never occurred to me.
BTW I found out it is not the commit cache (I have now tried committing every 10,000 records - no change), and no, dataports are not an option: they will not work on NAS even when called through code; variables of dataport type are simply not allowed.
-
Thanks, conditional debugging took me one step closer. It seems to be a program error somewhere: when the forward read of 1,000 characters into IText just happens to stop right before a record separator, the code does not enter the last condition, which would insert the record; it keeps on reading forward.
To be honest I am not sure how to resolve it; the algorithm is just too complicated.
Any ideas for rewriting it in a simpler way? Record separators are 3 characters long, so reading byte by byte would not work. Try using a single-character record separator that hopefully does not occur in the data, like §, and rewrite it to read byte by byte?
-
rhpnt wrote: Miklos Hollender wrote: ...and no, dataports are not an option, they will not work on NAS even if called through code, simply variables of a dataport type are not allowed.
No, they are very, very different. In fact my recommendation would be to convert to XMLports, if that is somehow possible.
I was also very curious how you managed to run a dataport with NAS.
David Singleton
-
The big difference is quite simply that the dataport variable is not allowed - it results in an error message in the error log - and XMLPorts with non-XML files can only be used with the RTC.
BTW my problem just gets messier and messier. Basically, if the 1000 characters read end so that there is no record separator in them, the whole logic goes out of the window. I don't know whether this logic can be fixed at all; the whole approach is just wrong. What to do if there is a numeric field €50000 and the 1000-character forward read happens to end at €5000? And so on. I think this just should not be done this way, but I have no idea how to explain that to management: this produced fairly reliable reports for half a year, and they are important ones - it is basically about synching stock etc. from the subsidiaries to HQ for reporting.
Does any of you see any way of fixing it?
If not, what would be the safest way to rewrite it? Make sure 1024-character records are enough, DELCHR the CR/LFs on the export side, and read it as a normal text file? Implement a fixed-width format? An XMLPort? Theoretically possible, except I am not sure it fixes things like decimal points in one country and decimal commas in another, etc.
-
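[Editor's note] A sketch of the "sanitize on the export side" idea mentioned above, assuming the subsidiary's export codeunit can be changed. Python for illustration; the ';' field separator and the field names are made up, and the decimal handling is deliberately naive and would need per-country rules:

```python
FIELD_SEP = ";"  # stand-in; any token guaranteed absent from the data

def export_line(values):
    # Strip CR/LF from every field value so "one record per line" is safe;
    # the importer can then use plain line-based reads with no custom
    # record separator at all.
    clean = [str(v).replace("\r", " ").replace("\n", " ") for v in values]
    return FIELD_SEP.join(clean)

def normalize_decimal(text):
    # Naive decimal-comma -> decimal-point normalization; real data with
    # thousands separators would need a stricter rule.
    return text.replace(",", ".")

print(export_line(["Item A\r\nblue", 10]))  # Item A  blue;10
print(normalize_decimal("123,45"))          # 123.45
```

Normalizing on export means the importing side never has to guess the locale of the sending database.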
I think the best solution is to find a program that will automatically convert the text file to an XML file and then use an XMLport. You would call a script to convert the text file just before the XMLport runs.
David Singleton
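[Editor's note] A rough sketch of that pre-conversion step, in Python for illustration. The field separator and the XML element names are assumptions, not from the thread; the real file would also need its actual separators and encoding handled:

```python
import xml.etree.ElementTree as ET

REC_SEP = "~+~"
FIELD_SEP = ";"  # assumption; the thread only shows the record separator style

def to_xml(text):
    # Turn separator-delimited text into XML that an XMLport could consume,
    # one <Record> element per record, one <FieldN> element per field.
    root = ET.Element("Records")
    for rec in text.split(REC_SEP):
        if not rec:
            continue
        node = ET.SubElement(root, "Record")
        for i, value in enumerate(rec.split(FIELD_SEP), start=1):
            ET.SubElement(node, "Field%d" % i).text = value
    return ET.tostring(root, encoding="unicode")

print(to_xml("a;b~+~c;d"))
```

A converter like this could be scheduled right before the nightly import, so the NAS job only ever sees well-formed XML.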
-
By the way, I hate dataports, so removing them in NAV 2009 made me very happy.
David Singleton
-
Well, given that the source of the data is another NAV database in a subsidiary, I can just replace the exporting codeunit with an XMLport, if it works reliably. Currently testing. My worry is that it could take a while, and having no group reports until then would be a problem. I still wonder whether at least a temporary fix can be found for this "read forward 1000 chars" thing, or is it irredeemable...