Lanham EDI - SDQ Performance?

jversusj Member Posts: 489
Hello again,
Our top trading partner has changed their tune and now wants an invoice per SDQ (from the single EDI PO they send). From earlier threads, I knew to look into SDQ Ship To. It worked great in some small samples, and then the problems started rolling in.

I found that the Lanham code uses arrays dimensioned to 1000 elements and a fixed-bound loop (j := 1 TO 1000) in the InsertShipTo function during order creation. Our customer sends us POs with 4000+ SDQs, so this was insufficient. I upped the array to 10000 elements and raised the loop bound to match. That seemed to do the trick functionally, but the result is extremely slow. While the process is running, the sales header is locked, so I have been testing this off hours. Last night I let a 3500-SDQ PO process for over six hours before I cancelled it (because we have some other nightly routines that had to run), and all order creation rolled back. I have a hard time believing that is the intended behavior of the code.
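
To illustrate the change, the pattern looks roughly like this (my paraphrase, not Lanham's actual code; the identifiers are made up):

    PROCEDURE StoreOrderNo(NewOrderNo : Code[20]);
    BEGIN
      // OrderNo : ARRAY [10000] OF Code[20];  // global; was ARRAY [1000]
      // j : Integer (local)
      FOR j := 1 TO 10000 DO                   // was: FOR j := 1 TO 1000
        IF OrderNo[j] = '' THEN BEGIN
          OrderNo[j] := NewOrderNo;            // first free slot takes the new order
          EXIT;                                // stop scanning once stored
        END;
    END;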

Does anyone else using the SDQ Ship To functionality have benchmark numbers I can compare to? How long would it take your system to create 3500 sales orders?

I am trying to pin down where the bottleneck is. I am experimenting with how it assigns sales lines to sales headers, suspecting that the i + 1 loop that scans the array until it finds the proper value is inefficient (I'm testing a completely different approach; see the sketch below). I also wonder if it is related to the Sales Line VALIDATE calls. Failing that, perhaps it is our ancient version of the EDI code (3.7b code base).
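
The kind of scan I mean, illustratively (the real code differs and these names are made up). It runs once per sales line, so with 4000+ SDQs the comparisons add up fast:

    // Illustrative only - a linear scan for every sales line:
    i := 1;
    WHILE (i < 10000) AND (ShipToCodeArr[i] <> SDQShipToCode) DO
      i := i + 1;                     // walk the array until the ship-to matches
    OrderNo := OrderNoArr[i];         // then append the line to that order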

Creating the 3500 headers takes about 4 minutes. Adding the lines takes hours; I do not know exactly how many.
kind of fell into this...

Comments

  • Alex_Chow Member Posts: 5,063
    You should contact Lanham with an issue like this. Or contact Per at Mergetools.com
  • jversusj Member Posts: 489
    Lanham does not respond to any questions I send them. It seems questions always have to pass through the VAR, and even then answers are slow. Since this is an exploratory question, I wanted to avoid official channels; I was hoping to get a feel from the community for what 'normal processing time' is.

    With my experimental changes, I got the 3500 orders to create last night, but it took over 4.5 hours (still running when I checked at 12:30 AM, done by 2:30 AM when I checked again).

    I do not feel 3500 orders should take that long to create.
    kind of fell into this...
  • Savatage Member Posts: 7,142
    jversusj wrote:
    I do not feel 3500 orders should take that long to create.

    It does take long, though. We use Lanham & SDQs, but not on your scale of 3500+; we're around 200.

    But just as a dataport slows as it goes when importing orders, I believe there are posts about using COMMIT with a counter - I remember reading about it long ago in the context of importing large amounts of data. Another thought was to break the file into several smaller files.

    I don't know if it's possible to add a COMMIT somewhere in the EDI mod after, say, every 200 orders - perhaps someone else can add more detail or provide directions. The rough shape would be something like the sketch below.
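
    Untested, and the record/function names here are made up, but the counter pattern those dataport posts describe looks like:

        // Untested sketch - COMMIT every 200 records (names are made up):
        Counter := 0;
        IF EDIRecDocField.FIND('-') THEN          // 3.7-era FIND; FINDSET came later
          REPEAT
            ProcessRecDocField(EDIRecDocField);   // whatever the mod does per record
            Counter := Counter + 1;
            IF Counter MOD 200 = 0 THEN
              COMMIT;                             // release locks, keep work so far
          UNTIL EDIRecDocField.NEXT = 0;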
  • jversusj Member Posts: 489
    Thank you, sir.

    I do not know if a COMMIT would work without other mods to the system. The orders are created by reading through the EDI rec. doc. fields table (start to finish). If there were an error, it would roll back to the last COMMIT; as is, we would then start again and it would begin creating orders again from the top. You would have to find a way to mark EDI rec. doc. field records as processed so that it would pick back up where it left off. The way the code continually loops through the rec. doc. field records, it takes some effort to make sure you are only flagging records that can truly be skipped next time.

    You would also need some mechanism for keeping track of the sales headers that were created and appended to, so that you could go back to them and add lines as necessary. If you read through the code, it creates all the sales headers first, one for every SDQ, keeping track of them in an array. It then starts to add lines and finds the proper order number in the array, retrieving that order's lines and appending to them. You could make it 1/3 of the way through all the lines and stop; when you picked back up, you would have to know about those orders (and which array entry they should have been). Hypothetically, something like the sketch below.
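
    To make that concrete - 'Processed' is a field you would have to add, and the function name is made up:

        // Hypothetical resume logic; a rerun skips records flagged by earlier runs:
        EDIRecDocField.SETRANGE(Processed, FALSE);
        IF EDIRecDocField.FIND('-') THEN
          REPEAT
            CreateOrAppendOrder(EDIRecDocField);  // would also need to rebuild the
                                                  // order-number array from the
                                                  // already-committed sales headers
            EDIRecDocField.Processed := TRUE;
            EDIRecDocField.MODIFY;
            Counter := Counter + 1;
            IF Counter MOD 200 = 0 THEN
              COMMIT;
          UNTIL EDIRecDocField.NEXT = 0;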
    kind of fell into this...
  • generic Member Posts: 511
    I don't know if you want to hear this, but I suggest looking at BizTalk for high-volume transactions.
    Or create a separate company to load the EDI documents and then transfer them to the actual company.

    I also suggest getting the performance info from Lanham.
  • jversusj Member Posts: 489
    Thanks - that's two votes for contacting Lanham... looks like I'll be sending something via my partner.
    kind of fell into this...
  • generic Member Posts: 511
    One more suggestion.

    You mentioned that you are looping through an array. Does every element in the array have a value? If not, you could use COMPRESSARRAY and then loop only up to the count of used elements it returns, instead of up to the full ARRAYLEN of 10000. See the sketch below.

    But I wouldn't know where the performance bottleneck is without measuring.
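
    Something like this (the array and function names are placeholders):

        // OrderNo : ARRAY [10000] OF Code[20];
        UsedCount := COMPRESSARRAY(OrderNo);  // packs non-blank entries to the front
                                              // and returns how many there are
        FOR j := 1 TO UsedCount DO            // instead of FOR j := 1 TO 10000
          ProcessOrder(OrderNo[j]);           // placeholder for the per-order work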
  • jversusj Member Posts: 489
    Thanks - the array loops seem to have appropriate exit conditions to prevent unnecessary iterations. I made a change in one case to do away with an array loop altogether and instead look up a temporary record; it seemed to improve performance at least a little. Roughly, the change looked like the sketch below.
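
    Simplified, with identifiers renamed for the post (so treat the names as placeholders):

        // TempShipToOrder: a record variable with Temporary = Yes, based on a
        // table keyed on "Ship-to Code" and carrying an "Order No." field.

        // While the headers are created, fill the temporary record once per SDQ:
        TempShipToOrder."Ship-to Code" := SDQShipToCode;
        TempShipToOrder."Order No." := SalesHeader."No.";
        TempShipToOrder.INSERT;

        // Later, when assigning a line, a keyed GET replaces the array scan:
        TempShipToOrder.GET(SDQShipToCode);
        SalesLine."Document No." := TempShipToOrder."Order No.";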
    kind of fell into this...
  • generic Member Posts: 511
    Use a SQL monitor to see where the bottleneck is, and then see if you can improve the performance.