Aged Receivables Report 10040

since2002 Member Posts: 25
The out-of-the-box Aged Receivables Report 10040 takes almost 1.5 hours to run in both the RTC and Classic versions, aged as of TODAY, aged by due date. We have 6,900 customers with balances and 2.3 million customer ledger records. The 2.3 million records cover about 2 years of history, mostly closed entries, plus of course the current open invoices and credits. The Simple Aging Report 109 runs in about 2 minutes, but we still need to run the detail, so 10040 is required.

I am told we are running on a server with 98 GB of memory and two 2.136 GHz processors, each with 16 GB of memory per core. I am no expert on server specs, but my IT guy tells me it is a monster. Are there any fixes or recommendations to make 10040 run faster?

We have not even gone live yet and are just keeping the system up to date with converted history for testing purposes. All history is loaded and posted as normal sales invoices and credits, and payments are also loaded and posted through a cash receipts journal. We fear what will happen when we keep adding months of data.

Ed

Comments

  • davmac1 Member Posts: 1,283
    If you look at the code behind the report, you will see that it writes the customer ledger entries for each customer to a temp table, processes them, deletes the entries, and moves on to the next customer.
    If most of your customers do not have an outstanding balance every report cycle, you can speed up the report by using flowfields to see whether a customer has a balance and bypassing the customers with a zero balance (see the sketch below).
    The downside is that if a customer has offsetting debits and credits, they will not appear on the report, but there are other ways to check for that.
    This could make the report run in a quarter of the time, with very little development work.
    If that is still not good enough for you, you can build the report in SQL instead - but that is not as easy.
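
    A minimal C/AL sketch of that flowfield check; ProcessCustomer is a hypothetical placeholder for the report's existing per-customer logic:

        Customer.SETCURRENTKEY("No.");
        IF Customer.FINDSET THEN
          REPEAT
            // "Balance (LCY)" is a flowfield, so CALCFIELDS computes the
            // balance from the SIFT data without looping over the entries.
            Customer.CALCFIELDS("Balance (LCY)");
            IF Customer."Balance (LCY)" <> 0 THEN
              ProcessCustomer(Customer); // placeholder for the existing aging logic
          UNTIL Customer.NEXT = 0;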
  • since2002 Member Posts: 25
    Thanks for the reply. The report slows down on the big customers with balances (Walmart, Lowes), so skipping the customers without balances would have a minimal effect. Running for a single customer like Walmart or Lowes takes over 5 minutes. I also understand the logic behind the report (being able to run the aging for 12/31/12, for example) and the work involved in getting the A/R as of a past date; I just didn't expect it to take over an hour before we have even gone live. We will end up doing something custom for sure. I just wanted to pose the question to see if anyone else has the same issue. If someone comes back and tells me the report is a piece of junk, then that makes me feel better. Or possibly someone has come up with a coding solution to speed it up.


    I have been working with 10040 for 10 years, from 2.6 to 2009 R2. I am used to maybe 50,000,000 item ledger and value entry records, but this is my first installation with 2,000,000 customer ledger records and the detailed customer ledger entries to go with them. If we have to, we can live without the detail and base something off the Simple Aging Report. We will just add some more buckets and customize the date range for the buckets.
  • davmac1 Member Posts: 1,283
    Another thing that would speed it up is to clone the customer ledger entry table for the temporary version and eliminate all the keys not needed for the report.
    Since you are inserting all the customer ledger entries with remaining balances, this will speed it up some.
    You could also run this report on the server, since it is reading so many records.

    It reads every single customer ledger entry record so that it can do a CALCFIELDS on the remaining amount after applying the date filter, so a workaround has to take that into account.

    You could also put a filter on the Closed at Date field to eliminate the older entries (see the sketch below).
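
    A hedged sketch of that Closed at Date filter, assuming the aging date is held in an AgingDate variable: an entry fully closed before the aging date cannot have a remaining amount as of that date, so only open entries and entries closed on or after the aging date need to be read.

        // Keep entries that are still open (Closed at Date = 0D) or that
        // were closed on or after the aging date; skip everything else.
        CustLedgEntry.SETFILTER("Closed at Date", '%1|>=%2', 0D, AgingDate);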
  • bbrown Member Posts: 3,268
    davmac1 wrote:
    Another thing that would speed it up is to clone the customer ledger entry table for the temporary version and eliminate all the keys not needed for the report...

    I don't see this helping in any significant way, if at all. What's your thinking here? Maybe I'm missing something.
    There are no bugs - only undocumented features.
  • davmac1 Member Posts: 1,283
    The fewer keys you have on a table, the faster the insert operation on the older releases. Since the aging report is writing to a temp table, this may have an impact on the newer versions as well - I did it once to improve the speed on NAV 5.0 executables, before SIFT moved to indexed views.
    The temp tables are local to each user, so I cannot say for sure that reducing keys will have an impact on them. Possibly reducing the record size in the cloned table would have a bigger impact (see the sketch below).

    My timings on 5.0 showed a speedup by a factor of 4.
    Since this is a newer release that uses indexed views, it may require some different tweaks.
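
    As a sketch of what the clone looks like in code: assume a hypothetical table 50000 "CLE Aging Buffer", copied from table 21 Cust. Ledger Entry with the same field numbers but only the "Entry No." primary key (and optionally fewer fields), declared as a TEMPORARY record TempCLEBuffer in the report.

        // TRANSFERFIELDS copies by matching field number, so the slimmed-down
        // clone can be filled straight from the real Cust. Ledger Entry record;
        // fields dropped from the clone are simply skipped.
        TempCLEBuffer.INIT;
        TempCLEBuffer.TRANSFERFIELDS(CustLedgEntry);
        TempCLEBuffer.INSERT;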
  • bbrown Member Posts: 3,268
    Interesting, I'd not thought about that for a temp table. I'd agree that fewer keys (indexes, if we're talking SQL) would result in faster inserts on a real table. But we're talking about a temp table here, which is not a physical table but rather a record set stored in memory.
    There are no bugs - only undocumented features.
  • davmac1 Member Posts: 1,283
    Larger recordsets get written to the local NAV temp directory.
    I found it increased performance.
    With the service tier it gets written to the service tier's NAV temp directory.

    It is a simple change and worth the effort when there is a big performance problem.
    With large customers like Walmart, the temp table is probably being written to disk.

    If the NAV temp directory is set to an SSD drive, that may improve performance.
  • Alex_Chow Member Posts: 5,063
    If you don't need the report to be able to backdate, you can create a new report that will run a lot faster by eliminating the use of temp tables.
  • since2002 Member Posts: 25
    I changed the Cust. Ledger Entry data item to read only OPEN entries, and the report went from 1 hour to 5 minutes (see the sketch below). This forces the report to be run only as of today, but I made the change in a cloned version so the original can still be run for historical aging. The "export to Excel" option is still taking forever, but that is most likely RTC writing to local Excel.
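
    For reference, the filter change amounts to something like this (a sketch; in the report this could equally be set as a DataItemTableView or in the OnPreDataItem trigger):

        // Only open entries can carry a remaining amount as of today, so the
        // report no longer has to scan two years of closed history.
        "Cust. Ledger Entry".SETRANGE(Open, TRUE);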
  • Alex_Chow Member Posts: 5,063
    Glad it worked out for you.

    It is still possible to backdate this report if you incorporate the Detailed Cust. Ledger Entry table.

    Honestly, I don't know why Microsoft wouldn't use the Detailed Cust. Ledger Entry table instead of the temp table method. It seems a lot faster and more logical (see the sketch below).
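
    A hedged sketch of that idea, assuming an AgingDate variable: the remaining amount of a ledger entry as of a past date is the sum of its detailed entries posted up to that date, which CALCSUMS can read from the SIFT data (or compute as a SUM query on SQL) instead of buffering entries in a temp table.

        DtldCustLedgEntry.SETRANGE("Cust. Ledger Entry No.", CustLedgEntry."Entry No.");
        DtldCustLedgEntry.SETRANGE("Posting Date", 0D, AgingDate);
        // The sum of all applications and adjustments through the aging date
        // equals the remaining amount as of that date.
        DtldCustLedgEntry.CALCSUMS("Amount (LCY)");
        RemainingAmtLCY := DtldCustLedgEntry."Amount (LCY)";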
  • davmac1 Member Posts: 1,283
    Writing to Excel cell by cell instead of writing a CSV is slow.
    There was a post a few years ago about writing to Excel by row instead of by cell.

    http://www.mibuso.com/forum/viewtopic.php?t=26657

    If you want to speed up the write to Excel, you could try this out.

    On the aging: if NAV updated the closed date for all ledger entries, you would be able to use that. If you are ambitious, you could add a mod to set the closed date on the closing entries as well. Or, if you don't want to mess with standard NAV logic, add another field to be used for this purpose.
  • davmac1 Member Posts: 1,283
    I have done some additional research on temp tables and some experimenting.
    Apparently the temp table has a limit of 2 GB, which means it could all stay in memory if your workstation has enough RAM. (I don't know how sophisticated the temp table logic in NAV is, but I hope it uses all available memory before writing to disk.)
    I had one program that made repeated use of a temp table. Moving it from my reasonably high-performance Windows 8 laptop with 8 GB RAM to my much higher-performance desktop with 32 GB RAM, an SSD O/S drive, and the SQL database on a 10K RPM drive (vs. the laptop's 7.5K RPM drive), performance tripled.
    I then tried running a large detailed A/R aging report that used to take a couple of hours in an old environment, and it took 20 minutes; my original speed-improvement code made only a very minor difference when I ran my "fast" version.

    Now, in my environment everything is local - no network traffic issues, and no competition from other users.
    It would be interesting to see the times for running your huge A/R report directly on the monster SQL Server and also on a monster PC running NAV retrieving data from your SQL Server.