NAS causes SQL Log file to explode in size

rsaritzky Member Posts: 469
Hi,

I have a NAS process that calls a function that is stored in a report object. The report object reads through open purchase order lines, summarizes them by G/L account and date, and writes the results to a summary table. The summary table is used in an external Excel query.
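In rough outline, the function does something like this (the "PO Summary" table, its fields, and the filters are simplified stand-ins, not my actual objects):

    // Sketch only - "PO Summary" and its fields are illustrative
    PROCEDURE BuildSummary();
    VAR
      PurchLine : Record "Purchase Line";
      POSummary : Record "PO Summary";
    BEGIN
      POSummary.DELETEALL;   // rebuild the summary from scratch each run

      PurchLine.SETRANGE("Document Type",PurchLine."Document Type"::Order);
      PurchLine.SETRANGE(Type,PurchLine.Type::"G/L Account");
      IF PurchLine.FINDSET THEN
        REPEAT
          // assume the primary key of "PO Summary" is G/L Account No. + Date
          IF POSummary.GET(PurchLine."No.",PurchLine."Expected Receipt Date") THEN BEGIN
            POSummary.Amount := POSummary.Amount + PurchLine.Amount;
            POSummary.MODIFY;
          END ELSE BEGIN
            POSummary.INIT;
            POSummary."G/L Account No." := PurchLine."No.";
            POSummary.Date := PurchLine."Expected Receipt Date";
            POSummary.Amount := PurchLine.Amount;
            POSummary.INSERT;
          END;
        UNTIL PurchLine.NEXT = 0;
    END;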

The functions are stored in a report object because the report can be run "real-time" to provide a printed summary, in addition to storing the results in the summary table.

No problems occur when the report is run "real-time". However, the NAS process, which is timer-triggered to run twice a day, caused our SQL transaction log file to balloon from approx. 300 MB to more than 50 GB, maxing out the disk drive's capacity.

The NAS report is called by a "traditional" timer-loop codeunit that has been in place for several years (the same loop calls several other functions on a timed basis). No server event log entries were recorded indicating a problem, and there is nothing in the SQL Server log either.
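For context, the loop follows the usual NAS pattern, roughly like this ('Navision Timer 1.0' is the real automation, but the schedule check and object names below are placeholders, and the event signatures vary a bit between versions):

    // Single-instance dispatcher codeunit - sketch only
    VAR
      Timer : Automation 'Navision Timer 1.0'.Timer WITHEVENTS;

    OnRun()
    BEGIN
      CREATE(Timer);
      Timer.Interval := 60000;   // fire once a minute (placeholder value)
      Timer.Enabled := TRUE;
    END;

    Timer::Timer(Milliseconds : Integer)
    VAR
      SummaryRep : Report "PO Summary";   // the report holding the function
    BEGIN
      IF TimeForSummaryRun THEN BEGIN     // hypothetical twice-a-day check
        SummaryRep.BuildSummary;          // the function stored in the report
        CLEAR(SummaryRep);
        COMMIT;
      END;
    END;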

Has anyone experienced this issue, or does anyone know of a problem with calling a function stored in a report object? All the other functions called from NAS are stored in codeunit objects. The reason I did not do that here is that the code for producing the printed report is 100% identical, and I didn't want to maintain two copies.

Thanks

Ron

Comments

  • vaprog Member Posts: 1,139
    I doubt the fact that the function resides in a report object has anything to do with it. But you can check by copying it to a codeunit and seeing what happens when you call the copy in the codeunit.

    I'd use the Client Monitor or SQL tracing to see what happens SQL-wise.

    What other tables, besides the summary table, are you writing to?
    Can you sum up the values in a temporary summary table before writing them to the database? (Though I don't think changes to that table alone are responsible for the excessive growth of the transaction log.)
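    For the buffering idea, something like this, with made-up names (a Record variable with the Temporary property set to Yes lives in memory on the NAS side, so only the final write pass touches SQL):

        // TempPOSummary : Record "PO Summary", Temporary = Yes
        IF PurchLine.FINDSET THEN
          REPEAT
            IF TempPOSummary.GET(PurchLine."No.",PurchLine."Expected Receipt Date") THEN BEGIN
              TempPOSummary.Amount := TempPOSummary.Amount + PurchLine.Amount;
              TempPOSummary.MODIFY;   // in-memory only, no SQL statement
            END ELSE BEGIN
              TempPOSummary.INIT;
              TempPOSummary."G/L Account No." := PurchLine."No.";
              TempPOSummary.Date := PurchLine."Expected Receipt Date";
              TempPOSummary.Amount := PurchLine.Amount;
              TempPOSummary.INSERT;   // in-memory only
            END;
          UNTIL PurchLine.NEXT = 0;

        // one pass of real INSERTs at the end
        IF TempPOSummary.FINDSET THEN
          REPEAT
            POSummary := TempPOSummary;
            POSummary.INSERT;
          UNTIL TempPOSummary.NEXT = 0;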
  • rsaritzky Member Posts: 469
    vaprog wrote:
    I doubt the fact that the function resides in a report object has anything to do with it. But you can check by copying it to a codeunit and seeing what happens when you call the copy in the codeunit.

    I'd use the Client Monitor or SQL tracing to see what happens SQL-wise.

    What other tables, besides the summary table, are you writing to?
    Can you sum up the values in a temporary summary table before writing them to the database? (Though I don't think changes to that table alone are responsible for the excessive growth of the transaction log.)

    All good questions/comments. What's the current feeling on setting AUTO UPDATE STATISTICS to OFF vs. ON, and would setting it to ON create additional transaction log entries?

    The process is pretty simple: it goes through posted purchase lines and checks whether each line has been billed to the customer (the cross-reference is built into the sales lines). If not, it writes a subset of the fields from the purchase line table to a custom table: Doc. No., Vendor No., Doc. Date, G/L Account, and Amount, plus aging information.

    I'm also going to do exactly what you proposed and move the functionality from the report to a codeunit, then watch it via SQL trace.
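    Roughly like this (object names made up), so the logic still lives in exactly one place for both the printed report and NAS:

        // Codeunit "PO Summary Mgt." - the new home of the shared logic
        PROCEDURE BuildSummary();
        BEGIN
          // the existing summarization loop moves here unchanged
        END;

        // Report OnPreReport trigger - the printed version keeps working:
        POSummaryMgt.BuildSummary;

        // NAS timer loop - calls the same single copy:
        POSummaryMgt.BuildSummary;
        COMMIT;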

    Thanks for the ideas.

    Ron