Performance Issues

ADARTIHS Member Posts: 78
Version: NAV 2009 SP1
Database: SQL 2005

Hi Everyone,

At present I am importing history for a company due to go live very soon. The performance of the system was fine until I imported a lot of historical transactions:

The database size is now 50GB

The server spec is
OS: Windows Server 2008 Std
Processor: Xeon E3113 @ 3.00 GHz (2 CPUs)
Memory: 4 GB

How can I improve the speed of the following:
1. Posting Transactions
2. Looking at Dimension Analysis views

I am not a SQL expert and your help would be useful.

Thanks

Comments

  • Alex_Chow Member Posts: 5,063
    50gb of historical transactions?!?! :shock:

    What tables are you importing?

    Nonetheless, you have to optimize your indexes so it can post faster. This means you need a NAV SQL guy to do some optimization.
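
    For a first look, something like this (SQL 2005 and later; the table name below is only an example, not from this thread) shows which big tables carry fragmented indexes, and the worst ones can then be rebuilt during a quiet period:

        -- Run this while connected to the NAV database.
        -- List fragmented indexes on the larger tables:
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
        JOIN sys.indexes i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.page_count > 1000
        ORDER BY ips.avg_fragmentation_in_percent DESC;

        -- Rebuild all indexes on a heavily fragmented table (table name is a placeholder;
        -- NAV prefixes SQL table names with the company name):
        ALTER INDEX ALL ON [CRONUS$G_L Entry] REBUILD;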

    You can also look at the optimization manual from Partnersource and learn to become a NAV SQL guy yourself.
  • Andwian Member Posts: 627
    Alex Chow wrote:
    50gb of historical transactions?!?!

    What tables are you importing?

    In line with Alex Chow, are you importing all the historical data, such as the Ledger Entries tables?
    Usually, we just import the end-of-period balances into the Journal and then post them, letting the system create the Ledger Entries by itself. I never import the detailed historical transactions, i.e. each shipment, etc.
    Regards,
    Andwian
  • stryk Member Posts: 645
    Well, I suggest searching MIBUSO for "NAV SQL Performance" and you'll find a gazillion pieces of advice & recommendations.
    There are many reasons for degraded performance, thus, there are plenty of possible solutions, too.

    For a start, I also dare to mention this:
    http://dynamicsuser.net/blogs/stryk/default.aspx
    http://msdynamicsbooks.com/product_info.php?products_id=130&language=en&osCsid=c43beba9c2b0f50dffa2e1a65db49539

    And if you are in need of a NAV/SQL guy, well, I guess I could recommend one :wink:
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • gvolkov Member Posts: 196
    Usually after you run a lot of transactions in SQL - in your case, imports - the transaction logs become very large. There are many ways to optimize your SQL Server, and a full treatment would fall far beyond the scope of this forum. One thing I can tell you without knowing more about your setup is to place your transaction logs, tempdb, and the database itself on separate volumes. Also, you may want to split the transaction logs for better performance.
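
    For example, something along these lines could be used (logical file names and paths below are placeholders; tempdb only picks up the new location after a service restart, and a user database has to be taken offline while its physical file is moved):

        -- Check the current logical names and locations first:
        --   SELECT name, physical_name FROM sys.master_files;

        -- Move tempdb to its own volume (effective after the next SQL Server restart):
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');

        -- Point the NAV database log at a dedicated volume, then take the database
        -- offline, move the .ldf file, and bring it back online:
        ALTER DATABASE [NAVDB] MODIFY FILE (NAME = NAVDB_log, FILENAME = 'L:\logs\NAVDB_log.ldf');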
    Microsoft Certified Technology Specialist
    Microsoft Certified Business Management Solutions Professional
    Microsoft Certified Business Management Solutions Specialist

    http://www.navisiontech.com
  • stryk Member Posts: 645
    gvolkov wrote:
    Also, you may want to split the transaction logs for better performance.

    Uhmm ... I disagree with that one. Splitting the log into multiple files is IMHO pointless, as the log is not accessed in parallel - it's purely sequential access, all LSNs in strict order, no way to use multiple files ...
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • gvolkov Member Posts: 196
    Splitting the logs on the RAID array gives better performance because the data is split equally between the media. You also need to format the RAID array properly to get maximum performance.
    Microsoft Certified Technology Specialist
    Microsoft Certified Business Management Solutions Professional
    Microsoft Certified Business Management Solutions Specialist

    http://www.navisiontech.com
  • bbrown Member Posts: 3,268
    gvolkov wrote:
    Splitting the logs on the RAID array gives better performance because the data is split equally between the media. You also need to format the RAID array properly to get maximum performance.

    Sorry, but I have to agree with Stryk on this one. The transaction log is a sequentially written file. There is never any random I/O to this file. Striping over multiple drives is only an advantage with random I/O. If you span your transaction log over multiple disks, the individual files are still written sequentially, one at a time. The system will write to file 1 until it is full. Then it will write to file 2 until it is full, and so on. Once all files are full it will return to file 1, expand it, write until it is full, move on to file 2, and so on.
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    The key point with the transaction log is to place it on a dedicated physical disk. That means it is the ONLY file on that disk. Also note that I said "physical disk", not volume or partition. Placing the file on its own disk ensures the drive heads keep their proper position for each write. If the disk is servicing other files, you see a performance impact because the heads must be repositioned for each write.
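
    A quick way to see whether the log is actually waiting on disk (SQL 2005 dynamic management views; interpret the averages for your own environment):

        -- Average write stall per log file; consistently high values suggest the
        -- log shares spindles with other I/O or the disk is simply too slow.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.physical_name,
               vfs.num_of_writes,
               vfs.io_stall_write_ms,
               vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
        JOIN sys.master_files mf
          ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
        WHERE mf.type_desc = 'LOG';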
    There are no bugs - only undocumented features.
  • gvolkov Member Posts: 196
    Gentlemen, it could be I am wrong...

    We do, however, have the transaction log split in two, and it is sitting on a dedicated 2-disk RAID 1.

    I promise I will post the reason we did that. Just be patient - it will be worth it 8) I know it was performance related, but can't remember why.
    Microsoft Certified Technology Specialist
    Microsoft Certified Business Management Solutions Professional
    Microsoft Certified Business Management Solutions Specialist

    http://www.navisiontech.com
  • bbrown Member Posts: 3,268
    Is each file sitting on its own RAID 1 disk (that's how it should be)? Or are they both on the same RAID 1?
    There are no bugs - only undocumented features.
  • gvolkov Member Posts: 196
    Okay, both files are on the same RAID array. The first file has a fixed size without autogrowth, to avoid log fragmentation. The second file has autogrow enabled, so if it becomes fragmented it can be easily fixed (drop the file and recreate it).
    Microsoft Certified Technology Specialist
    Microsoft Certified Business Management Solutions Professional
    Microsoft Certified Business Management Solutions Specialist

    http://www.navisiontech.com
  • bbrown Member Posts: 3,268
    I'm curious to hear the "performance" reasoning behind this design. Honestly, I can't think of any. In fact, I see many things here that have potential to hurt performance.

    BTW - a transaction log should not be auto-growing. Auto-grow is only a safety net. If your log is auto-growing on a regular basis, it is not sized properly. This also applies to data files. Auto-grow is another "feature" that can have a serious impact on system performance.
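
    To check how the files are currently sized and how they grow, and to pre-size the log once instead of letting it grow in small steps, something like the following can be used (the logical file name and the size are examples only):

        -- Current size and growth settings of the files in the NAV database:
        SELECT name,
               physical_name,
               size * 8 / 1024 AS size_mb,
               CASE WHEN is_percent_growth = 1
                    THEN CAST(growth AS varchar(10)) + ' %'
                    ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
               END AS growth_setting
        FROM sys.database_files;

        -- Pre-size the log in one step:
        ALTER DATABASE [NAVDB] MODIFY FILE (NAME = NAVDB_log, SIZE = 8000MB);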
    There are no bugs - only undocumented features.
  • IsakssonMi Member Posts: 77
    When querying a database with 50 GB of data, I guess increasing the RAM should make a difference. 4 GB seems a little low for your server. I assume you are running x64. Most SQL queries from NAV create lots of sub-queries that take a lot of memory, and in your case a lot of disk swapping may occur.
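
    If memory pressure is suspected, it is also worth checking that SQL Server leaves some RAM for the operating system (the 3072 MB below is an example value, not a recommendation):

        -- Cap SQL Server's memory so the OS keeps roughly 1 GB for itself:
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 3072;
        RECONFIGURE;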
  • kriki Member, Moderator Posts: 9,110
    bbrown wrote:
    I'm curious to hear the "performance" reasoning behind this design. Honestly, I can't think of any. In fact, I see many things here that have potential to hurt performance.

    BTW - A transaction log should not be auto-growing. Auto-grow is only a safety net. If your log is auto-growing on a regular bases, it is not sized properly. This also applies to data files. Auto-grow is another "feature" that can have a serious impact on system performance.

    There is a good reason for performance here.
    The first transaction log is created and sized so that it should be able to contain all changes without problems. This file has a fixed size, should not be physically fragmented and should not contain too many VLFs. This guarantees the best performance on this file.

    BUT it has a fixed size. So if something unexpected happens, the system would be blocked.

    For this reason we create an extra log file that is very small but has autogrowth enabled (it is called an overflow log). So when the primary log file is full, this second log file can grow as needed. Afterwards the DBA has to check WHY it grew, fix that problem and shrink (or drop and recreate) the overflow log again.
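
    A minimal sketch of that setup (logical names, sizes and paths are examples only):

        -- Fix the size of the primary log file and disable auto-grow on it:
        ALTER DATABASE [NAVDB] MODIFY FILE (NAME = NAVDB_log, SIZE = 10000MB, FILEGROWTH = 0);

        -- Add a small "overflow" log file that is allowed to grow if the primary ever fills up:
        ALTER DATABASE [NAVDB]
        ADD LOG FILE (NAME = NAVDB_log_overflow,
                      FILENAME = 'L:\logs\NAVDB_log_overflow.ldf',
                      SIZE = 100MB,
                      FILEGROWTH = 500MB);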
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • bbrown Member Posts: 3,268
    Interesting.
    There are no bugs - only undocumented features.
  • ADARTIHS Member Posts: 78
    Thank you all, I have a lot of solutions to look at. I will keep this post up to date regarding which of your solutions had the most impact on speed.

    Thanks

    ADARTIHS
  • bbrown Member Posts: 3,268
    Good luck with your investigation. Keep in mind that performance issues are rarely the result of a single problem. They are typically the cumulative result of many small issues. You may find that many (or all) of the issues mentioned here are applicable, along with possibly others not mentioned.
    There are no bugs - only undocumented features.
  • jlandeen Member Posts: 524
    One thing that I think has been overlooked here, in the quest to cover the technical performance impacts, is WHY this much historical data is being brought into the core NAV database to begin with.

    Are 50 GB worth of historical information really required in the current NAV database, which will be used to push through real-time transactions? I would look at alternative ways of meeting the customer/user/client's requirements WITHOUT having to place that much data in the current NAV database. Can a separate reporting/historical database be created?
    Jeff Landeen - Sr. Consultant
    Epimatic Corp.

    http://www.epimatic.com
  • bbrown Member Posts: 3,268
    What's the difference if the 50 GB is loaded on day 1 versus accumulated over several months or a couple of years? If the system can't handle it now, what expectation is there that it will handle it in the future? 50 GB is not a big database.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,305
    I think there is WAY too much focus on database size. SQL Server won't even break a sweat until database size hits well over 1 or 2 TB. Most performance issues in NAV stem from transaction volume and concurrency. Well, that and of course the very root cause: the communication between the NAV client and SQL Server is EXTREMELY inefficient.
  • jlandeen Member Posts: 524
    I don't think SQL has a problem with data sizes, but NAV databases can be negatively affected by data size. Think of all the list forms and lookups that pull in lots of data and aren't necessarily optimized.

    I do agree that transaction volume and concurrent transactions can play a huge part in performance, probably more so than database size. However, I think when you have the option of bringing in additional data (such as historical transactions or other older data) it's at least a good idea to ask some questions and check whether or not the data is truly needed in the main NAV database.

    If the data really is necessary in NAV, then the client/NSC should make sure that there is adequate time and budget set aside to ensure that any necessary performance tuning or application optimizations are made, so that the additional data does not negatively impact performance (which I think a lot of the posts here cover already).
    Jeff Landeen - Sr. Consultant
    Epimatic Corp.

    http://www.epimatic.com
  • bbrown Member Posts: 3,268
    It seems every day I turn around, someone is talking about running data compression on a 20 GB database because "it will improve performance". My response is always "what will you do when you collect the next 20 GB?". If your NAV environment can't handle a 50 GB database, your problems go deeper than the fact that the client is asking to load it. How long will it be before the DB has that much data anyway?

    One of the best performing NAV systems I work with also happens to be one of the larger databases that I work with.

    300 GB used
    130 million G/L Entries
    70 million Value Entries
    400,000 Prod. Order Lines
    1.5 million Component lines
    etc...
    And that's in less than 4 years.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,305
    jlandeen wrote:
    I don't think SQL has a problem with data sizes, but NAV databases can be negatively affected by data size. Think of all the list forms and lookups that pull in lots of data and aren't necessarily optimized.
    That has very little to do with the size of the database itself or how much history is in there. Properly designed forms (using the right keys, the right filters, the right table placement, and the right number of flowfields) are extremely responsive, even with hundreds of millions of records.
    jlandeen wrote:
    I do agree that transaction volume and concurrent transactions can play a huge part in performance, probably more so than database size. However, I think when you have the option of bringing in additional data (such as historical transactions or other older data) it's at least a good idea to ask some questions and check whether or not the data is truly needed in the main NAV database.
    I'm going to have to disagree with you there, Jeff, because it makes very little difference whether you have 2 or 12 years of history in the ledgers in how well your system performs. Having 20 GB of historical data or 200 GB makes no significant difference in transaction speed.

    I know of a company that just upgraded, and they decided to leave behind a significant amount of history for performance reasons. Their old database was almost 200 GB (about 7 years of history), the new one a little more than 15 GB (about 6 months of compressed history), and when they went live they had performance issues in the same functional areas as before. Nothing to do with the amount of history, everything to do with transaction volume and concurrency.

    What I am trying to say is that you can have a very small database that performs terribly, and a very large database with years and years of history that performs very well. Database size is simply overrated as a cause of performance problems.

    Let me say it this way.... if you have performance problems on SQL Server, and the first thing your partner says is "we need to get rid of all this data" or "that is because you have so much history in there".... that is a clear indication that they don't really know.
  • jlandeen Member Posts: 524
    Clearly there are cases where the amount of data in the system matters. This whole thread was started because a site ran into problems when a large volume of historical data was loaded into their database.

    In general I think it's a good idea to investigate the root cause of performance issues (index problems, disk configuration, server resource problems, poor application coding, etc.) and try to solve that. That wasn't my point - if a client or an NSC is going to spend time and energy analyzing how a system performs, they should also analyze the amount of data being put into that system and whether or not it's necessary.
    Jeff Landeen - Sr. Consultant
    Epimatic Corp.

    http://www.epimatic.com
  • bbrown Member Posts: 3,268
    What I'm saying is that whether the 50 GB of data is loaded on day 1 or accumulated over time is irrelevant. Eventually it will be there and the system must be able to handle it. I do agree that the NSC (and client) need to plan for this. One of the biggest reasons that projects go bad is a lack of understanding of scope. Sometimes on both sides.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,305
    edited 2010-08-12
    jlandeen wrote:
    Clearly there are cases where the amount of data in the system matters. This whole thread was started because a site ran into problems when a large volume of historical data was loaded into their database.
    My point was that it is probably NOT the 50 GB of data that is the root cause of the performance issues. Simply having 50 GB of data in your system is NOT going to cause performance problems.

    Now, ACCESSING that 50 GB might be problematic, but the mere presence of that data is just NOT the cause of performance problems. Getting rid of the data might take away a symptom, but it certainly will NOT address the actual performance problem.
  • bbrown Member Posts: 3,268
    DenSter wrote:
    jlandeen wrote:
    Clearly there are cases where the amount of data in the system matters. This whole thread was started because a site ran into problems when a large volume of historical data was loaded into their database.
    My point was that it is probably NOT the 50 GB of data that is the root cause of the performance issues.

    That too!
    There are no bugs - only undocumented features.
  • ADARTIHS Member Posts: 78
    Hi Guys,

    Thanks for your help, but I went direct to MS and they actually told me about one simple change which made all the difference:

    Table 7312 Warehouse Entry: amend the 7th key and enable both MaintainSQLIndex & MaintainSIFTIndex.

    This improved the performance of posting stock invoices by about 800%.

    WOW, what a difference. I then looked at the large tables and removed unwanted keys.
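
    For anyone doing the same exercise, a query along these lines shows which indexes on a big table are written to but never read (the table name is only an example; NAV prefixes SQL table names with the company name):

        -- Indexes that are maintained (user_updates) but never used for reads are
        -- candidates for disabling the key / MaintainSQLIndex in NAV.
        SELECT i.name AS index_name,
               us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
        FROM sys.indexes i
        LEFT JOIN sys.dm_db_index_usage_stats us
          ON us.object_id = i.object_id
         AND us.index_id  = i.index_id
         AND us.database_id = DB_ID()
        WHERE i.object_id = OBJECT_ID('[CRONUS$Warehouse Entry]')
        ORDER BY us.user_updates DESC;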

    Thanks for all your help and recommendations.

    Regards

    ADARTIHS