Version: NAV 2009 SP1
Database : SQL 2005
Hi Everyone,
At present I am importing history for a company due to go live very soon. The performance of the system was fine until I imported a lot of historical transactions:
The database size is now 50 GB
The server spec is:
OS: Windows Server 2008 Std
Processor: Xeon E3113 @ 3.00 GHz (2 CPUs)
Memory: 4 GB
How can I improve the speed of the following:
1. Posting Transactions
2. Looking at Dimension Analysis views
I am not a SQL expert, so any help would be appreciated.
Thanks
Comments
What tables are you importing?
Nonetheless, you have to optimize your indexes so it can post faster. This means you need a NAV SQL guy to do some optimization.
You can also look at the optimization manual from Partnersource and learn to become a NAV SQL guy yourself.
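As a starting point, something like the sketch below (against the sys.dm_db_index_usage_stats DMV, available since SQL 2005) can highlight indexes that are maintained on every write but hardly ever read - those NAV keys are the usual first candidates. This is only a rough heuristic, and the DMV counters reset at every SQL Server restart, so run it after the system has been busy for a while:

    -- Indexes with far more write maintenance than read use.
    -- Run in the NAV database; treat the results as candidates to
    -- review in the NAV key list, not as an automatic drop list.
    SELECT OBJECT_NAME(s.[object_id]) AS table_name,
           i.name AS index_name,
           s.user_seeks + s.user_scans + s.user_lookups AS reads,
           s.user_updates AS writes
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
      AND s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups)
    ORDER BY s.user_updates DESC;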
AP Commerce, Inc. = where I work
Getting Started with Dynamics NAV 2013 Application Development = my book
Implementing Microsoft Dynamics NAV - 3rd Edition = my 2nd book
In line with Alex Chow, are you importing all of the historical data, such as the Ledger Entry tables?
Usually, we just import the end-of-period balances into a journal and then post it, letting NAV create the Ledger Entries by itself. I never import the detailed historical transactions, i.e. each shipment, etc.
Andwian
There are many reasons for degraded performance, and thus there are plenty of possible solutions, too.
For a start, I also dare to mention this:
http://dynamicsuser.net/blogs/stryk/default.aspx
http://msdynamicsbooks.com/product_info.php?products_id=130&language=en&osCsid=c43beba9c2b0f50dffa2e1a65db49539
And if you are in need of a NAV/SQL guy, well, I guess I could recommend one
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
Microsoft Certified Business Management Solutions Professional
Microsoft Certified Business Management Solutions Specialist
http://www.navisiontech.com
Uhmm ... I disagree with that one. Splitting the log into multiple files is IMHO pointless, as the log is not accessed in parallel - it's purely sequential access, all LSNs in strict order, no way to use multiple files ...
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
Microsoft Certified Business Management Solutions Professional
Microsoft Certified Business Management Solutions Specialist
http://www.navisiontech.com
Sorry, but I have to agree with Stryk on this one. The transaction log is a sequentially written file. There is never any random I/O to this file. Striping over multiple drives is only an advantage with random I/O. If you span your transaction log over multiple disks, the individual files are still written sequentially, one at a time. The system will write to file 1 until it is full. Then it will write to file 2 until it is full, and so on. Once all files are full, it will return to file 1 and expand it, write until it is full, move on to file 2, and so on.
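If you want to see that sequential behaviour for yourself, SQL Server lets you inspect both the physical log files and the virtual log files (VLFs) inside them. A quick sketch, run in the NAV database (DBCC LOGINFO is an undocumented but long-standing command):

    -- Physical log files for the current database (size is in 8 KB pages)
    SELECT name, physical_name, size * 8 / 1024 AS size_mb, growth
    FROM sys.database_files
    WHERE type_desc = 'LOG';

    -- One row per VLF; Status = 2 marks the active ones. Watching this
    -- under load shows the log filling strictly in sequence.
    DBCC LOGINFO;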
We do, however, have the transaction log split in two, and it is sitting on a dedicated 2-disk RAID 1.
I promise I will post the reason we did that. Just be patient - it will be worth it 8) I know it was performance related, but can't remember why.
Microsoft Certified Business Management Solutions Professional
Microsoft Certified Business Management Solutions Specialist
http://www.navisiontech.com
BTW - A transaction log should not be auto-growing. Auto-grow is only a safety net. If your log is auto-growing on a regular basis, it is not sized properly. This also applies to data files. Auto-grow is another "feature" that can have a serious impact on system performance.
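To put that into practice: size the log once, up front, and leave a fixed growth increment only as the emergency fallback. A minimal sketch - 'MyNavDb' and 'MyNavDb_Log' are placeholder names, and the sizes are illustrative, not a recommendation:

    -- Pre-size the log so it never grows during normal posting;
    -- FILEGROWTH then only fires as the safety net.
    ALTER DATABASE MyNavDb
    MODIFY FILE (NAME = MyNavDb_Log, SIZE = 8GB, FILEGROWTH = 512MB);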
There is a good performance reason here.
The first transaction log is created and sized correctly so that it can contain all changes without problems. This file is fixed in size, should not be physically fragmented, and should not have too many VLFs in it. This guarantees the best performance on this file.
BUT it is a fixed size. So if something unexpected happens, the system would be blocked.
For this reason we create an extra log file that is very small but has auto-growth on it (it is called an overflow log). So when the primary log file is full, this second log file can grow as needed. Afterwards the DBA has to check WHY it grew, fix that problem, and shrink (or drop and recreate) the overflow log again.
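In T-SQL the overflow log could look something like this (database name, file path and sizes are placeholders):

    -- The primary log stays fixed-size; this tiny second file only
    -- grows in an emergency and is shrunk or dropped again afterwards.
    ALTER DATABASE MyNavDb
    ADD LOG FILE (
        NAME = MyNavDb_OverflowLog,
        FILENAME = 'L:\SQLLogs\MyNavDb_OverflowLog.ldf',
        SIZE = 64MB,
        FILEGROWTH = 256MB
    );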
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
Thanks
ADARTIHS
Is 50 GB worth of historical information really required in the current NAV database, which will be used to push through real-time transactions? I would look at alternative ways of meeting the customer/user/client's requirements WITHOUT having to place that much data in the current NAV database. Can a separate reporting/historical database be created?
Epimatic Corp.
http://www.epimatic.com
RIS Plus, LLC
I do agree that transaction volume and concurrent transactions can play a huge part in performance, probably more so than database size. However, I think when you have the option of bringing in additional data (such as historical transactions, or other older data), it's at least a good idea to ask some questions and check whether or not the data is truly needed in the main NAV database.
If the data really is necessary in NAV, then the client/NSC should make sure that there is adequate time and budget set aside to ensure that any necessary performance tuning or application optimizations are made, so that the additional data does not negatively impact performance (which I think a lot of the posts here cover already).
Epimatic Corp.
http://www.epimatic.com
One of the best performing NAV systems I work with also happens to be one of the larger databases that I work with.
300 GB used
130 Million GL Entries
70 Million Value Entries
400,000 Prod Order Lines
1.5 million Component lines
etc...
And that's in less than 4 years.
I'm going to have to disagree with you there, Jeff, because it makes very little difference whether you have 2 or 12 years of history in the ledgers in how well your system performs. Having 20 GB of historical data or 200 GB makes no significant difference in transaction speed.
I know of a company that just upgraded, and they decided to leave behind significant amount of history for performance reasons. Their old database was almost 200 GB (about 7 years of history), the new one a little more than 15GB (about 6 months of compressed history), and when they went live they had performance issues in the same functional areas as before. Nothing to do with the amount of history, everything to do with transaction volume and concurrency.
What I am trying to say is that you can have a very small database that performs terribly, and a very large database with years and years of history that performs very well. Database size is simply overrated as a cause of performance problems.
Let me say it this way.... if you have performance problems on SQL Server, and the first thing your partner says is "we need to get rid of all this data" or "that is because you have so much history in there".... that is a clear indication that they don't really know.
RIS Plus, LLC
In general I think it's a good idea to investigate the root cause of performance issues (index problems, disk configuration, server resource problems, poor application coding, etc.) and try to solve that. That wasn't my point - if a client or an NSC is going to spend time and energy analyzing how a system performs, they should also analyze the amount of data being put into that system and whether or not it's necessary.
Epimatic Corp.
http://www.epimatic.com
Now, ACCESSING that 50 GB might be problematic, but the mere presence of that data is just NOT the cause of performance problems. Getting rid of the data might take away a symptom, but it certainly will NOT address the actual performance problem.
RIS Plus, LLC
That too!
Thanks for your help, but I went direct to MS and they did actually tell me about one simple change which made all the difference:
In Table 7312 Warehouse Entry, amend the 7th key: enable both MaintainSQLIndex and MaintainSIFTIndex.
This improved the performance of posting stock invoices by about 800%.
WOW what a difference. I then looked at the large tables and removed unwanted keys.
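For anyone wanting to double-check a change like this on the SQL side, you can list the indexes NAV actually created for the table. A minimal sketch - 'CRONUS' is a placeholder company prefix, substitute your own company name:

    -- List the SQL indexes behind the NAV keys for Warehouse Entry.
    SELECT i.name AS index_name, i.type_desc
    FROM sys.indexes AS i
    JOIN sys.tables AS t ON t.[object_id] = i.[object_id]
    WHERE t.name = 'CRONUS$Warehouse Entry'
    ORDER BY i.index_id;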
Thanks for all your help and recommendations.
Regards
ADARTIHS