Hello Experts,
I am running Navision 5.0. Over the past few months it has been running very slowly.
I am going to set up a new configuration of the database server to make it run quickly again.
Database Info:
Database Used (KB): 38,810,624 (47%)
Database Size (KB): 81,960,960
With a licence for 30 users.
Server Specs:
Processor: Two Intel Xeon 2.5GHz quad-core processors
RAM: 8 GB
Disks: 8 physical disks x 176GB
Two disks with RAID1+0 For OS+SQL
Two disks with RAID1+0 For LOG
Four disks with RAID1+0 For DATA
Windows Server 2003 R2 Enterprise Edition 64-bit
SQL Server 2005 Standard Edition 64-bit
We access the server directly, and the remote site connects via Terminal Server running on another machine.
Will this new configuration resolve at least part of the problem?
Thanks in advance
Comments
But it also depends on how you configure it. You need to align the disks correctly (partition offset = a multiple of 64KB, NTFS allocation unit size = 64KB, stripe size (for RAID10) = 64KB).
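On Windows Server 2003 you can verify those numbers from the command line; a sketch of that kind of check (the drive letter is just an example):

```
rem Show each partition's starting offset - it should be a multiple of 65536 (64KB)
wmic partition get Index, Name, StartingOffset

rem Show the NTFS allocation unit ("Bytes Per Cluster") of the data volume - ideally 65536
fsutil fsinfo ntfsinfo D:
```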
Between your SQL server and your terminal server you should ideally have a 1Gbit connection.
Be sure you don't have fragmentation on your disks.
Be sure your transaction log file doesn't have internal fragmentation (too many virtual log files).
Also a lot of other things can be done.
I suppose you are using 5.0 without SP1. In that case you need to do some index/SIFT tuning.
Search the forum for more details. There is a lot you can find.
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
:thumbsup:
We have 8GB of memory installed on the server, and when I open Task Manager it shows that almost all of it is in use; in particular, 7.4GB is used by the SQL Server process. Is that normal?
Would extending the memory help?
Thanks for the help
It is normal that SQL Server uses most of the memory. Actually, this SHOULD be the case on a dedicated server (and a server SHOULD be dedicated to SQL Server).
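If you want to make sure the OS keeps some headroom, you can cap SQL Server's memory; a T-SQL sketch (the 6656MB value is just an example for an 8GB dedicated box, adjust to taste):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- leave roughly 1.5GB for the OS on an 8GB dedicated server
EXEC sp_configure 'max server memory (MB)', 6656;
RECONFIGURE;
```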
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
When I create a new database I fix the size of the log file at 2.5GB, set the recovery model to "Simple" and
define a daily SQL maintenance plan (full backup, reorganise + rebuild indexes, update statistics).
Is this configuration better, or is the Full recovery model with backups of the TL file better???
Well, first of all I suggest that you search MIBUSO for "SQL Performance" - as this is a permanently discussed issue you'll find plenty of advice and recommendations!
As Kriki said, the platform seems sufficient, but this also depends on the transaction volume you process. With NAV 5.0 (which version exactly?) you can be sure it is the application itself which causes most (all?) of the trouble ...
Regarding the "Recovery Model": with SIMPLE, the log is truncated as soon as a transaction is committed, hence the log is just used temporarily. This should actually keep it quite small. But you are waiving the possibility of creating log backups.
With FULL recovery, all transaction info remains in the log until a log backup is created; thus you HAVE to run log backups to keep its size reasonable. The big advantage is that with log backups you can restore the database up to the latest transactions, hence your potential data loss in the worst case is minimal.
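For illustration, switching to FULL recovery and running log backups looks roughly like this (the database name and backup paths are placeholders):

```sql
ALTER DATABASE [NavisionDB] SET RECOVERY FULL;

-- nightly full backup ...
BACKUP DATABASE [NavisionDB] TO DISK = N'L:\Backup\NavisionDB_full.bak';

-- ... plus frequent log backups (e.g. every 15-30 minutes)
BACKUP LOG [NavisionDB] TO DISK = N'L:\Backup\NavisionDB_log.trn';
```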
See "Books Online" about details.
Maybe this one could also help you.
Best regards,
Jörg
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
Oops, forgot that one ...
You could check with DBCC LOGINFO. The number of rows returned is the number of VLFs (virtual log files) - the "internal/logical fragments", so to speak. It should be somewhere between 16 and 64.
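A sketch of that check, and of one common way to reduce an excessive VLF count (database name, logical file name and target size are placeholders):

```sql
USE [NavisionDB];
DBCC LOGINFO;   -- one row per VLF; the row count is the number of VLFs

-- if there are far too many VLFs: shrink the log, then regrow it in one step
DBCC SHRINKFILE (2, 1);   -- file id 2 is usually the transaction log
ALTER DATABASE [NavisionDB]
    MODIFY FILE (NAME = NavisionDB_Log, SIZE = 2560MB);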
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
Thank you for the reply. About our activity: we are a medical products distributor, so there are a lot of transactions per day (on average 7,000 entries per day).
I am going to buy 8GB of memory as an extension; what is your advice about that?
Thanks in advance
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
Also, certain user interactions - filters, wildcards, etc. - will generate queries which result in pretty high I/O ...
To make a long story short: the more RAM you have, the more data can be cached and the less physical disk access is necessary. Also, plenty of RAM makes the system more "forgiving" of bad queries ...
RAM doesn't cost a fortune anymore, so if you could afford it I'd go for 16GB+ ... the more the better.
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
They only had about 20 concurrent users, which is probably why the quick fix worked.
http://mibuso.com/blogs/davidmachanick/