I am working on a project where Navision/SQL seems slow when doing postings and when pulling some records. The server has two 3.2 GHz processors, 8 GB of RAM, a RAID 10 array and a RAID 1 array; the .mdf files are on the RAID 10 drive and the log files are on the RAID 1 mirrored drives. It is a Windows 2003 server. The server connects to a gigabit switch, and the 40 or so clients connect to a 100 Mb switch that has a 1 Gb uplink to the gigabit switch.

In any case, one SELECT statement shows 25,000 reads when I see it in Profiler, though not a lot of CPU time or anything. Should I work on this table with a clustered index, or clustered and non-clustered? The main problem is postings: originally batches of 2,000 were being done and this slowed everyone down significantly, so now the batches are being broken up into 250 at a time. The only slowdown I can think of here is disk I/O time, which should be optimized since the log files are being written to a separate RAID and the databases being written to are striped across 4 drives. The database size is about 12 GB; we actually have an 8 GB training database and a 12 GB live database.

Also, is there a way to set log files to grow to, say, 512 MB and then start overwriting, or do I just have to dump them and start a new one? I'm thinking I could configure a scheduled task to do this if I have to, but I prefer the idea of just overwriting the log file after it grows to a certain size.

Has anyone out there tried putting SP4 on the SQL server? I am about to try this but hate to try it on a live site without some testing or feedback first.

Would breaking the database up into a couple of .ndf files help, maybe 2 GB or so each, spread out on separate drives? I'm pretty sure the network and hardware are optimal.

One last question: does Navision make its own temporary stored procedures? I've seen some sp_exec 3x, xx, xx, xx. If I cut and paste this into Query Analyzer, it can't find the stored procedure. Any input or help is greatly appreciated.
Joey Caldwell A+, Network+, CCNA, MCSE 2k & 2k3, MCDBA
Comments
Navision:
Consider disabling the SQL maintenance of SumIndexes. Recalculating these values can add significant overhead on SQL. Turning off the ones used for reporting only can improve performance. I had a system a couple of years ago that was posting purchase orders of 400 - 500 lines. Turning off SumIndexes helped.
Hardware/OS:
SQL Service Pack 4:
There is a known issue with large memory servers. There is a patch available. See Microsoft Knowledge Base Article 899761. I have had SP4 running for a few months at 2 clients plus my development machine. I have not seen any issues.
Is the logfile on a dedicated drive? By dedicated I mean that no other files are on this drive.
Are you expanding the database/logfile manually or relying on autogrow? Autogrow can seriously impact performance if it occurs during peak system usage.
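As a sketch of growing the files manually during off-hours instead of relying on autogrow (database and logical file names here are hypothetical; assuming a SQL Server 2000 instance):

```sql
-- Pre-size the data and log files ahead of demand, during off-hours,
-- so autogrow never fires during peak posting. Sizes are examples.
ALTER DATABASE NavisionLive
MODIFY FILE (NAME = NavisionLive_Data, SIZE = 14000MB)

ALTER DATABASE NavisionLive
MODIFY FILE (NAME = NavisionLive_Log, SIZE = 1024MB)
```

Find the logical file names with `sp_helpfile` in the target database.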
Is SQL aware of the extra memory? (Enterprise?)
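On SQL Server 2000 Enterprise, memory above 4 GB is only used if AWE is enabled (and /PAE is set in boot.ini). A minimal sketch, assuming that edition; the memory cap is an example value:

```sql
-- Enable AWE so SQL Server 2000 Enterprise can use RAM above 4 GB.
-- Requires /PAE in boot.ini and a restart of the SQL Server service.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'max server memory', 6144  -- cap in MB, hypothetical
RECONFIGURE
```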
With the large transaction sizes, consider expanding the number of drives in the RAID 10 array. Along with writing new data, a posting involves a large amount of reads. Spreading these reads across more drives will help performance.
I do not recommend modifying the indexes outside of Navision.
There is not a way to set a growth limit on log files. The solution is to truncate the file more often.
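A sketch of the periodic truncation, which you could schedule as a SQL Server Agent job (database, file, and path names are hypothetical):

```sql
-- Backing up the log truncates its inactive portion; then shrink the
-- physical file back toward a target size (in MB).
BACKUP LOG NavisionLive TO DISK = 'E:\Backup\NavisionLive_log.bak'
DBCC SHRINKFILE (NavisionLive_Log, 512)
```

Scheduling this frequently keeps the log from ever growing far past the target.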
Consider moving the smaller datafile (*.mdf) to its own RAID 1 drive.
Navision does not use stored procedures. The Navision C/SIDE statements are converted to SQL by the Navision SQL client and executed.
Hope this is helpful.
Do not rely on autogrow. Watch and grow the files manually. Use autogrow only as a safety net.
Consider disabling SQL maintenance of unneeded indexes. Indexes used only for reporting do not need to be maintained in SQL.
Review the Navision code and use of indexes. Set a key before any database calls and set filters for all fields in key. Use the performance toolkit to assist you here.
Implement a periodic rebuilding of indexes.
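On SQL Server 2000 a rebuild might look like the following sketch; the table name is hypothetical (Navision table names include the company prefix), and this should run during off-hours:

```sql
-- Rebuild all indexes on one table with a 90% fill factor.
-- Passing '' as the index name rebuilds every index on the table.
DBCC DBREINDEX ('dbo.[MyCompany$G_L Entry]', '', 90)
```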
(I am headed out for the day. I will check this post when I return.)
Divide the HDDs into more separate spindles in RAID 1.
If you are not backing up transaction logs, do not use them; switch to the Simple recovery model.
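The switch itself is one statement (database name is hypothetical; note that Simple gives up point-in-time restore, so full backups become the only recovery path):

```sql
-- Switch to the Simple recovery model: the log is truncated at each
-- checkpoint instead of accumulating until a log backup.
ALTER DATABASE NavisionLive SET RECOVERY SIMPLE
```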
If you have CPUs with Hyper-Threading, disable HT (somewhere in the BIOS or elsewhere). It is not good for DB servers (it can cause deadlocking between threads of one application).
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
For the rest, others have given you good hints. I can only add one:
To keep SQL fast, you need to rebuild the SQL statistics at least once a week. You might do this together with the backup of the DB.
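A minimal sketch of the weekly refresh, which could run as a SQL Server Agent job alongside the backup (database name is hypothetical):

```sql
-- Refresh statistics for every table in the current database.
USE NavisionLive
EXEC sp_updatestats
```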
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
Disk system performance is directly tied to the number of physical drives and read/write heads available. Partitioning drives requires the system to share the available drive time across the file systems.
Yes, you should back up the transaction log before truncating it. The Simple recovery model does not disable the transaction log; it just truncates it on checkpoint. A large transaction can still result in a large log.
It supports a number of SQL Server tuning enhancements in Navision.
http://mibuso.com/blogs/davidmachanick/
Second: the way it's implemented using their native database is great and fast, if you can keep the number of users to a maximum of 50 and move all batch postings to non-working times.
Third: now they are using other DB engines like SQL Server. All modern DB engines are focused on client-server technology, but this is not how Navision works. Navision is still a fat client (where all code is handled), and the DB engine only processes the queries passed to it.
Take a look at the processor time used by the SQL server and you will see it's minimal. Looking at the amount of network packets, it will be many times the processing time.
While everyone is waiting for the version where the C/AL code is based on .NET technology I'm waiting for the version where the major posting units (cu 80 & 90) are moved to the (SQL) server as stored procedures. This would really move Navision to the top of ERP applications.
I like Navision and especially how workflow processes are handled. I guess the final optimization will be moving it to a true client-server application with all processes already implemented.
I know this is not going to give any answers, but it's something to remember: hardware won't solve all your problems.
http://home.casema.nl/mvandermeij