Hello experts. Trust me, I have searched, but I'm not sure that I've found (or understood) the answer I'm looking for.
I have Navision 4.0 SP3, running on 64-bit SQL Server 2005, with about 60 users. I am having problems (as most seem to) with records (or tables) being locked and causing user problems. Most often it is the Reservation Entry table, as this, I assume, is continually being written to. For example, every time a sales line is entered, a production order is released, and so on, it writes to the reservation table. This causes even more frustration than just poor performance - if you are entering a sales line and you get the deadlock error, pressing OK on the error returns you to the sales order with the line deleted, and you have to type it all in again, only to get another lock!
Initially I was convinced this was a setup / optimisation problem, as there must be a solution - otherwise the system is next to useless. I never had this issue with other ERP systems I have used. However, seeing all the posts on here about locks makes me wonder whether it is a system weakness.
I am an end-user, not a reseller / partner.
If anyone could give me any pointers I would really appreciate it.
Thank you!
Comments
In order to avoid deadlocks you need to do several things. On the customer's side you will need to optimize your system in order to keep transactions as short as possible. Then you'll need a developer to investigate the situation (analyze the code) and see if there are locking-order violations. Your problems can be solved, but it requires some work from both your IT department and your provider...
First of all, you should check the Lock Timeout Duration in Database Setup in Navision.
By default it is set to 10 seconds. Increasing this may help, as the system will wait longer before throwing the error message.
Also check the Always Rowlock property. It should be ticked for SQL Server, as otherwise the system tries to do a TABLELOCK.
Harjot
I am the IT department, but not a Navision developer. I have already set the lock timeout to 20s and enabled Always Rowlock. Maybe that's all I can do at my level.
Your partner should also do some SQL tuning on your DB. This includes disabling certain indexes/SIFT levels so the server has to write less info and thus speeds up a lot (this HAS to be done in NAV and NOT in SQL!).
Some questions:
-how big is your DB?
-How much memory has been assigned to SQL?
As you are in the IT department, you can do some things:
-Make sure your transaction log is on a dedicated, fast (15,000 RPM) drive (NO OTHER FILES SHOULD BE THERE!). If there are other things on it, move them to other drives. Your transaction log should be on a RAID1 or RAID10 and definitely NOT on a RAID5.
-Make sure the transaction log (TL) is not fragmented, and never shrink the file, to avoid fragmentation and the time lost when it has to grow again.
-Each night (if possible) rebuild the indexes. If this is not possible, then at least recalculate the statistics each night (see the sketch after this list).
-Set the fill factor of the server to 95%.
-Make sure that the server is NOT swapping memory to disk.
-Set the database properties (in SQL) Auto Create Statistics and Auto Update Statistics to FALSE (but only if you can regularly rebuild the indexes).
-Install the latest hotfix of NAV 4.0 SP3 (before doing this, test it in a test environment!).
-Install SQL 2005 SP2 with at least hotfix 4 (best to take the latest) (before doing this, test it in a test environment!).
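For reference, a minimal T-SQL sketch of the nightly maintenance described above. The database name [NavDB] and the table name are placeholders, not your actual NAV objects; adjust them to your own system and test on a copy first:

    USE [NavDB];  -- placeholder database name
    GO
    -- Rebuild all indexes on one table with a 95% fill factor (repeat per table, or drive it from sys.tables).
    ALTER INDEX ALL ON [dbo].[YourCompany$Sales Line] REBUILD WITH (FILLFACTOR = 95);
    GO
    -- On nights when a full rebuild is not possible, at least refresh the statistics database-wide.
    EXEC sp_updatestats;
    GO
    -- Only if the maintenance above really runs on schedule, switch off the automatic statistics options.
    ALTER DATABASE [NavDB] SET AUTO_CREATE_STATISTICS OFF;
    ALTER DATABASE [NavDB] SET AUTO_UPDATE_STATISTICS OFF;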
I have a SQL maintenance plan to optimize and rebuild indexes, and I run the Navision "Optimize Tables" function from time to time.
The DB is 10 GB.
The transaction log is on a pair of mirrored (RAID1) 15k drives; no other files
The database files are on a RAID10 15k volume
- how do I make sure the TL is not fragmented? Simply by defragging the volume with the database services stopped?
Statistics are recalculated and indexes rebuilt each Sunday. I could change to each night, but will it matter if users are on the system? We have some users on the night shift.
- what is the "fill factor" of the server and how do I set it?
The Server is not swapping memory
I can set the autocreate stats to FALSE pending the answer to above(!)
Latest hotfix for Nav is on.
Not applied SQL SP2 yet, but I will.
Many thanks for the help so far - appreciated.
=> With the Windows defrag tool you can check whether the TL file is fragmented. If it is, defragment it after you have stopped the DB services.
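As a complementary check (a suggestion beyond what was asked), the internal fragmentation of the log - the number of virtual log files it has been chopped into by repeated growth - can be inspected from T-SQL with the undocumented but widely used DBCC LOGINFO command; the database name is a placeholder:

    USE [NavDB];      -- placeholder database name
    DBCC LOGINFO;     -- returns one row per virtual log file (VLF); hundreds of rows usually mean the log grew in many small increments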
- Statistics are recalculated and indexes rebuilt each Sunday. I could change to each night, but will it matter if users are on the system? We have some users on the night shift.
=> You first have to check how long a rebuild takes. If you know you have a window of that length when no one works, you can run it then.
=> If not, you cannot launch the rebuild, because it blocks the tables. You might try an SQL index defrag (reorganize) instead, which does NOT block users, but it doesn't update the SQL statistics either. In that case, you should also update the SQL statistics each night in that window and do a full rebuild once a week (see the sketch below).
=> Depending on what you can do, you can disable the autostats or not.
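A minimal sketch of what such a nightly routine could look like in T-SQL, using placeholder names; REORGANIZE can run while users work, but the statistics then need a separate refresh:

    USE [NavDB];  -- placeholder database name
    GO
    -- Online-friendly defrag: does not block users, but does not touch statistics.
    ALTER INDEX ALL ON [dbo].[YourCompany$Sales Line] REORGANIZE;
    GO
    -- So update the statistics separately in the same window.
    UPDATE STATISTICS [dbo].[YourCompany$Sales Line] WITH FULLSCAN;
    GO
    -- Once a week, when a quiet window exists, do the full (blocking) rebuild instead:
    -- ALTER INDEX ALL ON [dbo].[YourCompany$Sales Line] REBUILD WITH (FILLFACTOR = 95);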
- what is the "fill factor" of the server and how do I set it?
The fill factor defines how full the index pages are (think of a telephone book). If a page is 100% full, reading is at its best performance, but inserting a record takes a lot of time, because you need to insert a page into the book and that takes a while. If a page is only 1% full, inserting new records is no problem at all, but reading will be very slow (like turning a page before you can see the next phone number). My experience is that 95% is better than 90% for a NAV DB (MS advises 90%, but the SQL default is 100% [which is the same as 0%]).
=> You can find it in the properties of the SQL server => Database Settings => Default index fill factor. You can change it without restarting the server. The rebuild will then use this value (unless it is overridden in the maintenance plan).
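The same server-wide default can also be changed from T-SQL instead of the SSMS dialog; this is just a sketch, and note that a fill factor can equally be specified per rebuild, as in the earlier examples:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'fill factor (%)', 95;   -- server-wide default used by index (re)builds that do not specify their own
    RECONFIGURE;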
All these performance improvements will also diminish the deadlock problems. If there are still deadlocks after this (and the index + SIFT tuning), there will be fewer of them, and one can start searching for them and solving them (in general, the searching is not easy and it is slow).
PS: a deadlock is NOT the same as a lock timeout! The messages you get are completely different.
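When you do get to the point of hunting the remaining deadlocks, one common approach on SQL Server 2005 is to have deadlock details written to the SQL error log via trace flags; a small sketch (run as sysadmin):

    -- 1222 writes detailed deadlock output to the error log (1204 is the older, terser format).
    DBCC TRACEON (1222, -1);    -- -1 enables the flag globally
    -- Turn it off again once enough deadlocks have been captured:
    -- DBCC TRACEOFF (1222, -1);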
If I have a system that generally performs well but still gets deadlocks, then I check the procedures involved to find the problem and fix it.
I'll try out the suggestions and report back.
Sincere thanks to everyone who responded. I have implemented all of the suggested changes and things are much improved. It's difficult to say exactly which enhancement made the biggest difference - with this machine being in a production environment I didn't really have time to mess about; I just took it off-line for a while and implemented the suggested changes.
With regard to "Always Rowlock": shouldn't SQL 2005 do that natively? i.e. the SQL server itself will try to row-lock unless told otherwise?
Anyway, I can have a bit of peace and quiet from the users now!
That is a HUGE mistake, and one that is going to come back and bite you in the future.
You do not (and did not) have a performance problem with your system. You had many small issues that needed to be resolved (not one big one). You have fixed some of them and probably also made some worse.
In addition, the biggest issue (the customized code) you probably have not even addressed.
10 GB for 80 users in manufacturing is a small database, so either you are just starting out and it's going to grow, or there are some major design issues with the code.
I fully understand the issue of "fix it now" but really you need to identify what was fixed and what was not, so that as the DB grows and the problems come back again you are able to solve them the next time.
Sorry to be the bearer of bad news when all seems to be going well, but do be prepared for the future.
I can't see how simply doing the optimisations is a *HUGE* mistake, given that they should be carried out anyway. Everyone agrees that regular rebuilding of indexes and stats is a good thing, and that's really all I've done. The only difference I've made to the actual database itself is changing the fill factor to 95%.
If running database optimisations is "going to make matters worse" I'd be grateful if you could elaborate.
How do you know the customised code is the biggest issue? I didn't even refer to any customised code.
Noooo, you completely missed the point.
The huge mistake is that you don't know what worked. I am sure you did all the right steps, BUT you must ALWAYS make small changes and measure their effect. Then document each individual change so you know what worked and what didn't.
In your case you should have made one change each night over a week, so that you know which change works. That way next time you have issues, you know where to start, instead of just guessing.
All I did was:
Set the fill factor to 95%
Re-indexed the database
Re-built the stats.
That's it. I couldn't really be more granular. Not only that, setting the fill factor to 95% would have no effect without doing the other two steps anyway.
If I had done things like moving my transaction logs to another drive, changing a load of codeunits, upgrading to SQL 2008 and so on, all at once, then I agree it would be impossible to say what the root cause and solution were, or even whether that 'solution' simply masked the problem for a time.
But all I did was use the normal optimise routines (which were on a maintenance plan that had stopped running because of the memory leak fixed in the roll-up).
Questions:
1) Is that growth of the transaction log to be expected?
2) Am I right in thinking NOT to shrink the TL?
3) Now it has grown to that size (of which 99% is now free following a backup), will it have sufficient space to run the update stats next time, or will it get bigger still?
Thanks
Of course, you can shrink the file to some size that will cover the needs of common DB usage. The free space in the file will be reused...
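If you do decide to shrink the log back to a sensible working size, a hedged T-SQL sketch; the logical file name and target size below are placeholders (find the real logical name with sp_helpfile):

    DBCC SQLPERF (LOGSPACE);              -- shows the size and % used of every database's transaction log
    USE [NavDB];                          -- placeholder database name
    DBCC SHRINKFILE (NavDB_Log, 2048);    -- shrink the log file to roughly 2 GB (2048 MB); pick a size that covers normal usage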