Hi Everyone - over the past week I have seen the performance of our Navision system diminish to the point where it is sometimes unusable. I have done a lot of digging, testing, and performance tuning in an attempt to solve these issues, with no success. I'm getting frustrated, and of course so is the user base. Here is some setup and background on what I've tried/noticed.
SETUP
- Navision 4.0 SP2
- SQL Server 2000 Std running on Windows 2003 Std. R2
- Database and Log files on a FC SAN on separate LUNs
- Database Files - RAID 1+0 on 8 discs
- Log Files - RAID 1+0 on 4 discs
- No recent hardware changes, code changes, system changes etc.
- Approx. 30 concurrent users
==================
1. Database & Log files have Plenty of Space (auto-growth is not an issue)
2. Physical Drives have Plenty of Space
3. Have run a FULL Navision based Optimization
4. Have updated all Index Statistics (via SQL)
5. AntiVirus and Monitoring (Argent) have been disabled (just to rule them out)
6. CHECKDB shows no issues with consistency
7. SAN is showing no issues - no degradation or disc failures (no SMART warnings)
Reads seem to be the problem. Inserting a dummy GL Account happens at normal speed. However, deleting the GL Account takes upwards of 5-10 minutes. Read queues on the data drive go through the ROOF.
At first glance I assumed it was an Index issue - however after rebuilding all of them - I'm not so sure.
Any help would be appreciated!
Comments
It sounds to me like you don't have enough RAM.
Independent Consultant/Developer
blog: https://dynamicsuser.net/nav/b/ara3n
Open the Client Monitor; under Options, check 'Show SQL statement' and 'Show execution plan and SQL index'. Start the Client Monitor and delete another account. When it's done, stop the Client Monitor. Now you can check the individual SQL statements and their durations. I don't know if you still have access to a partner license, but with one you could use the enhanced Client Monitor and export the results to Excel. If I were to guess, you will find that the execution plan shows clustered index scans where you really want index seeks. Those scans cause the system to browse the entire table rather than retrieving only the relevant record sets. As a result, you get countless unnecessary reads, which will make your storage system go through the roof.
In the OnDelete trigger of the G/L Account, there is a ton of code that checks various tables. Make sure that this code uses proper (i.e. existing and enabled) sort order. Find keys that match the filter criteria, and if they don't exist, create them.
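The pattern to look for is roughly this (a hedged sketch, not the actual base code - the table and key named here are only illustrative):

```
// If the record is filtered on "Table Name" and "No.", then a key whose
// leading fields are "Table Name","No." lets SQL Server do an index seek.
// SETCURRENTKEY must name an existing, enabled key on the table.
CommentLine.SETCURRENTKEY("Table Name","No.");
CommentLine.SETRANGE("Table Name",CommentLine."Table Name"::"G/L Account");
CommentLine.SETRANGE("No.","No.");
CommentLine.DELETEALL;
```

Without a key that matches the filter fields (and a corresponding index on the SQL side), the same filters can turn into a clustered index scan.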
RIS Plus, LLC
If they are a partner, then why would they no longer have access? And if they are not a partner, why would they ever have had access?
DB is about 30GB - 8GB RAM on the box.
Thanks for the info. I'll give that a shot. What seems strange to me is that this happened right out of nowhere - performance has always been speedy throughout the system - then one day... boom.
I'll post results. Thx for the help.
I did as you recommended and ran the Client Monitor - and it did indeed show a lot of DELETEALL statements using a clustered index scan rather than a seek. Is this because (as you mentioned) the OnDelete() code is not using the proper key - or any key - to find the data, and so falls back to the default clustered index?
See the attached images - they show the times on a few of the statements.
In checking the OnDelete() code in the GL Account table, I see the following:
MoveEntries.MoveGLEntries(Rec);
GLBudgetEntry.SETCURRENTKEY("Budget Name","G/L Account No.");
GLBudgetEntry.SETRANGE("G/L Account No.","No.");
GLBudgetEntry.DELETEALL(TRUE);
CommentLine.SETRANGE("Table Name",CommentLine."Table Name"::"G/L Account");
CommentLine.SETRANGE("No.","No.");
CommentLine.DELETEALL;
ExtTextHeader.SETRANGE("Table Name",ExtTextHeader."Table Name"::"G/L Account");
ExtTextHeader.SETRANGE("No.","No.");
ExtTextHeader.DELETEALL(TRUE);
AnalysisViewEntry.SETRANGE("G/L Account No.","No.");
AnalysisViewEntry.DELETEALL;
AnalysisViewBudgetEntry.SETRANGE("G/L Account No.","No.");
AnalysisViewBudgetEntry.DELETEALL;
DimMgt.DeleteDefaultDim(DATABASE::"G/L Account","No.");
For most of these there is NO key set at all. For the Analysis View Entry table, there is only one key, which is the clustered index. It looks like an additional key should be added to that table, just for "G/L Account No.", and the code then altered to use it. Then mimic this logic for the other bad offenders.
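Concretely, what I have in mind is something like this (just a sketch - it assumes a new secondary key on "G/L Account No." has been added and enabled on the Analysis View Entry table in the table designer):

```
// Assumed new secondary key: "G/L Account No." on Analysis View Entry.
// With it, the DELETEALL can seek on the account number instead of
// scanning the whole clustered index.
AnalysisViewEntry.SETCURRENTKEY("G/L Account No.");
AnalysisViewEntry.SETRANGE("G/L Account No.","No.");
AnalysisViewEntry.DELETEALL;
```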
Still seems strange that this would just start happening out of the blue. Maybe it hit its tipping point?
Let me know if this is the correct approach. Many thanks!
I would definitely try creating a key on "G/L Account No." and doing a SETCURRENTKEY in the OnDelete trigger; that should make a difference.
I don't think it would make a difference, because SQL would still need to do a clustered index scan to determine whether the filtered set is empty. Come to think of it, if it DOES find something, you have to scan the table more than once, so I would not go there.
If you want to use those 8GB as DB cache, you need everything (hardware, Windows, SQL) on 64-bit. Best also upgrade SQL to SQL 2008 (R2), and NAV to NAV 2009 SP1.
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
No recent upgrades, still on the 4.0 SP2 executables.
I made the changes to the OnDelete() trigger and it sped up the queries dramatically. Now it's time to move forward on other areas - however, I wouldn't be surprised if much of the slow-down was centered around the Analysis Views. I have removed the Update on Post flag from them for now. I'm sure there is plenty of cleanup to be done in those tables anyway. Appreciate the help.
Yes, that's true - however, I've never seen a memory issue on the machine. The plan is to at least upgrade the executables to 5.1 and move the DB to SQL 2005 or 2008 - a full upgrade is not an option at the moment.
And are you sure you don't have memory issues?
Slow reads can be a hint that SQL has to read too many data from the disk instead of having it in the DBcache.
I don't think it's memory, as I can be the only one in the system - memory is flat-lined - yet delete a GL account and the read queues go through the roof. (However, introducing the additional keys has fixed that section.)
Can I run a 4.0 SP2 database with a NAV2009 client (classic)?
I'm trying to think of other areas of the system that may need to be optimized when querying Analysis Views. As update on post is now disabled - are there additional areas I should check?
But you must test everything first. It is possible that some things will need to be changed (though that happens rarely).