Poor Navision Read Performance

markodrummer Member Posts: 22
Hi Everyone - over the past week I have seen the performance of our Navision system diminish to the point where it is sometimes unusable. I have done a lot of digging, testing and performance tuning in an attempt to solve these issues, with no success. I'm getting frustrated, and of course so is the user base. Here is the setup and some background on what I've tried/noticed.

SETUP
- Navision 4.0 SP2
- SQL Server 2000 Std running on Windows 2003 Std. R2
- Database and Log files on a FC SAN on separate LUNs
- Database Files - RAID 1+0 on 8 disks
- Log Files - RAID 1+0 on 4 disks
- No recent hardware changes, code changes, system changes etc.
- Approx. 30 concurrent users

==================

1. Database & Log files have Plenty of Space (auto-growth is not an issue)
2. Physical Drives have Plenty of Space
3. Have run a FULL Navision based Optimization
4. Have updated all Index Statistics (via SQL)
5. AntiVirus and Monitoring (Argent) have been disabled (just to rule them out)
6. CHECKDB shows no issues with consistency
7. SAN is showing no issues - no degradation or disk failures (no SMART warnings)

Reads seem to be the problem. Inserting a dummy GL Account runs at normal speed. However, deleting the GL Account takes upwards of 5-10 minutes. Read queues on the data drive go through the ROOF.

At first glance I assumed it was an index issue - however, after rebuilding all of them, I'm not so sure.

Any help would be appreciated!

Comments

  • ara3n Member Posts: 9,256
    What is the size of the DB, and how much RAM is on the SQL box?

    It sounds to me like you don't have enough RAM.
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
  • DenSter Member Posts: 8,305
    Simply rebuilding indexes is not going to give you any magical performance improvement. The queries have to be done in such a way that they actually use the RIGHT indexes. NAV 4.0 SP2 (what version of the executables are you using?) was not a very well performing version from a SQL Server point of view.

    Open the Client Monitor, and under Options check 'show sql statement' and 'show execution plan and sql index'. Start the Client Monitor and delete another account. When it's done, stop the Client Monitor. Now you can check the individual SQL statements and their durations. I don't know if you still have access to a partner license, but that way you could use the enhanced Client Monitor and export it to Excel. If I were to guess, you will find that the execution plan shows clustered index scans where you really want index seeks. Those scans cause the system to browse the entire table rather than only retrieving the relevant record sets. As a result, you have countless unnecessary reads, which will cause your storage system to go through the roof.

    In the OnDelete trigger of the G/L Account table, there is a ton of code that checks various tables. Make sure that this code uses a proper (i.e. existing and enabled) sort order. Find keys that match the filter criteria, and if they don't exist, create them - see the sketch below.
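
    To illustrate the pattern (a rough sketch only - using the Analysis View Entry table from that trigger, and assuming a secondary key on "G/L Account No." has been created and enabled on it):

    // Without a matching key, the filter on "G/L Account No." has no
    // supporting index, so SQL Server falls back to a clustered index scan.
    AnalysisViewEntry.SETRANGE("G/L Account No.","No.");
    AnalysisViewEntry.DELETEALL;

    // Assuming a key on "G/L Account No." exists and is enabled,
    // selecting it lets SQL Server resolve the same filter with a seek.
    AnalysisViewEntry.SETCURRENTKEY("G/L Account No.");
    AnalysisViewEntry.SETRANGE("G/L Account No.","No.");
    AnalysisViewEntry.DELETEALL;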
  • David_Singleton Member Posts: 5,479
    DenSter wrote:
    I don't know if you still have access to a partner license but that way you could use the enhanced client monitor ...

    :?: If they are a partner, then why would they no longer have access? And if they are not a partner, why would they ever have had access?

    :-k
    David Singleton
  • markodrummer Member Posts: 22
    ara3n wrote:
    What is the size of the DB, and how much RAM is on the SQL box?

    It sounds to me like you don't have enough RAM.

    DB is about 30GB - 8GB RAM on the box.
    DenSter wrote:
    Simply rebuilding indexes is not going to give you any magical performance improvement. The queries have to be done in such a way that they actually use the RIGHT indexes. NAV 4.0 SP2 (what version of the executables are you using?) was not a very well performing version from a SQL Server point of view.

    Open the Client Monitor, and under Options check 'show sql statement' and 'show execution plan and sql index'. Start the Client Monitor and delete another account. When it's done, stop the Client Monitor. Now you can check the individual SQL statements and their durations. I don't know if you still have access to a partner license, but that way you could use the enhanced Client Monitor and export it to Excel. If I were to guess, you will find that the execution plan shows clustered index scans where you really want index seeks. Those scans cause the system to browse the entire table rather than only retrieving the relevant record sets. As a result, you have countless unnecessary reads, which will cause your storage system to go through the roof.

    In the OnDelete trigger of the G/L Account, there is a ton of code that checks various tables. Make sure that this code uses proper (i.e. existing and enabled) sort order. Find keys that match the filter criteria, and if they don't exist, create them.

    Thanks for the info. I'll give that a shot. What seems strange to me is that this happened right out of nowhere - performance has always been speedy throughout the system - then one day... boom.

    I'll post results. Thx for the help. ](*,)
  • markodrummer Member Posts: 22
    Denster - I think you may have it.

    I did as you recommended and ran the Client Monitor - and it did indeed show a lot of DELETEALL statements using a clustered index scan rather than a seek. Is this because (as you may have mentioned) the OnDelete() code is not using the proper key - or any key - to find the data, and is falling back to the default clustered index?

    See the attached images - they show the durations of a few of the statements.



    In checking the OnDelete() code in the GL Account table, I see the following:


    MoveEntries.MoveGLEntries(Rec);

    GLBudgetEntry.SETCURRENTKEY("Budget Name","G/L Account No.");
    GLBudgetEntry.SETRANGE("G/L Account No.","No.");
    GLBudgetEntry.DELETEALL(TRUE);

    CommentLine.SETRANGE("Table Name",CommentLine."Table Name"::"G/L Account");
    CommentLine.SETRANGE("No.","No.");
    CommentLine.DELETEALL;

    ExtTextHeader.SETRANGE("Table Name",ExtTextHeader."Table Name"::"G/L Account");
    ExtTextHeader.SETRANGE("No.","No.");
    ExtTextHeader.DELETEALL(TRUE);

    AnalysisViewEntry.SETRANGE("G/L Account No.","No.");
    AnalysisViewEntry.DELETEALL;

    AnalysisViewBudgetEntry.SETRANGE("G/L Account No.","No.");
    AnalysisViewBudgetEntry.DELETEALL;

    DimMgt.DeleteDefaultDim(DATABASE::"G/L Account","No.");


    For most of these, there is NO key set at all. For the Analysis View Entry table there is only one key, which is the clustered index. It looks like an additional key, just for "G/L Account No.", should be added to the table, and the code then altered to use it - and then mimic this logic for the other bad offenders.
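
    A rough sketch of what I mean (assuming the new key on "G/L Account No." is created and enabled on both Analysis View tables - not tested yet):

    // Assumes a new secondary key on "G/L Account No." on this table;
    // selecting it lets the filtered DELETEALL do an index seek
    // instead of scanning the clustered index.
    AnalysisViewEntry.SETCURRENTKEY("G/L Account No.");
    AnalysisViewEntry.SETRANGE("G/L Account No.","No.");
    AnalysisViewEntry.DELETEALL;

    // Same idea for the budget entries.
    AnalysisViewBudgetEntry.SETCURRENTKEY("G/L Account No.");
    AnalysisViewBudgetEntry.SETRANGE("G/L Account No.","No.");
    AnalysisViewBudgetEntry.DELETEALL;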

    It still seems strange that this would just start happening out of the blue. Maybe it hit a tipping point?

    Let me know if this is the correct approach. Many thanks!
  • bbrown Member Posts: 3,268
    You might also implement an ISEMPTY call to determine if there are any records to be deleted.
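
    Something along these lines (a sketch only, reusing the Analysis View Entry variable from the trigger above):

    // ISEMPTY checks whether the filtered set contains any records,
    // so the delete can be skipped when there is nothing to remove.
    AnalysisViewEntry.SETRANGE("G/L Account No.","No.");
    IF NOT AnalysisViewEntry.ISEMPTY THEN
      AnalysisViewEntry.DELETEALL;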
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,305
    Maybe you recently did an exe upgrade? Are you still on 4.0 SP2 executables?

    I would definitely try creating a key on "G/L Account No." and doing a SETCURRENTKEY in the OnDelete trigger; that should make a difference.
    bbrown wrote:
    You might also implement an ISEMPTY call to determine if there are any records to be deleted.
    I don't think it would make a difference, because it would still need to do a clustered index scan to determine whether the filtered set is empty. Come to think of it, if it DOES find something, you end up scanning the table more than once, so I would not go there.
  • kriki Member, Moderator Posts: 9,110
    DB is about 30GB - 8GB RAM on the box.
    Those 8GB don't do any good. Your SQL 2000 Std is only using 1.7GB of DB cache (I assume you are on 32-bit).

    If you want to use those 8GB as DB cache, you need everything (hardware, Windows, SQL) on 64-bit. And best also upgrade SQL to SQL2008 (R2), and NAV to NAV2009SP1.
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • markodrummer Member Posts: 22
    DenSter wrote:
    Maybe you recently did an exe upgrade? Are you still on 4.0 SP2 executables?
    I would definitely try creating a key on "G/L Account No." and doing a SETCURRENTKEY in the OnDelete trigger; that should make a difference.

    No recent upgrades, still on the 4.0 SP2 executables.

    I made the changes to the OnDelete() trigger and it sped up the queries dramatically. Now it's time to move forward on other areas - however, I wouldn't be surprised if much of the slow-down was centered on the Analysis Views. I have removed the Update on Post flag from them for now. I'm sure there is plenty of cleanup to be done in those tables anyway. Appreciate the help.
    kriki wrote:
    DB is about 30GB - 8GB RAM on the box.
    Those 8GB don't do any good. Your SQL 2000 Std is only using 1.7GB of DB cache (I assume you are on 32-bit).

    If you want to use those 8GB as DB cache, you need everything (hardware, Windows, SQL) on 64-bit. And best also upgrade SQL to SQL2008 (R2), and NAV to NAV2009SP1.

    Yes, that's true - however, I've never seen a memory issue on the machine. The plan is to at least upgrade the executables to 5.1 and move the DB to SQL 2005 or 2008 - a full upgrade is not an option at the moment.
  • kriki Member, Moderator Posts: 9,110
    I would go for SQL2008 (preferably 64-bit, if the OS is already 64-bit) and NAV2009SP1!

    And are you sure you don't have memory issues?
    Slow reads can be a hint that SQL has to read too much data from the disk instead of having it in the DB cache.
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • markodrummer Member Posts: 22
    kriki wrote:
    I would go for SQL2008 (preferably 64-bit, if the OS is already 64-bit) and NAV2009SP1!

    And are you sure you don't have memory issues?
    Slow reads can be a hint that SQL has to read too much data from the disk instead of having it in the DB cache.

    I don't think it's memory, as I can be the only one in the system - memory is flat-lined - and deleting a GL account still sends the read queues through the roof (however, introducing the additional keys has fixed that section).

    Can I run a 4.0 SP2 database with a NAV2009 client (classic)?
  • DenSter Member Posts: 8,305
    I wouldn't be surprised if much of the slow-down was centered on the Analysis Views. I have removed the Update on Post flag from them for now
    Yes, that is a system hog for sure, and should never be checked. I don't know whether that has much to do with system speed while deleting G/L Accounts, though.
  • markodrummer Member Posts: 22
    DenSter wrote:
    Yes that is a system hog for sure, and should never be checked. I don't know whether that has much to do with system speed while deleting G/L Accounts though.
    Yes - I'm going to leave them off. I believe they were off... someone must have turned them on. (Of course, I don't have auditing on the Analysis View table.)

    I'm trying to think of other areas of the system that may need to be optimized when querying Analysis Views. Now that Update on Post is disabled, are there additional areas I should check?
  • kriki Member, Moderator Posts: 9,110
    Can I run a 4.0 SP2 database with a NAV2009 client (classic)?
    Yes.
    But you must test everything first. It is possible some things will need to be changed (but that happens rarely).
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!

