Go to last record in G/L entries form takes forever

herrlutz Member Posts: 58
Hello...

We have just technically upgraded our system from 3.70 to NAV 2009 on SQL Server 2008, but we are still using the 3.70B database version.
We managed to solve the main performance issues by setting optimized keys and using FINDSET etc. instead of FIND('-'), and we are generally quite satisfied.
But we are struggling with the following phenomenon:
When we drill down on "Balance" in G/L Account (Chart of Accounts), the G/L Entries form opens. As in the Navision standard, the selected key "G/L Account No.","Posting Date" is active. If the account has a lot of entries (for instance an inventory account) and we simply click "Go to last record" in the menu bar, it takes nearly 5 minutes until Navision has executed it. In the meantime the client is locked.

The Client Monitor gives the following info. If I understand it correctly, for whatever reason SQL Server did not use the index "G/L Account No.","Posting Date" but the primary key, which is the clustered index (a way to double-check this outside NAV is sketched at the end of this post):

SELECT * FROM "TEST USA 29_03_10$G_L Entry" WHERE (("G_L Account No_"='12001')) ORDER BY "G_L Account No_" DESC,"Posting Date" DESC,"Entry No_" DESC OPTION (OPTIMIZE FOR UNKNOWN)
Parallelism[2,1];Sort[3,2];Clustered Index Scan(TEST USA 29_03_10$G_L Entry$0)[4,3]
Entry No.
+
Entry No.='11815020'

Elapsed time 358000

Just to mention it: I was the only one on the database, and no other activity was running.

Does anyone have an idea that might help us?
Happy Easter!
Alex
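
For reference, a minimal T-SQL sketch of how the plan choice can be compared by hand in Management Studio. TOP 1 only approximates "go to last record" against the descending sort, and the secondary index name ("...$1") is an assumption - list the actual names with sp_helpindex first.

SET STATISTICS IO ON;

-- The statement roughly as issued by NAV (the optimizer chose a clustered index scan here):
SELECT TOP 1 *
FROM "TEST USA 29_03_10$G_L Entry"
WHERE "G_L Account No_" = '12001'
ORDER BY "G_L Account No_" DESC, "Posting Date" DESC, "Entry No_" DESC
OPTION (OPTIMIZE FOR UNKNOWN);

-- The same statement forcing the "G/L Account No.","Posting Date" index
-- (hypothetical index name - check with: EXEC sp_helpindex '[TEST USA 29_03_10$G_L Entry]'):
SELECT TOP 1 *
FROM "TEST USA 29_03_10$G_L Entry" WITH (INDEX ([TEST USA 29_03_10$G_L Entry$1]))
WHERE "G_L Account No_" = '12001'
ORDER BY "G_L Account No_" DESC, "Posting Date" DESC, "Entry No_" DESC
OPTION (OPTIMIZE FOR UNKNOWN);

Comparing the logical reads and elapsed time of the two variants shows whether the scan is a plan-choice problem or whether the secondary index itself is in poor shape.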

Comments

  • ara3n Member Posts: 9,256
    I'm guessing it's related to either not rebuilding indexes or not updating statistics on SQL Server (see the maintenance sketch after this post).
    It could also be related to the executable build number you are using, which would require an exe upgrade to 2009 SP1.
    You can also disable parameter sniffing; read this thread:
    http://dynamicsuser.net/forums/p/31905/ ... spx#167319
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
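
A minimal maintenance sketch along the lines suggested above, assuming the table name from the query in the original post. ALTER INDEX ... REBUILD without the ONLINE option is an offline operation, so it belongs in a maintenance window:

-- Rebuild all indexes on the G/L Entry table; this also refreshes their statistics with a full scan
ALTER INDEX ALL ON [TEST USA 29_03_10$G_L Entry] REBUILD;

-- Refresh the remaining column statistics with a full scan as well
UPDATE STATISTICS [TEST USA 29_03_10$G_L Entry] WITH FULLSCAN;

Whether the nightly statistics job samples or scans fully can matter for heavily skewed columns such as "G_L Account No_".
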
  • herrlutz Member Posts: 58
    Thank you, Rashed.
    We already use NAV 2009 SP1, Build 29626. Is there a newer build available?

    The statistics update runs each night; once a week we do an index rebuild.
  • ara3n Member Posts: 9,256
    Could you provide your hardware spec, DB size and number of G/L entries?
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
  • herrlutz Member Posts: 58
    Rashed,

    the DB size is 73 GB,
    there are 11,874,746 records.

    We run an ESX server connected to a SAN via iSCSI, with 4 CPUs (if you think it'll help, I'll clarify the exact config on Monday).
    SQL Server has 6 GB RAM reserved.
  • ara3n Member Posts: 9,256
    6 GB of RAM is way too low. I suggest increasing that to 32 GB. I'm assuming it's a 64-bit OS. (A sketch for raising SQL Server's memory cap once the RAM is in follows this post.)
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
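
If more RAM is added, the instance's memory cap has to be raised accordingly. A sketch, assuming the current 6 GB reservation was set via 'max server memory'; the 28672 MB value is only an example for a 32 GB box, leaving headroom for the OS:

EXEC sp_configure 'show advanced options', 1;   -- needed to see/change advanced options
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';          -- show the current setting
EXEC sp_configure 'max server memory (MB)', 28672;   -- e.g. 28 GB on a 32 GB box
RECONFIGURE;
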
  • herrlutz Member Posts: 58
    I've also read the recommendation for more RAM, but based on our monitoring we think 6 GB is OK.
    Our SQL monitors show that Navision frequently uses only 40-60% of those 6 GB throughout business hours,
    and the other performance monitors we use show that system performance is OK in general, and we have little locking.
    Also, I ran the test described above when nothing else was running against that database.

    So why would more RAM be an improvement? (One way to check memory pressure directly is sketched below.)

    regards
    Alex
  • herrlutz Member Posts: 58
    ... and yes, it's Windows 2008, 64-bit OS.
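
One way to judge memory pressure more directly than the 40-60% usage figure, assuming access to the instance from Management Studio, is to look at the Buffer Manager counters. A persistently low 'Page life expectancy' means pages are being flushed from the cache quickly, regardless of how the allocated RAM looks from the outside:

-- Buffer cache indicators for the instance
SELECT RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Page reads/sec');
-- 'Page life expectancy' is in seconds; values that stay very low under load
-- indicate the buffer cache is being churned and more RAM (or fewer scans) would help.
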
  • David_Singleton Member Posts: 5,479
    You should upgrade to Build 29766 to resolve the dynamic cursors issue.

    http://dynamicsuser.net/blogs/waldo/arc ... 9-sp1.aspx

    Oh, and as Rashed says, you need more RAM.

    What is your disk configuration?
    David Singleton
  • stryk Member Posts: 645
    Hi Alex,

    Well, I agree that 6 GB is far too little RAM, but that only worsens the symptom. The general problem with T17 "G/L Entry" (and other "Entry" tables) is that the clustered index is based on the PK "Entry No.". But the major criterion for querying data from the table is "G/L Account No." - as in your query example. With the original CI, the affected records may be distributed over the whole table at many different physical positions. SQL Server will not "seek" each single record, as this would cause even higher I/O, so in many (most?) cases it starts scanning instead.

    For "G/L Entry" a better CI might fix the problem (so far I've got a 95% succes rate here):
    Take this NAV "Key" "G/L Account No.", "Posting Date" ("SQL Index" should be blank or "G/L Account No.", "Posting Date", "Entry No.") and flag this one as "Clustered" (unticking the originial CI).
    Caution: this is definitely an offline operation and might cause huge load on the server!

    This new CI suits most of the queries on T17 better, resulting in better execution plans, fewer reads and - maybe - somewhat fewer blocks.

    Should work, I'm curious to get your results.
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
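
The change itself is made in the NAV table designer (key properties on T17), as described above - not directly in SQL, otherwise NAV's metadata no longer matches. Purely as an illustration, roughly what ends up on the server, assuming the primary key is the usual plain unique clustered index named "...$0" (if it is implemented as a constraint, it would have to be dropped with ALTER TABLE ... DROP CONSTRAINT instead); the new index name is made up:

-- WARNING: rebuilds the whole 11-million-row table plus all secondary indexes.
-- Offline, heavy I/O and log usage - test database and maintenance window only.

DROP INDEX [TEST USA 29_03_10$G_L Entry$0]
    ON [TEST USA 29_03_10$G_L Entry];                        -- old clustered index on "Entry No."

CREATE CLUSTERED INDEX [TEST USA 29_03_10$G_L Entry$NewCI]   -- hypothetical name
    ON [TEST USA 29_03_10$G_L Entry] ([G_L Account No_], [Posting Date], [Entry No_]);

CREATE UNIQUE NONCLUSTERED INDEX [TEST USA 29_03_10$G_L Entry$0]
    ON [TEST USA 29_03_10$G_L Entry] ([Entry No_]);          -- primary key back as non-clustered
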
  • herrlutz Member Posts: 58
    Dear all, thank you very much for your hints.
    I will first change the CI on our test DB, as Jörg suggested.
    I will also try to get Build 29766 from our partner.

    As far as I know, our disks are in a fast SAN with RAID 50 - I will ask my colleagues on Tuesday (after the Easter holiday) for more info.

    One thing I don't understand in general:
    Everyone recommends more RAM for our SQL Server. But as I wrote above, SQL Server currently does not seem to use even close to the 6 GB it has.
    So what are the technical arguments for investing in more RAM anyway? What would it improve under these conditions?

    regards
    Alex
  • David_Singleton Member Posts: 5,479
    We had this discussion some time ago with a client. We told them to add more RAM, but they wanted a business case to justify it. So we analyzed their RAM usage, compared lots of things, ran tests giving SQL Server access to different amounts of RAM, etc. It took quite some time. In the end they spent MORE money paying us to justify purchasing RAM than the RAM actually cost. :whistle:


    Just ... add ... more ... RAM ... it's cheap.
    David Singleton
  • stryk Member Posts: 645
    Plenty of RAM is absolutely crucial for SQL Server. Client processes get their data from the cache only; if the data is not present in the cache, SQL Server has to physically read it from disk first. PHYSICAL DISK OPERATIONS ARE DRAMATICALLY SLOW!
    Hence: the more RAM, the less physical I/O, the faster the processing.

    Whether RAM is sufficient or not depends on the size of data and indexes; the larger the indexes, the more RAM is required.

    An example (simplified and somewhat incorrect):
    If SQL Server decides to perform a "Clustered Index Scan" (as in your case), this means it partially or completely reads the whole table.
    Assuming a complete scan - what's the size of the "Clustered Index" of T17? Well, you could measure it (sketched after this post), but simplified:
    11 million records x 100 bytes (assumed record size, I cannot look it up yet) = 1,100,000,000 bytes, ca. 1 GB (or about 135,000 "Pages"/"Reads").
    Hence, just this single query could consume 1 GB of the roughly 4 GB net RAM (with 6 GB installed, Windows would take about 2 GB, leaving about 4 GB for SQL Server) - that's 25%.

    Hence, having enough memory will also make the system more "forgiving" towards such killer queries.
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
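
As mentioned above, the index size can simply be measured instead of estimated. A small sketch: sp_spaceused gives the totals per table, and the DMV query breaks them down per index (8 KB per page):

EXEC sp_spaceused '[TEST USA 29_03_10$G_L Entry]';

-- Size and row count per index on the G/L Entry table
SELECT i.name                              AS index_name,
       SUM(ps.used_page_count)             AS used_pages,
       SUM(ps.used_page_count) * 8 / 1024  AS used_mb,
       SUM(ps.row_count)                   AS row_cnt
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.object_id = OBJECT_ID('[dbo].[TEST USA 29_03_10$G_L Entry]')
GROUP BY i.index_id, i.name
ORDER BY i.index_id;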