Hello...
We have just technically upgraded our system from 3.70 to NAV 2009 on SQL Server 2008, but we are still using the 3.70B database version.
We managed to solve the main performance issues by hinting optimized keys and using FINDSET etc. instead of FIND('-'), and are generally quite satisfied.
But we are struggling with the following phenomenon:
When we drill down on "Balance" in a G/L Account (Chart of Accounts), the G/L Entries form opens. Coming from the Navision standard, the key "G/L Account", "Posting Date" is selected. If we have an account with a lot of entries (e.g. the inventory account) and simply click "Go to last record" in the menu bar, it takes nearly 5 minutes until Navision completes it. In the meantime the client is locked.
The Client Monitor gives the following info. If I understand it correctly, for whatever reason SQL Server did not use the index "G/L Account", "Posting Date" but used the primary key, which is the clustered index.
SELECT * FROM "TEST USA 29_03_10$G_L Entry" WHERE (("G_L Account No_"='12001')) ORDER BY "G_L Account No_" DESC,"Posting Date" DESC,"Entry No_" DESC OPTION (OPTIMIZE FOR UNKNOWN)
Parallelism[2,1];Sort[3,2];Clustered Index Scan(TEST USA 29_03_10$G_L Entry$0)[4,3]
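To check what the secondary index would do instead, one option is to list the indexes on T17 and re-run the statement with an index hint, comparing the logical reads. This is a sketch; the index name "TEST USA 29_03_10$G_L Entry$1" is an assumption (NAV numbers its SQL indexes per key), so verify the real name via sys.indexes first:

```sql
-- List the indexes on T17 to find the actual name of the
-- "G/L Account No.","Posting Date" index (name below is assumed):
SELECT name, index_id, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('TEST USA 29_03_10$G_L Entry');

-- Re-run the query forcing the non-clustered index and compare the
-- reads reported by SET STATISTICS IO against the clustered scan:
SET STATISTICS IO ON;
SELECT *
FROM "TEST USA 29_03_10$G_L Entry" WITH (INDEX ("TEST USA 29_03_10$G_L Entry$1"))
WHERE "G_L Account No_" = '12001'
ORDER BY "G_L Account No_" DESC, "Posting Date" DESC, "Entry No_" DESC;
```

If the hinted plan is dramatically cheaper, the optimizer's choice (or its statistics) is the problem rather than the index itself.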
Entry No.
+
Entry No.='11815020'
Elapsed time 358000
Just to mention it: I was the only one on the database, and no other activities were running.
Does anyone have an idea that might help us?
Happy Easter!
Alex
Comments
It could also be related to the executable build number you are using, which would require an exe upgrade to 2009 SP1.
You can also disable parameter sniffing; read this thread:
http://dynamicsuser.net/forums/p/31905/ ... spx#167319
Independent Consultant/Developer
blog: https://dynamicsuser.net/nav/b/ara3n
We already use NAV 2009 SP1, Build 29626. Is there a newer build available?
The statistics update runs each night; once a week we do an index rebuild.
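If stale statistics on this table are still suspected despite the nightly job, they can be refreshed and checked manually. A sketch using standard SQL Server 2008 commands; a FULLSCAN on a table this size should be run off-hours:

```sql
-- Rebuild the statistics for T17 with a full scan (heavy on 11M+ rows):
UPDATE STATISTICS "TEST USA 29_03_10$G_L Entry" WITH FULLSCAN;

-- Verify when each statistics object on the table was last updated:
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('TEST USA 29_03_10$G_L Entry');
```

If last_updated predates the nightly job, the maintenance plan is silently skipping the table.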
The DB size is 73 GB.
There are 11,874,746 records.
We run an ESX server connected to a SAN via iSCSI, with 4 CPUs (if you think it'll help, I'll clarify the exact configuration on Monday).
SQL Server has 6 GB RAM reserved.
Our SQL monitors show that Navision frequently uses only 40-60% of that 6 GB throughout business hours,
and the other performance monitors we use show that system performance is OK in general, and we have fewer locks.
Also, I ran the test described above when nothing else was working on the database.
So why would more RAM be an improvement?
regards
Alex
http://dynamicsuser.net/blogs/waldo/arc ... 9-sp1.aspx
Oh, and as Rashed says, you need more RAM.
What is your disk configuration?
Well, I agree that 6 GB is far too little RAM, but that's just worsening the symptom. The general problem with T17 "G/L Entry" (and other "Entry" tables) is that the clustered index is based on the PK "Entry No.". But the major criterion for querying data from the table is "G/L Account No.", as in your query example. With the original CI, all affected records might be distributed over the whole table, at many different "physical positions". SQL Server will not "seek" each single record, as this would cause even higher I/O, so it starts "scanning" in many (most?) cases.
For "G/L Entry" a better CI might fix the problem (so far I've got a 95% success rate here):
Take the NAV "Key" "G/L Account No.", "Posting Date" (the "SQL Index" property should be blank or "G/L Account No.", "Posting Date", "Entry No.") and flag this one as "Clustered" (unticking the original CI).
Caution: this is definitely an offline operation and might cause a huge load on the server!
This new CI suits most of the queries on T17 better, yielding better execution plans, fewer "Reads" and, maybe, somewhat fewer blocks.
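Under the hood, the clustered-index swap that NAV performs when the "Clustered" flag is moved could look roughly like the T-SQL below. This is an illustration only, not a script to run by hand (let NAV maintain the keys so the metadata stays in sync); the constraint and index names are assumptions:

```sql
-- Illustration of the CI swap; names of the PK constraint and the new
-- clustered index are assumptions and will differ per installation.
BEGIN TRANSACTION;

-- 1. Drop the existing clustered primary key (NAV names the PK "<table>$0"):
ALTER TABLE "TEST USA 29_03_10$G_L Entry"
  DROP CONSTRAINT "TEST USA 29_03_10$G_L Entry$0";

-- 2. Cluster on the main query criteria; "Entry No." makes the key unique:
CREATE UNIQUE CLUSTERED INDEX "TEST USA 29_03_10$G_L Entry$CI"
  ON "TEST USA 29_03_10$G_L Entry"
     ("G_L Account No_", "Posting Date", "Entry No_");

-- 3. Re-add the primary key as a non-clustered constraint:
ALTER TABLE "TEST USA 29_03_10$G_L Entry"
  ADD CONSTRAINT "TEST USA 29_03_10$G_L Entry$0"
      PRIMARY KEY NONCLUSTERED ("Entry No_");

COMMIT TRANSACTION;
```

Step 2 is what physically rebuilds and re-sorts the whole 11M-row table, which is why this is an offline operation.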
Should work, I'm curious to get your results.
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
I will first change the CI on our test DB as Jörg suggested.
Also, I will try to get Build 29766 from our partner.
Our disks are, as far as I know, on a fast SAN with RAID 50; I will ask my colleagues for more info on Tuesday (after the Easter holiday).
One thing I don't understand in general:
Everyone recommends more RAM for our SQL Server. But as I wrote above, the SQL Server currently does not seem to use even close to the 6 GB it has.
So what are the technical arguments for investing in more RAM anyway? What would it improve under these conditions?
regards
Alex
Just ... add ... more ... RAM ... it's cheap.
Hence: the more RAM, the less physical I/O, the faster the processing.
Whether RAM is sufficient or not depends on the size of the data and indexes; the larger the indexes, the more RAM is required.
An example (simplified and somewhat incorrect):
If SQL Server decides to perform a "Clustered Index Scan" (as in your case), this means it partially or completely reads the whole table.
Assuming a complete scan: what's the size of the clustered index of T17? Well, you could measure it, but simplified:
11 million records x 100 bytes (assumed record size, I cannot look it up right now) = 1,100,000,000 bytes, ca. 1 GB (or about 135,000 "pages"/"reads").
Hence, just this single query could consume 1 GB of the 4 GB net RAM (with 6 GB available, Windows would allocate 2 GB, leaving 4 GB net for SQL Server) - that's 25%.
Hence, having enough memory will also make the system more "forgiving" towards such killer queries.
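Both sides of this argument can be measured instead of estimated. A sketch using standard SQL Server 2008 views: the actual size of T17, and whether the buffer pool is genuinely under pressure (which answers the "why more RAM when only 40-60% is used" question better than the allocation figure does):

```sql
-- Actual space used by T17 (with a clustered index, the table IS the index):
EXEC sp_spaceused 'TEST USA 29_03_10$G_L Entry';

-- Size per index, in 8 KB pages and MB:
SELECT i.name AS index_name,
       SUM(ps.used_page_count)            AS pages,
       SUM(ps.used_page_count) * 8 / 1024 AS used_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID('TEST USA 29_03_10$G_L Entry')
GROUP BY i.name;

-- Buffer pool pressure: a persistently low Page Life Expectancy (seconds)
-- means pages are evicted quickly, i.e. RAM really is short regardless of
-- what the allocation counters suggest.
SELECT cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```

If one killer scan drops Page Life Expectancy to a few hundred seconds or less, more RAM (or the better CI above all) is justified even though average utilization looks comfortable.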