I found a very interesting blog about this that I wanted to share with you. For search reasons, I copy/pasted the text into this thread.
Link: http://msdnrss.thecoderblogs.com/2007/09/20/is-sql2000-more-clever-than-sql2005/
Text:
I have seen a number of (good) arguments that some issues with Dynamics NAV on SQL2005 are caused by a bad query optimizer in SQL2005, and that the SQL team needs to fix it. After all, some issues we see on SQL2005 did not exist in SQL2000. This blog argues that SQL2005 is maybe more clever than SQL2000, and tries to give some input into the discussion about whether SQL2000 is more clever than SQL2005, and whether there is anything to fix in SQL2005 regarding this specific issue.
The scenario below is one where the reuse of a cached query plan causes a Dynamics NAV client to hang while it is browsing forms.
Here is a repro scenario which shows why Dynamics NAV ends up causing a clustered index scan on SQL2005, while the same scenario on SQL2000 did not cause any such scans. It is based on a W1 5.00 demo database, and it requires a Solution Developer's license to run. Run the steps on both a SQL2000 and a SQL2005 database and you will see where the differences between the two platforms are:
1. Create a new G/L Account, No 1105
2. Create 50,000 new records in table 17 (G/L Entry). For this purpose, it doesn't matter whether you post these entries or just create a codeunit to insert the records.
3. Run table 17 from the Object Designer, and change the G/L Account No. to 1105 for the first 3 and the last 3 entries.
4. On SQL Server, update statistics on this table:
update statistics [CRONUS International Ltd_$G_L Entry]
5. Run Dynamics NAV with Maximized forms.
6. In Dynamics NAV, go to "Chart of Accounts" and drill down on the new account 1105 and you should see 6 entries. Make sure to place the cursor on the first entry. Then close the drill-down to go back to the "Chart of Accounts".
7. On SQL Server, run DBCC FREEPROCCACHE. This will clear out any cached query plans.
8. Start a profiler trace - include the following events (on top of the default ones):
- On SQL2005: Performance:Showplan Text; on SQL2000: Performance:Execution Plan
- Stored Procedures:SP:CacheHit
- Stored Procedures:SP:CacheInsert
9. In Navision, drill down on account 1105. Then move the cursor with arrow-down, until you get to the last entry. Then move back up to the top again with arrow-up.
10. Stop the profiler trace.
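On SQL2005 you can also look at the plan cache directly, without Profiler, via the dynamic management views. A minimal sketch (the LIKE filter on the query text is just an illustration; narrow it as needed):

```sql
-- SQL2005 only: list cached plans whose query text touches the G/L Entry
-- table, together with how often each plan has been reused.
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%G_L Entry%'
ORDER BY cp.usecounts DESC;
```

A usecounts value greater than 1 tells you the plan was reused from cache rather than recompiled, which is the same information the SP:CacheHit events give you in the trace.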
On SQL2005, you should see one of the last entries causing a relatively large number of reads (2079 reads in my tests). This is the offending query. The same query on SQL2000 causes far fewer reads (126 in my tests).
The query looks like this:
SELECT * FROM "W1500"."dbo"."CRONUS International Ltd_$G_L Entry"
WHERE (("G_L Account No_"=@P1)) AND "G_L Account No_"=@P2
AND "Posting Date"=@P3 AND "Entry No_"<@P4
ORDER BY "G_L Account No_" DESC,"Posting Date" DESC,"Entry No_" DESC',
@p3 output, @p4 output, @p5 output,
N'@P1 varchar(20),@P2 varchar(20),@P3 datetime,@P4 int',
'1108','1108','2007-12-31 00:00:00:000',52761
Notice that the last parameter value is 52761. So the part of the query to focus on here, in fact reads:
WHERE "Entry No_" < 52761
Then take a look at the execution plan. SQL2005 uses the index [CRONUS International Ltd_$G_L Entry$0], which is the clustered index ("Entry No_"). SQL2000 uses the index [CRONUS International Ltd_$G_L Entry].[$1], which is the index beginning with "G_L Account No_". So based on this query it is not strange that SQL2005's plan causes many more reads than SQL2000's plan.
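You can also re-check the plan choice interactively, as a sketch, by substituting the literal values captured in the trace. Note that a fresh compile will sniff these literals, so this shows the plan SQL would build for "Entry No_" < 52761, not necessarily the one sitting in the cache:

```sql
-- Return the textual plan without executing the statement.
SET SHOWPLAN_TEXT ON;
GO
SELECT *
FROM "W1500"."dbo"."CRONUS International Ltd_$G_L Entry"
WHERE "G_L Account No_" = '1108'
  AND "Posting Date" = '2007-12-31'
  AND "Entry No_" < 52761
ORDER BY "G_L Account No_" DESC, "Posting Date" DESC, "Entry No_" DESC;
GO
SET SHOWPLAN_TEXT OFF;
GO
```

To see the logical reads the way Profiler reports them, turn SHOWPLAN_TEXT off (it suppresses execution) and run the query with SET STATISTICS IO ON instead.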
Here is an important point to make: neither SQL2000 nor SQL2005 compiled the query plan for this query. You can see from the presence of SP:CacheHit events in the profiler trace that the plan was taken from the plan cache. So in order to find out why the two versions of SQL make different plans, we need to go to the place where the plan was made.
Go to the SP:CacheHit event and look at the data. Then go backwards in the trace until you find the SP:CacheInsert event with the same data. This is the place where the query plan was made. The query in this place looks like this:
SELECT * FROM "W1500"."dbo"."CRONUS International Ltd_$G_L Entry"
WHERE (("G_L Account No_"=@P1)) AND "G_L Account No_"=@P2
AND "Posting Date"=@P3 AND "Entry No_"<@P4
ORDER BY "G_L Account No_" DESC,"Posting Date" DESC,"Entry No_" DESC',
@p3 output, @p4 output, @p5 output,
N'@P1 varchar(20),@P2 varchar(20),@P3 datetime,@P4 int',
'1108','1108','2006-12-31 23:59:59:000',1
This time, the last parameter value is 1 (not 52761)! So this time, the part of the query to focus on is:
WHERE "Entry No_" < 1
Remember that "Entry No_" is also the clustered index.
So here is the question: what is the best possible query plan for this query? And I think the answer is easy for this scenario: use the clustered index to "scan" this one record! The number of reads in the trace should also confirm this. In my tests, SQL2005 did 21 reads while SQL2000 did 245 reads.
So in this case, SQL2005 makes a better plan than SQL2000!
The way that query plans are cached and reused has not changed between SQL2000 and 2005. The following points are valid for both versions:
1. When a query plan is designed, SQL takes the parameter values into consideration (in this example, whether the last parameter is 1 or 52761). This is also called parameter sniffing.
2. When a query plan is reused from cache, the parameter values are NOT taken into consideration. The query text that the plan is valid for is converted into a hash value, and SQL simply checks whether the plan cache already holds a plan for that hash value, reusing it if so. If SQL also had to revalidate the plan against the current parameter values, this would to some extent negate the whole purpose of reusing cached plans (performance).
3. SQL's query optimizer does not have any kind of risk-assessment when it designs a query plan. There are no mechanisms in place to consider "If I put this plan into cache, and it was reused with other parameters, what is the potential damage?"
These behaviours are fundamental to the current and previous versions of SQL, and most likely to future versions as well.
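The points above can be demonstrated directly with sp_executesql. A sketch against the repro table (do not run DBCC FREEPROCCACHE on a production server):

```sql
DBCC FREEPROCCACHE;  -- start from an empty plan cache (test servers only)

-- First call: the plan is compiled here, sniffing @P4 = 1, for which
-- reading one record via the clustered index is the optimal plan.
EXEC sp_executesql
    N'SELECT * FROM "CRONUS International Ltd_$G_L Entry" WHERE "Entry No_" < @P4',
    N'@P4 int', @P4 = 1;

-- Second call: the query text is identical, so it hashes to the same
-- cache entry, and the plan above is reused unchanged even though @P4
-- is now 52761. In Profiler this shows up as SP:CacheHit rather than
-- SP:CacheInsert.
EXEC sp_executesql
    N'SELECT * FROM "CRONUS International Ltd_$G_L Entry" WHERE "Entry No_" < @P4',
    N'@P4 int', @P4 = 52761;
```

Swapping the order of the two calls reverses the situation: the plan compiled for 52761 would then be reused for 1, which is exactly the asymmetry the article describes.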
So, for this scenario we can see that:
- When the plan was made, SQL2005 made the most optimized plan.
- The behaviour of caching plans and reusing them is the same on both SQL2000 and SQL2005.
Comments
http://blogs.msdn.com/microsoft_dynamic ... l2005.aspx
MVP - Dynamics NAV
NAVERTICA a.s.
Eric Wauters
MVP - Microsoft Dynamics NAV
SQL2005 makes better plans because it takes parameter values into consideration, but ignores parameter values when caching. Right?
No wonder NAV performance is unstable in this situation…
Should SQL2005 update its caching algorithm so that a cached plan also records the conditions it was made with? For example, by grouping the selectivity of each index related to the WHERE clause. But this only makes sense if gathering the index statistics is "cheaper" than designing the plan from those statistics, and that I doubt, which makes it impossible to improve.
What a pessimistic outlook on a Friday afternoon.
PS: But I don’t understand why the “WHERE "Entry No_" < xx” is included in the query at all. I know it wouldn’t solve that many problems, but I just don’t see any reason for it.
Peter
The way the optimiser works in SQL2005 is different; I figure that's why we see such differences. It's not a bug, but sadly it causes problems for applications, and the fact that in this case it's a Microsoft application is somewhat ironic. I've finally found out that my issues with the TokenAndPermUserStore are actually caused by Navision (or by Microsoft's attempts to tune Navision): the top cause is the plan guides forcing the recompiles, and the second cause is ad-hoc queries which don't parameterise correctly.
We know what is wrong with the application (NAV), how to work around it (index tuning and code optimisation), and what to do when all else fails (an index hint).
But what is the real solution? Does NAV need a redesign, or is SQL simply not the optimal database for NAV? Maybe someone needs to look at the good old native database again.
Now that data sizes are so much larger for mid-size companies running Navision, Axapta, Great Plains and their competitors, it is time for Microsoft to pull together a team to analyze performance problems for those packages and redesign SQL Server to work better.
One interesting addition in SQL Server 2008 is the date type, which holds only a date. Since accounting systems run by dates and not datetimes, it is strange that it took the Microsoft SQL Server group this long to realize it. I wonder if they finally talked to the Navision database developers.
They also added a time-only type. What's next, FlowField and FlowFilter support?
Navision can definitely make some changes to support SQL Server better, like using schemas for companies instead of the "Company$" prefix on table names.
Maybe when they get over the shock of NAV 5.1, they will move on with better SQL Server integration.
http://mibuso.com/blogs/davidmachanick/
I wouldn't want to go through the same bad experience that I went through with 2K5: we had 3 service packs and 6 updates, with several more updates between each service pack as well.