Hi All
Nav 2009 Classic (2.01 Objects)
SQL 2008
I've managed to cure a whole bunch of locking issues in our database by investigating which keys are used when returning data, and therefore how much data is being locked.
Yesterday, despite no updates having been applied to the FOBs, locking and performance problems reared their ugly heads big time. Monitoring SQL queries, this seems to be a very expensive, time-consuming query, and I therefore assume it's the problematic one when posting transfer journals:
(@P1 varchar(20),@P2 varchar(10),@P3 tinyint,@P4 tinyint,@P5 varchar(10),@P6 varchar(10),@P7 varchar(20),@P8 datetime,@P9 varchar(30))
SELECT TOP 1 NULL FROM "Dynamics Nav 2009 LIVE"."dbo"."aaaa$Item Ledger Entry" WITH (UPDLOCK)
WHERE (("Item No_"=@P1)) AND (("Variant Code"=@P2)) AND (("Open"=@P3)) AND (("Positive"=@P4)) AND (("Location Code"=@P5)) AND
(("Bin Code"=@P6)) AND (("Lot No_"=@P7)) AND (("Purchase Date"=@P8)) AND (("Serial No_"=@P9))
I've noticed this strangely does not have an ORDER BY clause, and thus would table scan and lock the majority of the table (bad .. very bad). I can't see any RESETs in the posting routine and can't work out why the locking has returned. Any ideas how to track this down?
Life is for enjoying ... if you find yourself frowning you're doing something wrong
Answers
To help solve it, I would start by adding a SETCURRENTKEY for a key that matches the filter criteria.
<edit>
Edit for purposes of people that come back in here to research, because this needs a bit more information. My plan when I posted this reply was to take Toddy through this step by step. I wanted to have him check the code, and either find that there already is a SETCURRENTKEY, or add one and find there's STILL no ORDER BY clause. Of course some other people didn't know where I was going and started correcting me for making no sense. Just to make sure this is complete I added this information.
For most queries, when there is no ORDER BY statement, that means the C/AL code does not have a SETCURRENTKEY. This holds for FIND, FINDFIRST, FINDLAST and FINDSET, but not for ISEMPTY. When there's an ISEMPTY command in C/AL, even when you DO have a SETCURRENTKEY command, the SQL query will not have an ORDER BY clause. So, for ISEMPTY commands, it doesn't matter if you have a SETCURRENTKEY, because it will never end up in the query. You'll have to make sure though, so always check and verify the C/AL code you're working with.
In that case you still need to make sure there's a good key for the filter criteria, and when you have a key that has all the fields in your filter criteria, that is what is called a 'covering index'. There's a link in one of the replies that further explains that concept.
</edit>
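To make the C/AL-to-SQL translation concrete, here's a hedged sketch of the kind of query NAV typically emits for each pattern. The statements are illustrative, not captured from a live trace, and I've shortened the filter list from the query at the top of the thread:

```sql
-- C/AL: Rec.SETCURRENTKEY("Item No_", Open); Rec.FINDFIRST;
-- NAV emits an ORDER BY matching the selected key:
SELECT TOP 1 * FROM "dbo"."aaaa$Item Ledger Entry"
WHERE ("Item No_" = @P1) AND ("Open" = @P2)
ORDER BY "Item No_", "Open"

-- C/AL: IF Rec.ISEMPTY THEN ...
-- No ORDER BY is emitted, even if SETCURRENTKEY was called first:
SELECT TOP 1 NULL FROM "dbo"."aaaa$Item Ledger Entry"
WHERE ("Item No_" = @P1) AND ("Open" = @P2)
```

The second shape is exactly what the original post shows, which is why the missing ORDER BY by itself doesn't prove the code is missing a SETCURRENTKEY.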
RIS Plus, LLC
Thanks for the reply, can I pick it apart a little ...
Got this, that's the conclusion I came to.
Would the LOCKTABLE suggest that the primary key is to be used, overriding the suggested key, i.e. in this case the Entry No.?
Does this mean the result from the SQL query is an empty recordset?
I've been trying to hunt the query for a while now ... gonna keep digging, thanks for the pointers
Therefore it does not make sense to have an ORDER BY or SETCURRENTKEY.
Try to make a covering index and your query will fly.
http://dynamicsuser.net/blogs/mark_brum ... dexes.aspx
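As a hedged illustration of the concept (the index name is hypothetical; the column list is taken from the WHERE clause of the posted query), a covering index for that existence check would look something like this:

```sql
-- Hypothetical example: an index containing every filtered field lets
-- SQL Server answer the ISEMPTY check from the index alone, with no
-- lookup into the clustered index:
CREATE INDEX [aaaa$ILE$OpenEntries]
ON [dbo].[aaaa$Item Ledger Entry]
  ([Item No_], [Variant Code], [Open], [Positive], [Location Code],
   [Bin Code], [Lot No_], [Purchase Date], [Serial No_])
```

Note that in NAV you would normally define this as a key on the table object in the development environment rather than directly in SQL, so the index survives schema changes.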
Your goal is to figure out where the SQL query is generated, and you use the pieces in the query to determine what the C/AL code looks like.
No ORDER BY usually means there is no SETCURRENTKEY command, for FIND commands. The exception is ISEMPTY, which won't have an ORDER BY even when there is a SETCURRENTKEY command. This makes sense here, so SELECT TOP 1 NULL with no ORDER BY means ISEMPTY, with or without a SETCURRENTKEY.
UPDLOCK means there has to be a LOCKTABLE command. So if you find a block of code that wants to retrieve data from that table, but there is no LOCKTABLE command in there, then you know it's NOT the right block of C/AL code.
SELECT TOP 1 NULL means there has to be an ISEMPTY. So if you find a block of code without a SETCURRENTKEY, with filters on all the right fields, with a LOCKTABLE command, but with a FINDSET command, then you KNOW it is NOT the right piece of C/AL code, because a FINDSET results in a SELECT * query.
With those hints it's simply a matter of detective work: work your way through all of the code in this user's process. If you want to know which objects to look in, run code coverage while the user is running the process, then simply open all of those objects from the Object Designer to find the culprit.
Sounds like fun?
RIS Plus, LLC
Not saying that covering indexes would not help, but the right SETCURRENTKEY will most definitely help.
RIS Plus, LLC
Your article and reply also make sense as to what ISEMPTY is translated to in SQL (nice). I've started to use ISEMPTY, as it seems to have helped with performance and locking problems.
What are the double parentheses around the field names for?
RIS Plus, LLC
Avoiding table scans is always, without doubt, what we want; however, adding an ORDER BY clause to a SQL statement is no guarantee that you won't get a table scan.
Filtering and ordering are two completely different things, and SQL Server might prioritize the filtering first and do a simple sort at the end to satisfy the ORDER BY requirement. The reverse is also an option: if the ordering matches an index with reasonable statistics, the execution plan could use the "order by" index first and apply the filtering to the records read.
The important thing here is to realize that there is no "simple answer" and no "SQL Server will always..."; it all depends on the query itself, the SQL indexes created, the statistical information, and also the SQL Server version (this has changed over time).
SQL Server calculates the most optimal execution plan taking the ORDER BY into consideration, but if you are filtering on one set of items (global filters, security filters and normal filters) while ordering on something else, you have no guarantee of which index gets used.
This is VERY different from the classic database, where everything was dictated by current key.
As for the "SELECT TOP 1 NULL" statement, this is, as mentioned, used to satisfy the need to know whether one or more records exist as efficiently as possible (no need to return data, just whether they exist).
What we really would want would be a "SELECT TOP 1 COUNT(*)" but performance measurements proved that the "SELECT TOP 1 NULL" turned out to be faster, which is why we are using this construct.
The result of the query will be: if no records exist, an empty result set; if records exist (no matter how many), a single row with a single constant field containing NULL. It would make very little sense to add an ORDER BY here; the SQL execution plan engine will in all likelihood ignore any ORDER BY when the select list does not contain any fields from the table on which the ORDER BY operates.
Most other statements will always have an "order by" clause added which will correspond to the key selected by SETCURRENTKEY - you can't avoid that. Even if you don't set a key, the primary key will be the default selected one.
Given that the correct key is vital to the classic database, and that it adds to the readability of the code, setting a key might actually be the right thing to do, although, strictly speaking, it is unnecessary in this scenario...
CTO, Architect - Microsoft Dynamics NAV
Since ISEMPTY did not exist in 2.01, it means it is code you have added, NOT standard code, so it should be pretty easy to find.
Of course, there's never a cookiecutter answer. However, since we only get very limited information on these forums you have to start somewhere, and based on the particular question we try to give people a little push in the back, help them on their way to a resolution. You can't reply to these topics with full dissertations on performance problems.
RIS Plus, LLC
CTO, Architect - Microsoft Dynamics NAV
RIS Plus, LLC
The clue came when examining the query plan: it showed the primary key estimated at 50% cost and the key it should have been using also estimated at 50% cost. I guess this means SQL selected the primary key to perform the search based on the stats it had.
As I'm fairly new to supporting systems at this level in this environment, every day is a school day. We don't pay a maintenance fee to our third party, so the solution isn't always easy to come by using Google. Thankfully there's always help on here.
If that fixes your problem then this is another example of how all the discussion about various performance related replies in this topic were off the mark, because we didn't cover the essentials.
RIS Plus, LLC
I ran the following, but for a number of tables. How does that differ from ..? Also, the database is pretty big (75 GB); should I be worried about the time it would take to create and update the statistics with a full scan?
The first time that you run it, it'll probably run for quite a while on your database, just because the statistics are all out of date. On a 75 GB database, though, I would be surprised if it takes more than 15-20 minutes after that, because it will only actually do something if the statistics NEED to be updated.
Just schedule it to run late at night or in the middle of the night.
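As a minimal sketch of what such a nightly job could run (the database and table names are taken from the query earlier in the thread; adjust to your own):

```sql
USE [Dynamics Nav 2009 LIVE]
GO
-- sp_updatestats only touches statistics that have had enough
-- modifications to need refreshing, so repeat runs are quick:
EXEC sp_updatestats
GO
-- For a single hot table you can force a full scan explicitly:
UPDATE STATISTICS [dbo].[aaaa$Item Ledger Entry] WITH FULLSCAN
```

Scheduling that as a SQL Server Agent job step in the quiet hours is usually enough to keep statistics current between index rebuilds.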
RIS Plus, LLC
Your query is on the Item Ledger Entry, and it's almost impossible that the distribution of data across the table changes dramatically during a day or even weeks.
In most NAV systems you don't need to update the statistics very often. Optimising the indexes is more important, and this also updates the statistics.
I would hire a specialist to analyse your system and make recommendations. This way you'll learn faster and spend your time on other things rather than guessing on Google and listening to confusing answers.
Like Thomas says, SQL is not an engine that "always" does this or that and there are tons of versions and parameters.
We've been on SQL since Oct 11 with no maintenance (update stats etc.) being run on the database.
The Item Ledger Entry gets a lot of action on our system with lots of stock constantly being moved around the business, plus this table has many keys. So am I wrong in assuming (see earlier thread) that the 50/50% cost on key selection pointed to an issue with the statistics on that table?
It would be nice to have an expert look at the system but when you submit the request for 2 days consultancy and it's turned down there's little you can do but turn to google.
By default SQL Server updates the statistics automatically. This is unless you turn it off.
You can turn it off, but only if you replace it with a manual process. This kind of makes sense, since an ERP system does not have constantly changing data distribution.
An expert would analyse the entire system and the AutoUpdateStatistics and AutoCreateStatistics settings are on the checklist.
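If you want to verify those settings yourself rather than via the database properties dialog, a quick hedged example (database name assumed from the posted query):

```sql
-- Shows whether auto-create and auto-update statistics are enabled:
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'Dynamics Nav 2009 LIVE'
```

Both columns returning 1 means SQL Server is maintaining statistics automatically for that database.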
I would go back to my boss if I were you and ask him if he wants to risk his business with this.
Both of those are set to True; however, when I ran sp_updatestats and sp_createstats, the result in the query window showed stats updated for a whole load of fields.
It seems more than a coincidence that in running these routines the locking stopped.
8)
Is it the whole answer? Absolutely not, chances are that you will have more performance problems, and when they return, it's NOT going to be because of statistics.
To begin with, you will also need index maintenance (complete reindex of the entire database on a weekly basis, with a fill factor of 85% is a good start). Maintenance on stats and indexes is the bare minimum of what you need. You will also need to make sure that your infrastructure is up to standard. You should also tune the tables based on actual index usage. You will also have to deal with the programming in a variety of objects, there is a LOT of room for improvement there.
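For a single table, the weekly rebuild with that fill factor boils down to something like this hedged sketch (table name taken from the query earlier in the thread; a maintenance plan simply does the same for every table):

```sql
-- Rebuild all indexes on one table, leaving 15% free space per page
-- (fill factor 85) so inserts don't immediately cause page splits:
ALTER INDEX ALL ON [dbo].[aaaa$Item Ledger Entry]
REBUILD WITH (FILLFACTOR = 85)
```

Rebuilding an index also refreshes its statistics with the equivalent of a full scan, which is why index maintenance and statistics maintenance go hand in hand.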
In my real life job, every time we work on performance problems, in addition to trying to address the immediate issue at hand, we ask the customer to let us do a full system review, to make sure their system is set up properly, and that their database maintenance is up to standard. We make recommendations to improve the situation, and we propose a plan going forward for addressing performance problems (tools, index tuning, code review, etcetera).
This is Mibuso though, we can't spend this amount of time on a comprehensive performance tuning strategy every time a simple question comes in. Sometimes, simple things fix simple issues. We know there are more problems down the road, but we choose to address those when we get to them.
It's unfortunate that some people feel the need to make other people look bad to make themselves look better, but that doesn't mean they can't make an excellent point. Getting an expert involved is excellent advice, and you should definitely consider it. If you do it before the real problems start (and they will), you'll be ahead of everyone else, and you'll look like a star.
RIS Plus, LLC
Having the help and advice on this thread from DenSter has resolved this issue at hand. I don't believe for a minute this will be the last of my problems however resolving them is challenging and fun, and when they occur and I can't resolve them I hope to converse again.
I'd like to know more about this phrase if you have the time.
Give it a name, such as "NAV Database Reindexing". If you plan to have multiple steps, you can have multiple schedules for each step, but let's leave this one for reindexing only, so leave it on Single Schedule. Hit the 'Change' button to actually create the schedule; I'd say put it on Sunday evening or something, a time when you know the database won't be used for a few (4-8) hours. Once the schedule is in, click Next.
You could get fancy and do a reorg in certain cases, but for now let's keep it simple so just check the "Rebuild Index" option and click Next until you get to the page that says "Define Rebuild Index Task". Drop down the Databases and select your NAV database. Leave Object to 'Tables and Views', and leave the Selection blank.
If you have your default fill factor set up properly, you can leave it to default free space. I like to set this to 15% free space to start out with, then click Next. If you want a report on the job, enter an existing path. I don't usually do this so I usually uncheck the report option, click Next.
Click Finish -> done. Run the job manually to measure how long it takes. If there's nobody in the system this evening or overnight, and there are no backup jobs scheduled or anything like that, you could start it this evening late, and check in tomorrow morning to see if it completed. If it is still running you can stop the job. Just make sure you know how long it takes, so that you can schedule it according to the available time frame.
RIS Plus, LLC