Actually, the thing I was trying to get across was that you should use some of the available diagnostic tools to evaluate what the problem really is before assuming what it is.
That's true, in the sense that it's quite possible that it's not the indexes and SIFT tables that are causing the truly unbearable problems. I just assumed they were, because that's what the Convergence keynotes in the downloads section say.
Well, right, maybe I should start with a definition of what "performance problem" means. It's not that posting a journal takes ten seconds per line - who the hell cares about that. At least in that case the system is working, and therefore nobody can blame you. The real problems are when a card or a drill-down takes five or ten minutes to open - the well-known "white screen of death", when the client turns white and freezes. From an end-user viewpoint that is clearly an error. It's quite possible that these are locking issues rather than simple index/SIFT overhead, and that all these index optimization strategies I'm trying to define are therefore quite useless.
Just a question - are these index/SIFT solutions for those cases when the system is "reliably slow", i.e. something always runs at the same low speed? Or can they also help in those cases - and these are the real problems for most users - when something is quick 9 times out of 10, but produces the white screen of death the 10th time, and then after five minutes suddenly pulls itself together and finishes the transaction?
When you notice in your profiler that something takes too many reads or writes, or too long a "duration", you can investigate it. If it's a SELECT, you can play with your indexes; if it's an UPDATE/DELETE/INSERT, you can play with the SIFTs...
When the system hangs 1 time out of 10, you are probably dealing with deadlocks or other locks. You can investigate that as well.
Note that once you have brought the duration of your transactions down to a minimum, the number of (dead)locks can decrease drastically...
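As a starting point for that lock investigation, here is a sketch for SQL Server 2005 using the dynamic management views; it lists every request that is blocked right now, which session is blocking it, and the statement involved (on SQL Server 2000 you would use sp_who2 and sp_lock instead):

```sql
-- Sketch (SQL Server 2005): show currently blocked requests and their blockers.
SELECT
    r.session_id,                       -- the session that is waiting
    r.blocking_session_id,              -- the session holding the lock
    r.wait_type,                        -- e.g. LCK_M_X for an exclusive-lock wait
    r.wait_time   AS wait_time_ms,      -- how long it has been waiting
    st.text       AS running_statement  -- the SQL text of the blocked request
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
WHERE r.blocking_session_id <> 0;
```

Run it from any database while the "white screen" is happening; if it returns rows, you are looking at blocking rather than index/SIFT overhead.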
If I were you, I would go choke one of your co-workers who went to the SQL Perform class and make them tell you all of these answers.
After attending the course, I can honestly say that I believe most of the problems come from the indexes and SIFT buckets.
It only makes sense that the more SIFT bucket updates you have, the longer the SQL transactions are going to take. I am not sure, but I think you can also experience locking on these tables. However, like others have said, if you drop the SIFT buckets, then reporting will take longer, because the values will have to be calculated on the fly. That said, SQL is supposedly pretty good at doing the calculations, but I would leave them alone on large tables that require reporting capability.
I would then focus on the SQLIndex. Pick a good key for your SQLIndex property. That will help tremendously.
That's all that I feel comfortable saying with any degree of accuracy. I've done some tuning and it has really helped.
The SQL Perform class taught how to use the tools, with many examples and practical exercises.
Your guys who went should really be able to answer your questions. Get their handouts and get them to show you some of the SQL tools and how to use them... SQL Profiler, Session Monitor - you've probably seen them; now is the time to learn them. Looking forward, I believe the native Navision DB is going away and we'll all have to know SQL. (Naturally my opinion, with no official basis.)
I would then focus on the SQLindex. Pick a good key for your SQLIndex. That will help tremendously.
Amen!
An important thing to remember is that:
- Native uses the key to get the data (ISAM)
- SQL only uses the key (in SETCURRENTKEY) to know how the result set should be sorted. You should force it to use a good key (SQLIndex property).
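To illustrate that point, here is roughly what a C/AL key selection turns into on SQL Server. The table, account number and exact SQL shape are hypothetical (the generated statement varies by client version and locking hints), but the essence is that SETCURRENTKEY only contributes the ORDER BY; the optimizer is still free to choose any index, which is why the SQLIndex property matters:

```sql
-- A C/AL fragment like:
--   GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date");
--   GLEntry.SETRANGE("G/L Account No.",'6110');
--   GLEntry.FINDFIRST;
-- becomes, roughly (note NAV's name mangling: "/" and "." turn into "_"):
SELECT TOP 1 *
FROM "CRONUS$G_L Entry"
WHERE "G_L Account No_" = '6110'
ORDER BY "G_L Account No_", "Posting Date", "Entry No_";
```

The WHERE clause, not the ORDER BY, is what decides which index is useful; a "good key" here means one whose SQL index starts with the filtered columns.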
Your guys that went should really be able to answer your questions. Get their handouts and get them to show you some of the SQL tools and how to use them... SQL Profiler, Session Monitor you've probably seen them, now is the time to learn them.
It is not really easy stuff. The people who attend the class should know (more than) their way around T-SQL, SQL Server and C/SIDE.
I have worked with SQL since its early OS/2 days, and have suffered through the SQL-ization of a few accounting systems. Navision is not alone in this; many other products have taken similar approaches.
"Hey, our product runs great on Btrieve. Let's just slap the same code on SQL. Then we can tell the world we run on SQL."
We're the ones who get to suffer the fallout from these "marketing decisions". Navision has chosen a particular method of integrating with SQL, and (right or wrong) we must live with the issues this causes.
SQL is a complex product and you will not become a SQL-tuning expert by going to class for a few days. What you will learn is how to think about the problem. Even after over 15 years I discover new things all the time. The learning never stops and this site has become a valuable tool.
You must also weed your way through the junk and misinformation. I am currently working with a customer whose hardware integrator is insisting that Microsoft recommends RAID 5 for SQL (including transaction logs).
It would be great to have a general solution that works everywhere, or better yet a product that runs well out of the box. We will probably need to wait on Microsoft for either of these, as they would require a change in the basic integration approach.
I am currently working with a customer whose hardware integrator is insisting that Microsoft recommends RAID 5 for SQL (including transaction logs).
Just a side note (sorry for going off topic), but unfortunately they do, for implementations up to 30-40 users max. There's a hardware recommendations doc from MSFT here in the download section.
Exactly my point about misinformation. Microsoft does suggest RAID 5 may be used in systems with a high read volume and little writing - archive systems used for reporting are good examples - but RAID 5 is a poor choice for OLTP systems. Download the following white paper from TechNet: http://www.microsoft.com/technet/prodtechnol/sql/2005/physdbstor.mspx
I am currently working with a customer whose hardware integrator is insisting that Microsoft recommends RAID 5 for SQL (including transaction logs).
Just a side note (sorry for going off topic), but unfortunately they do, for implementations up to 30-40 users max. There's a hardware recommendations doc from MSFT here in the download section.
The site I am referring to is 40 GB and 60+ users. I downloaded the document you are referring to. On page 17 it seems to recommend RAID 10. I could not find the reference to RAID 5.
Last week at TechEd IT Forum there was a presentation by an HP technical guy. He was speaking about performance, and one of his slides said, in capitals: Don't use RAID 5 for OLTP!
After the session I went up to him and asked: "Why don't your salespeople say the same to my customers?" He responded: "Salespeople just do not understand."
OK. What I'll do now is the following: check what kind of SQL statement is run when you have
- no SIFT (should be an entry table scan)
- full SIFT (should be only a SIFT table scan)
- halfway SIFT (e.g. G/L Account No. but no Date - that's what I'm curious about)
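For reference while watching the Profiler trace, the two extreme cases should roughly look like the statements below. The company name, SIFT table number, bucket level and field numbers are illustrative only (NAV names SIFT tables Company$TableNo$KeyNo, with f-columns for the key fields and s-columns for the maintained sums; the actual numbers depend on your table and key definitions):

```sql
-- Full SIFT: CALCSUMS should be answered from the SIFT table alone.
SELECT SUM(s17)
FROM "CRONUS$17$1"
WHERE bucket = 2                -- the date level (day/month/year) being used
  AND f3 = '6110'               -- G/L Account No. filter
  AND f4 BETWEEN '2006-01-01' AND '2006-12-31';

-- No SIFT: the fallback is an aggregate over the entry table itself.
SELECT SUM("Amount")
FROM "CRONUS$G_L Entry"
WHERE "G_L Account No_" = '6110'
  AND "Posting Date" BETWEEN '2006-01-01' AND '2006-12-31';
```

The open question is the halfway case: whether NAV falls back to the entry table whenever any FlowFilter field is missing from the bucket.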
Can we agree that, on the one hand, slower or faster performance is truly a question of tuning, because it's normal for a database to behave differently under different conditions, but that complete freezes - white screens of death, whether caused by locking or not - are not part of normal behaviour, and that for these, looking for a general solution is not futile?
What I have learned so far is:
- latest MDAC - if you run XP SP2, it's already there
- changing code not to lock on empty tables (Reservation Entry and Journal Line Allocation are the two typical suspects so far)
- locks on ERROR - at least in custom code, replacing them with MESSAGE(errortext); ERROR(''); ?
Yes, the weird thing about SQL is that, on the one hand, it's superb for queries that involve lots of JOINs and are not optimal with regard to indexes. About a year ago I wrote queries for MS CRM - where creating indexes was not allowed - that were as horrible as it gets, ten JOINs in each query, and they still ran at a usable speed. Quite amazing.
On the other hand, the simple sequential processing that's easy for ISAM databases is terribly slow under SQL. SQL always thinks in "sets"; it's derived from Boyce-Codd relational theory, which in turn derives from mathematical set theory. Where we good ol' ISAM boys think of looping through records and comparing them to other records, set theorists think of the intersection of sets, and so on.
Actually, some friends of mine who are writing a banking system wrote an ISAM database engine for themselves, because they found that neither Oracle nor MS SQL could handle it properly. For example, how do you calculate daily interest on twenty million bank accounts every evening? For ISAM it's easy - loop through the bank accounts, loop through their transactions, add them up to get the balance, calculate the interest, and so on. But this is something Oracle or MS SQL just cannot handle. All the answer they got from an Oracle expert was: "Well, you have to write queries that result in a small result set." Thanks. So they wrote their own database engine instead.
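For what it's worth, the set-based formulation that a SQL person would offer for that interest example looks like this. The Account/Txn schema, column names and rate are entirely hypothetical; the point is that the whole "loop through accounts, sum transactions, post interest" becomes one statement, which the engine executes as a single scan plus aggregate rather than twenty million round trips:

```sql
-- Sketch: set-based daily interest posting (hypothetical schema).
DECLARE @DailyRate decimal(18,10);
SET @DailyRate = 0.04 / 365;    -- 4% annual rate, assumed for illustration

-- Post one interest transaction per account, based on the summed balance.
INSERT INTO Txn (AccountNo, PostingDate, Amount)
SELECT a.AccountNo, GETDATE(), bal.Balance * @DailyRate
FROM Account a
JOIN (SELECT AccountNo, SUM(Amount) AS Balance
      FROM Txn
      GROUP BY AccountNo) bal
  ON bal.AccountNo = a.AccountNo;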
The result of the experiment: when the SIFT buckets are not maintained for all the fields in the FlowFilter set, Navision will ignore them and scan the entry table. But if you have a posting date bucket at day level, it works for the month and year levels too. So the only general thing to do about SIFT is to turn off the year/month etc. levels. I'm not sure that gives a big boost, but at least it's something very general that can be done to every entry table.
Yes, this will make writes faster, because each level means one additional INSERT or MODIFY for every INSERT or MODIFY of a record in the master table.
What do you think: would turning on all logging and performance monitoring tools, writing a function that copies and posts 100 past sales and purchase orders in a batch, and running it on 5 computers with 5 sessions each - 25 in total - give some meaningful results about which problems to expect on a mostly standard installation?
I'm having this problem at the moment on the "Requisition Line" table. I think it has something to do with the fact that Navision generates a '>=' in the WHERE clause of the SELECT statement...
Is there any solution yet?
We have the same performance issues at different customers. The latest hotfix for SQL Server 2005 (version 9.0.2196) didn't solve our problems.
Most of our problems are on G/L Entry and Cust. Ledger Entry. It's probably not a problem in Navision, because it happens only sporadically.
Sometimes the SELECT statement executes fast, and sometimes it takes more than 10 minutes. So it can't be a problem with a SIFT index or MaintainSQLIndex.
As I wrote, our customers are using SQL Server 2005 (version 9.0.2196). On the customer side the server is used only for Navision, and no application other than Navision reads data from the database.
The hardware doesn't matter, because our customers have different machines, with different configurations, from different brands.
The problem occurs several times a day. There is no particular time window in which the application runs slow. It even runs slow when only one user is working in the database.
Since we started with Navision, performance has decreased after every update. The last update, to NAV 4.0 SP3 and SQL 2005, was catastrophic.
After a few weeks, Microsoft came up with this 'solution':
- In Navision, create a new (otherwise unused) key and make it clustered; make the primary key non-clustered.
- Save the table.
- Then run the following command in SQL Server:
USE [Customer Database]
UPDATE STATISTICS xxx WITH FULLSCAN
where xxx is the table name.
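If you want to verify that the statistics refresh actually happened, DBCC SHOW_STATISTICS prints the header with the last update date and the rows sampled. The table and statistics names below are placeholders ("CRONUS$Customer" for the NAV table, '$1' for one of the NAV-generated index names); substitute your own:

```sql
USE [Customer Database];
UPDATE STATISTICS "CRONUS$Customer" WITH FULLSCAN;
-- The header row should now show today's date and Rows Sampled = Rows.
DBCC SHOW_STATISTICS ('CRONUS$Customer', '$1');
```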
This helped a lot. The situation is now more 'stable', particularly with the search function, which could take anywhere from 1 second to 20 minutes, even on fields from the primary key.
For the moment we are running some tests with SQL 2005 SP2, which may have solved this problem.
This is obsolete. This was solved by client version 3.70B.
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
News from MS: the performance issue is suspected to be a SQL 64-bit problem... to be continued...
I'm having this problem at the moment on the "Requisition Line" table. I think it has something to do with the fact that Navision generates a '>=' in the WHERE clause of the SELECT statement...
We're not running on a 64-bit machine.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
We originally saw this issue on a 32-bit system.
Probably a stupid question, but... bbrown, what's the best way to handle this problem at the moment, in your opinion?
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
Microsoft is aware of this problem and is investigating.
No news yet. I will post here when anything changes.
Document tables?
Ledger entries?
Which table specifically showed the best improvement, and what key did you create?
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
There is a SQL hotfix in the works, and there will be an update of finsql in the (far) future.
More to come.
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
I will (must, really) update our solution to SP3 and SQL 2005.
I hope we won't have these problems...
For this, I must get in touch with my contacts in Hamburg and München.