SQL 2005 Performance Issue


Comments

  • Miklos_Hollender Member Posts: 1,598
    Yes, I think some of our guys were at a workshop held by Hynek. They brought back lots of useful ideas, but as far as I can see these are just guidelines for solving each and every problem on its own. Which means days and days of "tuning" still need to be built into each and every project; the only difference is that with his guidelines it's maybe fewer days. But I think these days are damn hard to sell. For big projects maybe not so much, but we are trying to move towards the low end, selling pre-customized and pre-configured installations with very low implementation time. Another big problem is existing customers, who didn't expect it would be such a big issue when they bought Navision 4-6 years ago, often from somebody else and on the native database... they also find it very hard to accept buying days or weeks of SQL "tuning".

    Besides, Hynek's notes often contain ideas that are impossible to implement without huge changes in business logic. E.g. he advises against low-selectivity keys, especially boolean and option fields. Well, what about Document Type and No. on orders, order lines etc.? Changing that would have quite a big impact. Or shall I go to all the important posting codeunits, put FINDSETs everywhere and redo it at each upgrade? It's simply not feasible.
  • Marije_Brummel Member, Moderators Design Patterns Posts: 4,262
    Besides, Hynek's notes often contain ideas that are impossible to implement without huge changes in business logic. E.g. he advises against low-selectivity keys, especially boolean and option fields. Well, what about Document Type and No. on orders, order lines etc.? Changing that would have quite a big impact. Or shall I go to all the important posting codeunits, put FINDSETs everywhere and redo it at each upgrade? It's simply not feasible.

    The SQL index and the Navision key are two different properties that can be changed independently.
  • Miklos_Hollender Member Posts: 1,598
    I know that - this is why I'm testing what happens if I turn off each and every MaintainSQLIndex and MaintainSIFTIndex except for the primary keys. And this is why I'm wondering why nothing happens... ;)
  • Miklos_Hollender Member Posts: 1,598
    Another idea: what if we just reduced the quantity of data? I think that unless there are some special features, e.g. very many POS transactions, performance does not sink below acceptable levels within one year of operation. So what if, every 1st of January, we had them create a new company and wrote a big batch job that copies all the master data, opening inventory, open customer/vendor entries etc. into this company? Or maybe use the built-in "date compress" batch jobs.
  • Marije_Brummel Member, Moderators Design Patterns Posts: 4,262
    I could try to explain on this forum why it looks like nothing happens.

    Please have a look at the webcasts on www.sqlskills.com. If after that you still have questions we can work it out somehow.
  • Marije_Brummel Member, Moderators Design Patterns Posts: 4,262
    Another idea: what if we just reduced the quantity of data? I think that unless there are some special features, e.g. very many POS transactions, performance does not sink below acceptable levels within one year of operation. So what if, every 1st of January, we had them create a new company and wrote a big batch job that copies all the master data, opening inventory, open customer/vendor entries etc. into this company? Or maybe use the built-in "date compress" batch jobs.

    I give up...
  • DenSter Member Posts: 8,307
    The posting process has about as many reads as it has writes, so by turning off all those indexes you've just shifted the performance problem to the other side.

    That is why you have to analyze these things and 'tune' it for your specific case. One customer uses item tracking, so you need to optimize indexes in that area rather than just wiping them out. Another customer uses manufacturing and will need a completely different set of keys to be optimized. There simply is no standard set of instructions, just a methodology that will help you analyze the issues.

    If you do not understand this, you'd better leave this kind of stuff to the experts. Clearly the people you work with who went to Hynek's workshop completely missed the point. I've taken the workshop myself and it was an eye-opener for me. Instead of dismissing it because you don't understand it, I'd advise you to do some research.
  • DenSter Member Posts: 8,307
    I know that - this is why I'm testing what happens if I turn off each and every MaintainSQLIndex and MaintainSIFTIndex except for the primary keys. And this is why I'm wondering why nothing happens... ;)
    Well, let us know when your customers throw you out after you've completely f&%ked up their systems, so we can come in and rescue them from you :mrgreen:
  • kine Member Posts: 12,562
    I know that - this is why I'm testing what happens if I turn off each and every MaintainSQLIndex and MaintainSIFTIndex except for the primary keys. And this is why I'm wondering why nothing happens... ;)

    Why?
    1) Because if you disable every MaintainSQLIndex and MaintainSIFTIndex, you will have super-fast writes but very slow reads. If you enable them all, you will have slow writes but fast reads (though not as fast as with another setting). Optimization is the process of finding the balance between fast writes where needed and fast reads where needed. (There are just two values, Yes/No - it is like black and white.)

    2) It is much better to check the SIFTLevelsToMaintain property to lower the number of levels maintained on SQL Server, and the SQLIndex property to change the index into one with good selectivity... (There are many values you can select; it is not just Yes/No - it is like a grey scale you can move along.)
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • Miklos_Hollender Member Posts: 1,598
    Sorry Mark if I seem "headstrong" :) But I cannot accept that normal, day-to-day operation requires in-depth knowledge and tuning. Even Microsoft cannot be so stupid as to release two products that only a small group of SQL experts can get to work together at even a basically acceptable level. If that were so, there would already be a public scandal, with the tech papers full of articles laughing with glee at MS. IT is known to be an irresponsible industry, but it can't be SO irresponsible; I think it simply cannot be possible... Since there is no scandal around, I think there must be something that has just escaped our attention, some fairly simple procedure that guarantees not fast, but at least somewhat usable performance. Therefore I'll go on with this experiment of generating entries and seeing what happens. Maybe nothing useful comes out of it, but I have to try.
  • Miklos_Hollender Member Posts: 1,598
    Thanks Kine,

    does it mean that in the posting processes (the only thing I'm investigating so far) the read operations are so intensive that slow reading can affect their performance?

    SIFT - hm. Let's see if I get this straight. Say some function or form uses the SIFT for the key Item No.,Entry Type,Variant Code,Drop Shipment,Location Code with SumIndexFields Quantity etc. Now, if it's fully SIFTed, it will read the SIFT tables and be fast. If it isn't SIFTed, it will read the entry tables, which will be expensive. What happens if we have the SIFT bucket only for Item No., for example (which has high selectivity)? Will it in some magic way combine the SIFT and the entry tables, so that the result is faster than not using SIFT at all?

    As for keys: OK, turning off the index for low-selectivity keys, which I think mostly means the ones starting with some kind of Type instead of some kind of No., right? And what happens if we leave the index on for other keys - when a SETCURRENTKEY needs a key but does not find an index for it, will it rather use another index of a "similar" key, and therefore be faster than having no indexes at all?
  • DenSter Member Posts: 8,307
    I hope your boss is reading this, because you are going to cause MAJOR problems with your customers if you keep going in this direction. :shock:
  • DenSter Member Posts: 8,307
    does it mean that in the posting processes (the only thing I'm investigating so far) the read operations are so intensive that slow reading can affect their performance?
    :shock:

    You have GOT to be kidding me...... you don't think that by destroying indexes, you will also destroy read performance? You don't know that there are hundreds of read operations in the posting routines???
  • DenSter Member Posts: 8,307
    I give up...
    me too...
  • kine Member Posts: 12,562
    does it mean that in the posting processes (the only thing I'm investigating so far) the read operations are so intensive that slow reading can affect their performance?
    1) Enable the Client Monitor
    2) Do some posting
    3) Disable the Client Monitor
    4) Process the data with the extended Client Monitor (the additional objects on the MBS Tools CD)
    5) Export the data to Excel
    6) Create a pivot table and see which commands are taking most of the time
    7) Try to optimize them
    SIFT - hm. Let's see if I get this straight. Say some function or form uses the SIFT for the key Item No.,Entry Type,Variant Code,Drop Shipment,Location Code with SumIndexFields Quantity etc. Now, if it's fully SIFTed, it will read the SIFT tables and be fast. If it isn't SIFTed, it will read the entry tables, which will be expensive. What happens if we have the SIFT bucket only for Item No., for example (which has high selectivity)? Will it in some magic way combine the SIFT and the entry tables, so that the result is faster than not using SIFT at all?
    For example - why maintain the SIFT level for Posting Date:Year? You can keep the level for Posting Date:Day and sum this detailed SIFT to get the sum for the year... It is better to maintain the more detailed level and disable the general one.
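    For illustration, a rough C/AL sketch (key and field names assumed from the standard Item Ledger Entry table, item number and dates made up). The C/AL code does not change at all when you drop the Year level - only the number of SIFT rows SQL has to add up differs:

      // ItemLedgEntry : Record "Item Ledger Entry"
      // The key below is assumed to carry Quantity as a SumIndexField.
      ItemLedgEntry.RESET;
      ItemLedgEntry.SETCURRENTKEY(
        "Item No.","Entry Type","Variant Code","Drop Shipment","Location Code","Posting Date");
      ItemLedgEntry.SETRANGE("Item No.",'70000');
      ItemLedgEntry.SETRANGE("Posting Date",010106D,311206D); // a whole year
      ItemLedgEntry.CALCSUMS(Quantity);
      MESSAGE('Quantity for 2006: %1',ItemLedgEntry.Quantity);
      // With only the Day level maintained, SQL sums roughly 365 bucket rows
      // instead of one Year row - still far cheaper than scanning the entries.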
    As for keys: OK, turning off the index for low-selectivity keys, which I think mostly means the ones starting with some kind of Type instead of some kind of No., right? And what happens if we leave the index on for other keys - when a SETCURRENTKEY needs a key but does not find an index for it, will it rather use another index of a "similar" key, and therefore be faster than having no indexes at all?
    Don't turn them off - change them into an index with better selectivity. That is why the SQLIndex property exists.

    SETCURRENTKEY says nothing about an index. It does nothing on MS SQL - it just adds an "ORDER BY" clause to the end of the command. Nothing more. SQL will select the index that is best for the job, based on the statistics (selectivity).

    As you can see, this is why you need to know something about how MS SQL works... :-)
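    A small sketch of what that means in practice (table, field and company names assumed from a standard CRONUS demo database; the generated statement is only approximate):

      // GLEntry : Record "G/L Entry"
      GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date");
      GLEntry.SETRANGE("G/L Account No.",'2910');
      IF GLEntry.FINDSET THEN
        REPEAT
          // process the entries in the requested order
        UNTIL GLEntry.NEXT = 0;

      // What the server roughly receives is something like:
      //   SELECT * FROM "CRONUS$G_L Entry"
      //   WHERE ("G_L Account No_" = '2910')
      //   ORDER BY "G_L Account No_","Posting Date","Entry No_"
      // The ORDER BY comes from SETCURRENTKEY; which index is actually used
      // is decided by the SQL Server optimizer, based on the statistics.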
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • Miklos_Hollender Member Posts: 1,598
    DenSter - I think so, I even found some stuff I uploaded to mibuso in a database where he did all the development :) But keep in mind it's just an experiment, trying to find some general solution. What's wrong with that? I think it must be possible to find some generalizable solutions that at least do away with the majority of the problems. Even if we assume MS never meant these two products to work together without major tweaking - a horrible assumption, but let's make it - at least we can trust in the 80-20 rule.

    I've read some other topics, and it seems to me that going to the document and entry tables, turning off index maintenance for secondary keys starting with options, booleans or master data with too few entries (posting groups, for example), and rearranging the SQLIndex property of the document header/line primary keys to put Document Type in the second position - well, it looks like a good start for a general solution.

    Now, something similar needs to be found for SIFTs. Maybe maintaining them at month level is a good idea, as accounting schedules etc. are viewed by month most of the time?
  • Waldo Member Posts: 3,412
    I give up...
    Shall I take over :wink:

    This is my opinion:
    I also wanted to create one standard database, where everything is tuned "in general".
    I came back from that idea after tuning (only) two customers (Mark helped with one of them). I just realised that everything depends on how the users use the database. If you decide NOT to maintain a SIFT (level), your reports will take longer, but your (posting) transactions will go faster. You see? These are choices that have to be made by the user. At the second customer I had to do the exact opposite of what I did at the first one...

    There is no "one way to go". Ledger entries, document tables, journal tables ... they should all be treated differently.

    Date compression is a known "performance boost" (also recommended by Hynek), but it's only doable if you can spare the data.

    And like Hynek said: focus on "hot tables" and "hot transactions".

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • Marije_Brummel Member, Moderators Design Patterns Posts: 4,262
    Thank you Eric. O:)

    I had a great time with you in Belgium. Man I love that beer :mrgreen:
  • Miklos_Hollender Member Posts: 1,598
    OK, I understand that before a general solution can be found, we need to formulate some general principles first.

    How about this one: "If there is a field or two that provides high selectivity and almost all queries filter on this field, then it's probably the only index you need, because it will yield acceptably small result sets"?

    If that's true, can we then use just common sense and application knowledge to figure out some typical settings, instead of long and complicated profiling?

    For example:

    Almost every read on Item Ledger Entry and Value Entry is done by Item No., and as one item rarely has more than a few hundred transactions, having an index on that should be enough for almost anything. Maybe a key on Applied Entry to Adjust might be important to speed up the inventory adjustment, and another on Open for applications at posting (and maybe reservations).

    For customers/vendors the same is true for Customer No. / Vendor No., but Open might also be important for applications and for reports, which usually show only open entries.
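    As a rough C/AL sketch of the kind of read such a "Customer No.,Open" key would serve (key and field names assumed from the standard Cust. Ledger Entry table, customer number made up):

      // CustLedgEntry : Record "Cust. Ledger Entry"
      CustLedgEntry.SETCURRENTKEY("Customer No.",Open);
      CustLedgEntry.SETRANGE("Customer No.",'10000');
      CustLedgEntry.SETRANGE(Open,TRUE);                // only entries still open
      IF CustLedgEntry.FINDSET THEN
        REPEAT
          CustLedgEntry.CALCFIELDS("Remaining Amount"); // e.g. candidates for application
        UNTIL CustLedgEntry.NEXT = 0;
      // An index leading with Customer No. (and possibly Open) keeps this result
      // set small; most other secondary indexes on the table are never touched here.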

    For G/L, well, there might be many entries per G/L Account No., but we almost always filter on the date as well, so a G/L Account No., Posting Date index might do.

    What do you think of this approach?
  • Miklos_Hollender Member Posts: 1,598
    Waldo,

    but didn't that mean they had completely different features (customizations and add-ons) and didn't use the same standard features either? For example, a financial services company where everything happens on G/L accounts might be different from a trading company. Of course one cannot find a general solution in that case. And tuning custom developments is easy. But I'm just thinking of, you know, the typical trading/distribution/manufacturing folks who mostly get by on the standard features.
  • Waldo Member Posts: 3,412
    Waldo,

    but didn't that mean they had completely different features (customizations and add-ons) and didn't use the same standard features either? For example, a financial services company where everything happens on G/L accounts might be different from a trading company. Of course one cannot find a general solution in that case. And tuning custom developments is easy. But I'm just thinking of, you know, the typical trading/distribution/manufacturing folks who mostly get by on the standard features.

    The general difference was business process. One company did batch posting at night, and the other one during the day ...

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • DenSter Member Posts: 8,307
    but didn't that mean they had completely different features (customizations and add-ons) and didn't use the same standard features either?
    Aha you are getting the point :mrgreen:
  • Waldo Member Posts: 3,412
    OK, I understand that before a general solution can be found, we need to formulate some general principles first.

    How about this one: "If there is a field or two that provides high selectivity and almost all queries filter on this field, then it's probably the only index you need, because it will yield acceptably small result sets"?

    If that's true, can we then use just common sense and application knowledge to figure out some typical settings, instead of long and complicated profiling?

    For example:

    Almost every read on Item Ledger Entry and Value Entry is done by Item No., and as one item rarely has more than a few hundred transactions, having an index on that should be enough for almost anything. Maybe a key on Applied Entry to Adjust might be important to speed up the inventory adjustment, and another on Open for applications at posting (and maybe reservations).

    For customers/vendors the same is true for Customer No. / Vendor No., but Open might also be important for applications and for reports, which usually show only open entries.

    For G/L, well, there might be many entries per G/L Account No., but we almost always filter on the date as well, so a G/L Account No., Posting Date index might do.

    What do you think of this approach?

    Considering the difference between the NAV key and the SQL index, and taking the different types of tables (ledgers, documents, ...) into account, I think this could be a good generalisation.

    But for MaintainSQLIndex, MaintainSIFTIndex ... it's another story ... as well as re-writing the code :(

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • Marije_Brummel Member, Moderators Design Patterns Posts: 4,262
    edited 2006-11-21
    OK.

    So having a general approach when starting to tune is possible. Of course. This is the main part of our workshop.

    What is more important is to create the awareness that SQL likes to be pampered and that there is nothing wrong with that. Every application running on a SQL database needs it.

    Larger companies will love running on SQL and having ultra-fast OLTP databases, and they don't mind investing in tuning and maybe setting up an OLAP database.

    Smaller companies will not experience much trouble running on SQL out of the box, because their datasets are usually smaller and SQL can handle that perfectly.
  • Joe_Mathis Member Posts: 173
    Miklos,

    Generally I don't post, because mostly opinions are being exchanged and not facts. But I am surprised by what you are posting. Navision was not originally designed with SQL Server in mind, so in my opinion expecting it to run flawlessly on SQL is a flawed idea in itself. I would expect it to perform well without major modification, but I would definitely expect to tweak it to get the best performance - just as you would tune a gaming system for the best frame rates, a piano to sound better, or a car to run better.
    But I digress...

    Instead of just making random changes, I believe a better tactic would be to view the actual SQL statements that are being sent to the SQL server and figure out the whys and hows of the conversion from C/SIDE to SQL.

    Please only do the following on a test system, and warm it up with some transactions first.

    Pick a routine (hopefully one that is giving you problems) and turn on the Client Monitor (with the option of showing SQL statements). Start it, run through your routine, and then stop it.

    Filter the Client Monitor results on Parameter No. 30 and that will show all the SQL statements. How many are from SIFT buckets? Now you can mess with your keys... at least you will be able to see what you are doing.

    Also think about learning more about the monitoring tools on the SQL Server side. These will enable you to make informed decisions about how to proceed with your troubleshooting, or at the very least an educated guess...

    I would also suggest listening to Mark Brummel if he is in fact working with Hynek and SQL Perform. If Microsoft is hiring Hynek for SQL advice, and you're getting it by proxy, how bad can the advice be?

    I personally would not dismiss him so quickly.

    Joe
  • Waldo Member Posts: 3,412
    Thank you Eric. O:)

    I had a great time with you in Belgium. Man I love that beer :mrgreen:

    \:D/
    But let's not make this one a "beer"-topic :wink:

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • Miklos_Hollender Member Posts: 1,598
    DenSter,

    OK, then I'll repeat my point: this is for all those small trading/distribution/manufacturing clients who can't afford big implementation projects and mostly get by on the standard features. Did you assume I'm so stupid as not to realise that if somebody has a custom table with one and a half million entries, it perhaps needs to be looked at individually? :):):)

    Meanwhile I'm continuing investigation, here is the next hypothesis:

    General Solution V0.2: "OK, so if we keep an index on document headers/lines for Document No., and on the entry tables for the relevant master data (Item No., Customer No.), and we turn off everything else on these tables, then we are at least heading in the right direction. Everything else is a matter of turning some others back on, based on how important we decide the given function is. For example, the Document No., Posting Date keys are used only by the Navigate function, which is not so very important, so they can be off. On Item Ledger Entries, Open is important for application and Applied Entry to Adjust for the adjustment, therefore Item No., Open and Item No., Applied Entry to Adjust are the only two extra indexes we need. For each table we can make such general decisions based on how the application works. Basically, we have to figure out with the Developer's Toolkit which key is used where and write it down in a list. Then, at a project, find the three most important keys based on that list and make them into indexes, and no more."
  • thaug Member Posts: 106
    Miklos, I see where you are coming from, somewhat. However, we cannot expect SQL to run great out of the box; that is what Native is for. For me it has been a very long process of constant tweaking with SQL, and thanks to boards like this one I have learned a lot. The newer client executables have also helped, compared to where we were with 2.6.

    What I really think you are asking for is an updated "Tuning Navision for Better Performance" document. The latest version I have is revision 5, from May 2004, before SQL 2005 was released. I would like MS to revise this document, taking into account newer software versions and faster hardware.
    There is no data, only bool!
  • Miklos_Hollender Member Posts: 1,598
    Jondoba,

    "I would expect it to perform well without major modification" - but this is not what I'm writing about. Not minor tweaks and minor gains. What I mean is that after a year, many installations become completely unusable, taking 15 minutes to open the Item List from a Purchase Order, etc. That means something is deeply broken, and therefore I think some general solution must exist, because, as I agree with you, "tweaks" should only be needed to improve performance from "elderly" to "lightning", not from "frozen snail" to "elderly". I think there must be some general way to configure "elderly" performance on each project in half a day or so. This is what I'm investigating. I'm not dismissing Mark's and Hynek's ideas - actually, I know Mark is good, and some of our folks were at a training held by Hynek - but the whole point is that I think there must be "rule of thumb" solutions for going from "frozen snail" to "elderly" performance without deep wizardry and lots of analysis. It must be possible to configure a fairly OK system without much analysis, in the same way that if somebody describes a client's business very generally, we can name at least five general customizations they will need. I think the same logic must be applicable to performance as well.

    Actually, I have already collected some of the general solutions from mibuso topics. The most important discovery was that often, when the damned client freezes, it is not actually slow performance but a lock, and I found in these forums that three things can help:

    1) latestestestestestest MDAC
    2) locking on empty tables - bloody Reservation Entry - so there is no way to avoid modifying code; we have to add IF NOT Table.ISEMPTY THEN Table.LOCKTABLE (a minimal sketch follows after this list)
    3) ERRORs keep their locks until the user presses OK, and because it would not be feasible to change them all to MESSAGE + ERROR('') combinations, it's important to tell users to OK them quickly
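    For point 2, a minimal C/AL sketch of that pattern (the Reservation Entry record is just the example from above; field names are the standard ones, and the surrounding posting code is assumed):

      // ReservEntry : Record "Reservation Entry"
      // Only ask for the table lock when there is actually something in the
      // table; LOCKTABLE on an empty table can otherwise block other sessions.
      IF NOT ReservEntry.ISEMPTY THEN
        ReservEntry.LOCKTABLE;
      // ... continue with the normal reads/modifications of ReservEntry ...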

    Also, I've learned some important things from my colleagues who are experienced in these matters, such as the importance of statistics.

    But all of these seem like a random bunch of "try this, and if it does not work, try that". We need to make a reliable cookbook so that these things can be configured very quickly at each project - because otherwise it's a huge waste of time.
  • Miklos_Hollender Member Posts: 1,598
    Thaug,

    no, that's not what I'm asking for. Actually I'm asking for nothing - I'm trying to make a reliable cookbook myself, and I'm just posting some of the ideas here to see others' reactions. What I mean by a reliable cookbook is not a troubleshooting guide, but a step-by-step guide to configuring the whole index/SIFT thing so that trouble does not arise in the first place.

    I understand I cannot expect that from Microsoft, because when I was learning Navision 4 years ago, first from an application and then from a programming viewpoint, I could not get any document from MS describing what features to use at a typical company of a given industry, what the weak points are, what the typical requirements are, etc. So if they were unwilling to prepare a proper functionality FAQ, then I understand that they are also unwilling to provide a similar step-by-step guide along the lines of "If you use standard costing, turn off key X." But that is what you need if you want to run swift projects. So OK, then, I will write this guide myself. Over the years I've been doing so many things that should have been done by MS that one more does not count for much anymore... I'm just angry that after all the certifications and the years and years of experience, we still have things to "discover" and "figure out" and "investigate", as if this were the Amazon jungle 500 years ago or a hideously clever crime investigated by Inspector Columbo, and not a product developed by the biggest software vendor... and all we get is helpful hints on where and how to investigate when it breaks... If a car manufacturer provided the same amount of information to the partners who do their car repairs, there would be a public scandal... :(

    Eh, just ignore me, I'm in my complaining mood again... :(