
SQL Server primary datafile setup

netblazernetblazer Member Posts: 26
edited 2010-08-30 in SQL General
Hi guys, I'm coming from a DBA background (more dev and reports than physical setups).

I just had a quick look at our server after 3 months in production post go-live. Apparently the initial install was made with 32 GB of datafiles, which seems appropriate for the next 2-3 years of data. However, after looking at where the data was placed, I realised that about 99% of the space will only ever be used on the second filegroup (PneusSuperieur_1_Data) rather than the primary filegroup (where each file has 16 GB of space).

As far as I can tell, no transactional table is placed on the primary filegroup, nor are any indexes.

My question is quite simple: is it safe to shrink that file back to, let's say, 500 MB (just in case)?

I tried asking that question to the company that did the install, but they looked like deer caught in the headlights.


Can anyone offer recommendations about that primary file's size and filegrowth settings?

TIA.

Comments

  • Options
    rdebathrdebath Member Posts: 383
    You don't say which of the Dynamics applications you're looking at, I'll assume NAV :D

    Assuming it's a default install, there will be only the MS-SQL system tables in the primary filegroup. The default size and growth for this are 40 MB and 10%, which is often more than enough.

    NAV is very tolerant of the structure of the database at the file and filegroup level; for normal operation it doesn't care at all, though you may be able to confuse its manual file expansion tool. If you do your maintenance at the SQL level you won't care. Generally, do what you need to; it'll cope.

    For the main file, a 16 GB start with a 512 MB expansion would fit with what you've got already.
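
    For reference, that resize could be done at the SQL level along these lines. The database and logical file names below are invented placeholders; check your own in sys.database_files first:

    ```sql
    -- Inspect logical file names, sizes and growth settings first.
    -- [NavProd] / N'NavProd_Data' are placeholders for your own names.
    USE [NavProd];
    SELECT name, size * 8 / 1024 AS size_mb, growth, type_desc
    FROM sys.database_files;

    -- Shrink the (mostly empty) primary data file down to ~500 MB.
    DBCC SHRINKFILE (N'NavProd_Data', 500);

    -- Use a fixed growth increment rather than a percentage.
    ALTER DATABASE [NavProd]
    MODIFY FILE (NAME = N'NavProd_Data', FILEGROWTH = 64MB);
    ```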
  • Options
    netblazernetblazer Member Posts: 26
    Yes this is Nav 2009 SP1 :whistle:.


    Yes, I think it's safe to assume that it's the default install. Nothing against the guy who did the install, but outside the world of NAV he seems pretty clueless about SQL Server. I completely lost him right after mentioning the two files on the prod database :-$ .


    Anyhow, I think I'll leave the db alone at 32 GB for now (2 GB of data with only 10% of the year's sales in it). So 32 GB should be more than enough for the first year; then I'll re-evaluate after one full year in prod.


    Thanks for the quick response.
  • Options
    DenSterDenSter Member Posts: 8,304
    It is quite common for NAV people not to know about SQL Server, although certification is a requirement for partner status.

    One thing I'll mention is please do not modify or add or remove any indexes in SQL Server directly. If you find that certain indexes are not used, or if you want to modify anything, they should be modified in the NAV table designer. There are some very nasty performance side effects when indexes on SQL Server don't match exactly what is in the C/SIDE table object, or if C/AL code does not match the index exactly.
  • Options
    netblazernetblazer Member Posts: 26
    DenSter wrote:
    It is quite common for NAV people not to know about SQL Server, although certification is a requirement for partner status.

    One thing I'll mention is please do not modify or add or remove any indexes in SQL Server directly. If you find that certain indexes are not used, or if you want to modify anything, they should be modified in the NAV table designer. There are some very nasty performance side effects when indexes on SQL Server don't match exactly what is in the C/SIDE table object, or if C/AL code does not match the index exactly.


    Can you expand on this? I would never dream of deleting or modifying native NAV indexes, but I've never seen any harm in creating new indexes for my own reports (SQL Server Reporting Services) or even our order web application.

    I've already put defrag jobs in place which only take care of the big and fragmented indexes, with a switch between rebuild and reorganize at 30% fragmentation. I've had this running on 2 prod servers for about 2 years now and, if anything, performance is only better. On top of that, I've never had any problems with NAV while doing this.
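
    The job itself isn't posted, but a switch of that shape is typically driven by sys.dm_db_index_physical_stats; a simplified sketch using the 30% threshold from the post (real jobs add per-index dynamic SQL, ONLINE options, etc.):

    ```sql
    -- Simplified sketch of a rebuild/reorganize switch at 30% fragmentation.
    SELECT OBJECT_NAME(ps.object_id) AS table_name,
           i.name AS index_name,
           ps.avg_fragmentation_in_percent,
           CASE WHEN ps.avg_fragmentation_in_percent >= 30
                THEN 'ALTER INDEX ... REBUILD'
                ELSE 'ALTER INDEX ... REORGANIZE'
           END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE ps.avg_fragmentation_in_percent > 5   -- ignore barely fragmented indexes
      AND ps.page_count > 1000;                 -- only the 'big' indexes, as in the post
    ```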

    I know NAV devs can hit the magic "tune SQL Server" button, which simply rebuilds all the indexes in the background, but let's just say it's not a good idea to do that during production hours ](*,) .

    Moreover, native SQL Server indexes offer a lot more capabilities and options (online rebuilds, sort in tempdb, included columns, filtered indexes... I'm on 2008, BTW). I've found that I can tune my queries much further that way than through the NAV GUI.

    Of course I always stay away from adding unique indexes to make sure I don't break anything.
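
    For illustration, a reporting index using those SQL-side options might look like this. The table and column names are invented, not taken from NAV, and ONLINE = ON needs Enterprise Edition:

    ```sql
    -- Hypothetical reporting index showing the SQL-only options mentioned:
    -- included columns, a filter, an online build, and sorting in tempdb.
    CREATE NONCLUSTERED INDEX IX_Sales_Report
    ON dbo.SomeLedgerTable (CustomerNo, PostingDate)
    INCLUDE (Amount, DocumentNo)          -- covering columns for the report
    WHERE [Open] = 1                      -- filtered index (SQL Server 2008+)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
    ```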
  • Options
    David_SingletonDavid_Singleton Member Posts: 5,479
    Hi netblazer,

    In simple terms, just like the NAV partner does not know enough about SQL, you don't know enough about Navision. Unfortunately, Navision does not use the standard queries you would expect. The reasons come down to maintaining backward compatibility, but that is not significant to this discussion.

    With the small size of this database you shouldn't have any issues, but as databases get large (100+ GB) it takes someone who knows both Navision AND SQL to keep the system running smoothly.

    I am a bit surprised though that the partner made the primary file (which I assume is the mdf file) so large. Not a good sign.
    David Singleton
  • Options
    rdebathrdebath Member Posts: 383
    edited 2010-05-03
    DenSter,
    I think I'd disagree with you, but only slightly.

    Actually, adding indexes is usually okay. BUT, if possible, it's best to add the index inside Navision to keep the data dictionaries in sync so that Navision doesn't do something stupid. Deleting indexes is dubious at best, though. Modifying an existing index is usually pretty safe too, as long as you make sure the new index is very similar to the one NAV thinks is there and you're not too worried about NAV reconstructing it. Oddly enough, the only bad effect an extra unique constraint will have is a big nasty error message when you try to violate it.

    The real problem with doing any of this sort of modification is that the SQL that NAV generates isn't what Microsoft optimised SQL2005 and later for. The code generated by NAV used several tricks that would make SQL2000 perform nearly as well as the old native database on fixed tasks like posting and unfiltered forms. But these tricks caused the SQL2005 query optimiser to over-optimise and remember far too many query plans, making some mind-numbingly dumb decisions along the way. NAV2009 has mostly worked around this by dumbing down the SQL it sends (and using different cursor types), which has mostly cleared the serious problems, but the query optimiser is still rather fragile in uncommon situations.

    So, in summary, a good DBA should be okay with it but they need to take care.
  • Options
    netblazernetblazer Member Posts: 26
    David_Singleton wrote:
    Hi netblazer,

    In simple terms, just like the NAV partner does not know enough about SQL, you don't know enough about Navision. Unfortunately, Navision does not use the standard queries you would expect. The reasons come down to maintaining backward compatibility, but that is not significant to this discussion.

    With the small size of this database you shouldn't have any issues, but as databases get large (100+ GB) it takes someone who knows both Navision AND SQL to keep the system running smoothly.

    I am a bit surprised though that the partner made the primary file (which I assume is the mdf file) so large. Not a good sign.

    What is so different about NAV that makes it impossible for a competent DBA to add indexes for OUTSIDE applications that read data straight from SQL?

    I am aware of the SQL 2000 compatibility issues (I've worked on everything from SQL 7 all the way to SQL 2008 in other ERPs), as well as of the differences between SQL versions when it comes to indexes.

    As far as I know, if I don't create the index in NAV, NAV doesn't know about it, so it can't mess with it. Also, since I don't build any unique indexes, the only hit comes from DML statements, whose impact I've already tested.


    BTW, yes, I was referring to the primary file. I'm assuming he didn't understand that it wasn't going to be used and just went with: that's the way we've always done it, so it must be OK.
  • Options
    netblazernetblazer Member Posts: 26
    rdebath wrote:
    DenSter,
    I think I'd disagree with you, but only slightly.

    Actually adding indexes is usually okay. BUT, if possible, it best to add the index inside Navision to keep the data dictionaries in synch so that Navision doesn't do something stupid. Deleting indexes is dubious at best though. Modifying an existing unique index usually pretty safe too as long as you make sure the new index is very similar to the one Nav thinks is there and you're not too worried about NAV reconstructing it. Oddly enough, the only bad effect an extra unique constraint will have is a big nasty error message when you try to violate it.

    The real problem with doing any of this sort of modification is that the SQL that NAV generates isn't what Microsoft optimised SQL2005 and later for. The code generated by NAV used several tricks that would make SQL2000 perform nearly as well as the old native database on fixed tasks like posting and unfiltered forms. But these tricks caused the SQL2005 query optimiser to over-optimise and remember far too many query plans, making some mind-numbingly dumb decisions along the way. NAV2009 has mostly worked around this by dumbing down the SQL it sends (and using different cursor types), which has mostly cleared the serious problems, but the query optimiser is still rather fragile in uncommon situations.

    So, in summary, a good DBA should be okay with it but they need to take care.

    Ya, I've seen the :shock: mountains of queries NAV runs for a simple posting (either for compatibility or just because they don't know better). I stopped counting after a couple of hundred queries just to post (and not invoice) an order with one item on it.






    Maybe this should be in another discussion, but have any of you had problems with the fact that NAV uses so many NOLOCK and READ UNCOMMITTED hints? I can't help but think this can't be good for data integrity, since it's not enforced at all in the DB.
  • Options
    rdebathrdebath Member Posts: 383
    netblazer wrote:
    Maybe this should be in another discussion, but have any of you had problems with the fact that NAV uses so many NOLOCK and READ UNCOMMITTED hints? I can't help but think this can't be good for data integrity, since it's not enforced at all in the DB.
    At the top of every NAV posting there's a 'global lock' select like this:

    SELECT TOP 1 * FROM [Comp$G_L Entry_] WITH (UPDLOCK) ORDER BY [Entry No_] DESC

    This locks the last entry in the G/L and prevents any other client from getting into a posting routine until this one is finished. So really, no other lock, or lack of one, matters in the slightest for posting, and that's just about every lock that matters.

    And yes, it kills multi-user.
  • Options
    DenSterDenSter Member Posts: 8,304
    rdebath wrote:
    DenSter,
    I think I'd disagree with you, but only slightly.
    On pre-5.0SP1 I would agree with your disagreement, but alas from that version up there have been some modifications to the way NAV generates SQL queries.

    It used to be possible to tweak the heck out of indexes and increase performance dramatically, either by changing the order of the fields in SQLIndex or even the fields themselves, by being very selective in which SIFT levels to maintain, etc. This was in the days when NAV left index selection up to SQL Server. So if you had a piece of code that filtered on customer number and posting date, but sorted by some compound key with 12 fields, you could get away with just modifying the index and performance would get much better. With the current cursor types though, the NAV key, the SETCURRENTKEY, and the SQL index must all match. You simply can't tune indexes like you used to anymore, and what you are saying ("Modifying an existing unique index usually pretty safe") is simply not true anymore on the current versions.

    Actually, with the cursor types that NAV now seems to prefer, it makes a HUGE difference if you 'tweak' indexes. I've seen queries revert to a clustered index scan even though the SQL index matched the first 5 fields in the ORDER BY clause but was missing the last 2. Individual queries would take up to 300ms because they reverted to a clustered index scan. We went back to having the NAV key contain exactly the same fields as the SQL index, made sure the C/AL used that particular key, and query duration went down to 0ms.
  • Options
    DenSterDenSter Member Posts: 8,304
    netblazer wrote:
    What is so different about NAV that makes it impossible for a competent DBA to add indexes for OUTSIDE applications that read data straight from SQL?
    <snip>
    As far as I know, if I don't create the index in NAV, NAV doesn't know about it, so it can't mess with it. Also, since I don't build any unique indexes, the only hit comes from DML statements, whose impact I've already tested.
    Few things 'weird' about NAV:
    - ALL queries generated by NAV are "SELECT *" queries.
    - NAV forces a certain cursor type that forces the specified index. Somehow they've managed to render the query optimizer useless. Basically what this means is that the KEY in NAV (which is really nothing more than a sort order) MUST be identical to the INDEX on SQL Server. Further, the C/AL code that uses this KEY must address ALL fields of the key.

    My word of caution was focused on working with NAV, not any other external app. There's nothing wrong with adding a key for an external app, but if you are working on performance problems within NAV, you have to know what you're doing and make sure that the NAV Key, the SQL Index and the SETCURRENTKEY in C/AL (which defines the ORDER BY clause) all match exactly.
  • Options
    netblazernetblazer Member Posts: 26
    rdebath wrote:
    netblazer wrote:
    Maybe this should be in another discussion, but have any of you had problems with the fact that NAV uses so many NOLOCK and READ UNCOMMITTED hints? I can't help but think this can't be good for data integrity, since it's not enforced at all in the DB.
    At the top of every NAV posting there's a 'global lock' select like this:

    SELECT TOP 1 * FROM [Comp$G_L Entry_] WITH (UPDLOCK) ORDER BY [Entry No_] DESC

    This locks the last entry in the G/L and prevents any other client from getting into a posting routine until this one is finished. So really, no other lock, or lack of one, matters in the slightest for posting, and that's just about every lock that matters.

    And yes, it kills multi-user.


    ](*,) WOW, that explains a lot. I had my site dead in the water (not a single query could go through) for minutes at a time when a NAV dev was on site debugging in the live DB. Since my site needs to use almost all the ledgers, no wonder I couldn't get anything through. Now picture that the website has 3000 logins and we do 2+ million a week in sales... We lost almost 30 minutes of sales in total...

    That little dance went on for about 90 minutes until I had HIS login name, on his machine, logged as the deadlock starter with about 20 victims. With that proof on file I had no problem killing his session and disabling his login on the server... but let's just say he wasn't happy about it :lol:

    Has anybody had to deal with something similar? Any thoughts on the matter?

    I personally don't use hints at all, but from what you just told me, even NOLOCK wouldn't save me in this spot.

    Thanks a million.
  • Options
    netblazernetblazer Member Posts: 26
    DenSter wrote:
    netblazer wrote:
    What is so different about Nav that makes it impossible to a competent DBA to add indexes for OUTSIDE applications that read data straight from SQL?
    <snip>
    As far as I know, if I don't create the index in Nav, Nav doesn't know about it so it can't mess with it. Also since I don't build any unique indexes the only hit comes from dml statements which I already tested the impact of.
    Few things 'weird' about NAV:
    - ALL queries generated by NAV are "SELECT *" queries.
    - NAV forces a certain cursor type that forces the specified index. Somehow they've managed to render the query optimizer useless. Basically what this means is that the KEY in NAV (which is really nothing more than a sort order) MUST be identical to the INDEX on SQL Server. Further, the C/AL code that uses this KEY must address ALL fields of the key.

    My word of caution was focused on working with NAV, not any other external app. There's nothing wrong with adding a key for an external app, but if you are working on performance problems within NAV, you have to know what you're doing and make sure that the NAV Key, the SQL Index and the SETCURRENTKEY in C/AL (which defines the ORDER BY clause) all match exactly.


    Hmm, C/AL. How the heck did you get a dev license? Did you cough up the 50K for a dev license just to be able to tune the indexes?
  • Options
    netblazernetblazer Member Posts: 26
    How do you get rid of this error message? I get it whenever I follow a link in the notification e-mail I get from this thread:

    The specified request cannot be executed from current Application Pool.
  • Options
    rdebathrdebath Member Posts: 383
    DenSter wrote:
    I've seen queries revert to a clustered index scan, even though the SQL index matched the first 5 fields in the ORDER BY clause, but was missing the last 2 fields.
    AIUI this isn't really because of the cursor type but because of the instability of the query optimiser. In your good case, where the index is an exact match for the ORDER BY, the use of a dynamic cursor overrides the optimiser's normal index selection. When the index isn't quite right, it used to be (with SQL 2000) that it was still considered a very good index for data selection. But now, with SQL 2005, the engine tends to use the clustered index if things are in the slightest bit adrift.

    As I said, it's the switch to SQL2005 that 'caused' the problems. I don't really understand why SQL2005 thinks Navision's SQL is in some way wrong, and I've seen non-Navision complaints about SQL2005 that pretty firmly point the finger at the SQL2005 optimiser being very over-optimistic about its own abilities. However, IMO, for NAV, things like the SQL preprocessing and the switch to dynamic cursors have improved operation on SQL2005 to the point that you can, carefully, start messing with things again without it falling in a heap.

    Anyway, that's where I'd like to leave it I think.

    PS: Oops: ("Modifying an existing unique index usually pretty safe") ->
    ("Modifying an existing non-unique index usually pretty safe")
  • Options
    DenSterDenSter Member Posts: 8,304
    edited 2010-05-03
    rdebath wrote:
    AIUI this isn't really because of the cursor type but because of the instability of the query optimiser.
    Whatever the cause, you just made the argument NOT to modify the index, because "something will not compute properly". I'm just sharing my experience of what does and does not work. This behavior has been explained to me by NAV team members, so I would consider that to be reliable information, and the way I understand it is that it has everything to do with the cursor type (although not all cursors are dynamic, so the optimizer explanation also makes sense). It's entirely possible that I misunderstand it though, and probably due to the fact that there's more to it than just cursor type and optimizer, but that does not take away from what I have seen for myself what does and does not work. Modifying indexes directly on SQL Server, for NAV performance problems, when you don't have a clear understanding of both SQL Server and NAV, is simply not the right thing to do. I've been involved in many SQL Server performance projects, and simply undoing changes made directly to the SQL indexes is often the first big step in performance improvement.
    rdebath wrote:
    Anyway, that's where I'd like to leave it I think.

    PS: Oops: ("Modifying an existing unique index usually pretty safe") ->
    ("Modifying an existing non-unique index usually pretty safe")
    I don't want to leave it there, because you are giving people incomplete information. It is NOT always safe to make changes to indexes, and it WILL cause performance problems if you don't look at what works on which versions. If you want to do it right on current versions (NAV 5.0 SP1 and up, SQL Server 2005 and up), when dealing with NAV-specific performance problems, you need to leave the SQL indexes alone. If you can prove that there is a problem, or if you can think of a more efficient index, then you MUST do THREE things:
    1 - Make the change in the NAV table object, NOT on SQL Server. Modify the actual key fields, NOT the SQLIndex. The index on SQL Server must match the key in NAV exactly.
    2 - Make sure that the key change flows through into the SQL index.
    3 - Make sure that all C/AL code is specific and explicit: SETCURRENTKEY(<AllFieldsInTheKeyOrder>). This of course comes with other problems, such as which fields are filtered, and so on.
    3a - If you're modifying an existing key, you also need to do a thorough where-used analysis and make sure that all related objects are updated to use the new key.
  • Options
    DenSterDenSter Member Posts: 8,304
    Look, I don't agree with what they did, but basically what they've done is modify the NAV-SQL communication to behave the same way the C/SIDE DBMS used to work, because that's what everybody is used to. I don't like that you have to be THIS careful with it. I wish it were like before, when all you needed to do to make most implementations absolutely fly was modify some indexes. I wish they would let SQL Server decide on all queries; I wish we could specify which fields to select.

    It is what it is though, and you have to adapt. I was floored when I saw a positively non-selective index forced into a query, on a table with millions of records, yet duration was 0ms across thousands and thousands of queries. I made the index more selective and it slowed to a crawl, because the key specified in C/AL did not have an index with the same fields.

    Say what you will, but once you figure out how it works and what to do about it, it is pretty straightforward to work on, although much too time consuming.
  • Options
    DenSterDenSter Member Posts: 8,304
    netblazer wrote:
    Hmm, C/AL. How the heck did you get a dev license? Did you cough up the 50K for a dev license just to be able to tune the indexes?
    I work for a NAV partner, and we have a dev license. It doesn't cost 50K, but you do need to become certified NAV partner in order to get one.
  • Options
    netblazernetblazer Member Posts: 26
    DenSter wrote:
    netblazer wrote:
    Hmm, C/AL. How the heck did you get a dev license? Did you cough up the 50K for a dev license just to be able to tune the indexes?
    I work for a NAV partner, and we have a dev license. It doesn't cost 50K, but you do need to become certified NAV partner in order to get one.


    How much is it and how do I get one?
  • Options
    rdebathrdebath Member Posts: 383
    DenSter wrote:
    simply undoing changes made directly to the SQL indexes is often the first big step in performance improvement.
    So you're saying the DBA didn't do their job and check that the changes made actually fixed things without breaking something important. Okay. (Or is this V5.0's premature key optimisation?)
    DenSter wrote:
    Look, I don't agree with what they did, but basically, what they've done is modify the NAV-SQL communication to behave the same way as the C/SIDE DBMS used to work, because that's what everybody is used to.
    Have they actually said that's the intention? I know they forced it with one of the V4 service packs (index hints by default), but my impression overall is that they've been fighting to stop the query optimiser making stupid mistakes. As most of these mistakes are related to the clustered index, I suspect getting back the ability to remove it in NAV (say, by explicitly setting the primary key to No rather than <No>) would make for some interesting performance changes; after all, the clustered index is supposed to be okay for most queries, not bad for everything except a table scan. I think I'll do some testing (by modifying at the SQL level; it's perfectly fine :wink: )
    DenSter wrote:
    I wish they would let SQL Server decide on all queries, I wish we could specify which fields.
    Hmm, I might have remembered an idea: if we could specify that the ORDER BY should match the SQLIndex (including no ORDER BY), the optimiser would probably get less confused. It would need client-side sorting, but that's just a temp table, easy ... probably ... damn. (Imagine a sort that's half SQL and half native... but SQL needs something; I suppose we're back to SETCURRENTKEY() with no arguments for unsorted.)

    I don't know what syntax you'd use for specifying the fields though; it really doesn't match the ISAM model of the C/AL code. A very long list of field names would look silly and be very error-prone. Perhaps a second table record as an argument: the data is loaded into that record, then TRANSFERFIELDS() into the record we're actually using. Only the fields common to both actually get fetched from the database.

    BTW: I'm used to SQL2000, not native; you don't have to specify a SETCURRENTKEY at all and SQL will be just fine. But right now it feels like the native DB is better in all respects. :x
  • Options
    DenSterDenSter Member Posts: 8,304
    rdebath wrote:
    DenSter wrote:
    simply undoing changes made directly to the SQL indexes is often the first big step in performance improvement.
    So you're saying the DBA didn't do their job and check that the changes made actually fixed things without breaking something important. Okay. (Or is this V5.0's premature key optimisation?)
    No, what I am saying is that a lot of times I will come into a new performance project and notice that someone has added all sorts of indexes, using a variety of tools. After doing my due diligence, and running the proper scripts to make sure, I often establish that none of those indexes are actually used, and simply remove them. Often this provides the first big step in performance improvement, because it's a bunch of indexes that no longer need to be maintained.
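
    The "proper scripts" aren't shown, but an unused-index check of this kind usually reads sys.dm_db_index_usage_stats; a minimal sketch (remember the counters reset on instance restart, so judge over a long uptime):

    ```sql
    -- Minimal sketch of an unused-index check via the usage-stats DMV.
    -- High user_updates with zero seeks/scans/lookups = removal candidate.
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS s
           ON s.object_id = i.object_id
          AND s.index_id = i.index_id
          AND s.database_id = DB_ID()
    WHERE i.type > 0                      -- skip heaps
      AND i.is_primary_key = 0
      AND i.is_unique_constraint = 0
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    ORDER BY s.user_updates DESC;
    ```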

    rdebath wrote:
    DenSter wrote:
    Look, I don't agree with what they did, but basically, what they've done is modify the NAV-SQL communication to behave the same way as the C/SIDE DBMS used to work, because that's what everybody is used to.
    Have they actually said that's the intention? I know they forced it with one of the V4 service packs (index hints by default), but my impression overall is that they've been fighting to stop the query optimiser making stupid mistakes. As most of these mistakes are related to the clustered index, I suspect getting back the ability to remove it in NAV (say, by explicitly setting the primary key to No rather than <No>) would make for some interesting performance changes; after all, the clustered index is supposed to be okay for most queries, not bad for everything except a table scan. I think I'll do some testing (by modifying at the SQL level; it's perfectly fine :wink: )
    Funny :) No they did not say that, they would never say that. If they HAD said that, I would not be able to repeat it, since I am under NDA with them. It's my personal conclusion based on the fact that a lot of modifications that they made to NAV-SQL communication have the odd effect of acting similar to native. Your term "stupid mistakes" is a very subjective description of the optimizer's behavior. Personally I've had a lot of success tweaking indexes and providing SQL Server with indexes that it would actually use.

    rdebath wrote:
    DenSter wrote:
    I wish they would let SQL Server decide on all queries, I wish we could specify which fields.
    Hmm, I might have remembered an idea: if we could specify that the ORDER BY should match the SQLIndex (including no ORDER BY), the optimiser would probably get less confused. It would need client-side sorting, but that's just a temp table, easy ... probably ... damn. (Imagine a sort that's half SQL and half native... but SQL needs something; I suppose we're back to SETCURRENTKEY() with no arguments for unsorted.)

    I don't know what syntax you'd use for specifying the fields though; it really doesn't match the ISAM model of the C/AL code. A very long list of field names would look silly and be very error-prone. Perhaps a second table record as an argument: the data is loaded into that record, then TRANSFERFIELDS() into the record we're actually using. Only the fields common to both actually get fetched from the database.

    BTW: I'm used to SQL2000, not native; you don't have to specify a SETCURRENTKEY at all and SQL will be just fine. But right now it feels like the native DB is better in all respects. :x
    I'm not saying it would be easy, I just wish we COULD specify fields and have the ability to prevent SELECT * queries. If by saying you are used to SQL 2000, you have not worked with 2005 or 2008, then that explains a lot, because a LOT has changed since. I could have sworn though that you've been talking a lot about 2005
  • Options
    David_SingletonDavid_Singleton Member Posts: 5,479
    DenSter wrote:
    ...
    I'm not saying it would be easy, I just wish we COULD specify fields and have the ability to prevent SELECT * queries. If by saying you are used to SQL 2000, you have not worked with 2005 or 2008, then that explains a lot, because a LOT has changed since. I could have sworn though that you've been talking a lot about 2005

    Here is the solution to that:

    Select * FROM and how to do it better in NAV
    David Singleton
  • Options
    rdebathrdebath Member Posts: 383
    DenSter wrote:
    After doing my due diligence, and running the proper scripts to make sure, I often establish that none of those indexes are actually used, and simply remove them. Often this provides the first big step in performance improvement, because it's a bunch of indexes that no longer need to be maintained.
    Ah, over-indexing. Well, it's natural; C/SIDE is an ISAM-based language, after all. The shame is that Microsoft tried to fix that for the core app in V5.0 and SQL2005 bit them on the arse. Me too; who would have guessed that those changes were, in fact, premature optimisation.
    DenSter wrote:
    No they did not say that, they would never say that.
    Okay then, well, I seem to always get that effect if I'm trying to optimise any SQL (not just MS) for the general case (ie stable performance). You avoid complex joins (sometimes any joins), you avoid anything that could cause the optimiser to choose any sort of table scan or 'enhanced' join. You choose an index when you write the code (C/SIDE in this case) and convince the optimiser to use it... BUT don't use index hints because someone might change things later without thinking about it and you need to leave room for the DBA to find a better index in the real world.
    DenSter wrote:
    Your term "stupid mistakes" is a very subjective description
    Oooo, feels the burn, okay, what's this if it isn't a stupid mistake on the part of the optimiser
    I was floored when I saw a positively non-selective index forced into a query, on a table with millions of records, yet duration was 0ms across thousands and thousands of queries. I made the index more selective and it slowed to a crawl, because the key specified in C/AL did not have an index with the same fields.
    Note that for once this isn't aimed directly at Microsoft; the algorithms and techniques used to find the 'best' query plan are inherently chaotic and so will always lead to surprises in some cases. I guess the problem is that they've been trying to optimise too hard and managed to optimise specifically for their test data rather than the general case.
    DenSter wrote:
    I've had a lot of success tweaking indexes and providing SQL Server with indexes that it would actually use.
    Of course, a natural effect of over-optimisation: if you manage to tweak into a pattern that the optimiser recognises correctly you will get superb performance. But, what's this, you have to hand-optimise your code for the optimiser ... Okaaay. But, Shirley, if the optimiser were doing a good job you wouldn't be having a lot of success with just "tweaking indexes"; you'd need special knowledge of the data in this particular database.
    DenSter wrote:
    I'm not saying it would be easy, I just wish we COULD specify fields and have the ability to prevent SELECT * queries.
    Hey, me too! I want to be able to use covering indexes and I don't mean faking them by misusing indexed views!
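    Covering indexes are easy to demonstrate outside NAV. A minimal sketch with Python's sqlite3 (SQLite reports "COVERING INDEX" in its query plan when the index alone satisfies the query; SQL Server 2005+ gets the same effect with CREATE INDEX ... INCLUDE (...); the table here is invented):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE entry (no INTEGER PRIMARY KEY, cust TEXT, amount REAL, note TEXT)")
    con.execute("CREATE INDEX ix_cust_amount ON entry (cust, amount)")
    con.executemany("INSERT INTO entry (cust, amount, note) VALUES (?, ?, ?)",
                    [("C%03d" % (i % 50), i * 1.5, "x" * 20) for i in range(1000)])

    # Query touching only indexed columns: satisfied from the index alone,
    # no lookup into the table rows at all.
    plan = con.execute(
        "EXPLAIN QUERY PLAN SELECT cust, SUM(amount) FROM entry "
        "WHERE cust = 'C001' GROUP BY cust").fetchall()
    covered = any("COVERING INDEX" in row[3] for row in plan)

    # SELECT * must visit the table rows as well -- the index can locate
    # them, but cannot cover the query.
    plan2 = con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM entry WHERE cust = 'C001'").fetchall()
    covered2 = any("COVERING INDEX" in row[3] for row in plan2)

    print(covered, covered2)
    ```

    Which is exactly why a field list in C/AL would matter: as long as every read is SELECT *, no index can ever cover a NAV query.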
    DenSter wrote:
    If by saying you are used to SQL 2000, you have not worked with 2005 or 2008, then that explains a lot, because a LOT has changed since. I could have sworn though that you've been talking a lot about 2005.
    I'm used to SQL2000 in the same way that you said 'people' are used to the Native database; for me the Native database was a somewhat substandard ISAM (it only has table locking! WTH!). (BTW: _WAS_ — I'm actually quite fond of it now ... ghod knows why though.)

    But SQL2000, when it was new, was pretty similar to most other SQL DBs I've used over the years: some strong features (e.g. automatic statistics) and only one annoying surprise that I remember (it can't 'skip' along an index if given an "OR" or "IN" clause).
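    The classic workaround for that limitation is to rewrite the OR predicate as a UNION ALL of single-value queries, so that each branch becomes a plain index seek on engines that cannot skip along the index. A portable sketch of the rewrite (Python's sqlite3, invented table; note SQLite itself handles OR fine, so this only demonstrates that the two forms are equivalent, not the original engine limitation):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (k INTEGER, v TEXT)")
    con.execute("CREATE INDEX ix_k ON t (k)")
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    [(i % 10, str(i)) for i in range(100)])

    # The problematic form: a single scan-or-skip over the OR predicate.
    or_rows = con.execute(
        "SELECT k, v FROM t WHERE k = 2 OR k = 7 ORDER BY k, v").fetchall()

    # The rewrite: one straightforward index seek per value, concatenated.
    union_rows = con.execute(
        "SELECT k, v FROM t WHERE k = 2 "
        "UNION ALL "
        "SELECT k, v FROM t WHERE k = 7 ORDER BY k, v").fetchall()

    print(or_rows == union_rows)
    ```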

    OTOH SQL2005 keeps giving nasty surprises whereas the neat features all appear to be of the class "That looks neat ... I'll have to see if I can use that somewhere ... oh.".

    SQL2008 seems to be more of the same, e.g. SQL2008's compression: who hasn't wished that the DB or tables could be compressed and still be safe and usable? Except in real life nearly everybody just buys bigger disks, or actually starts doing something about that "data archiving" project.

    BTW: I'm mostly using SQL2005 and we will probably be switching, in general, to SQL2008 some time this year. I will still have to keep SQL2000 around, but I think I can eliminate SQL2005 and nothing of value will be lost.
  • rdebath Member Posts: 383
    Like it, I thought I was close (but I was still in the dumb-idea zone of course).

    But you know, that's not just an SQLRecord, but more of a general 'subrecord' or 'structure'. I think it should allow transferring fields, because if you transfer via one of these (i.e. a subrecord with only the declared fields) it will actually help to make TRANSFERFIELDS a safer feature: the list of fields to be transferred is declared locally but is sufficiently 'global' to be a useful substitute for a tedious "hard-coded list of assignments" function.

    I also note that the efficiency of an SQLRecord could depend quite heavily on the structure of the database. This kind of hints that the SQLRecords related to a particular table should be attached to the table object in NAV not scattered around every object that uses the table. If this were the case it should help with the issues you foresee in passing SQLRecords around and perhaps more importantly the SQLRecord could, in theory, be used as the basis for partitioning the table (later that is).

    OTOH: if you want to avoid the subrecord idea, it just becomes an SQL field list attached to a normal Record variable. This has the danger with Native that, if the field list is completely ignored, there could easily be significant differences between code running on SQL and on Native.
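    For what it's worth, the 'subrecord with only the declared fields' idea can be mocked up in a few lines. This is purely a hypothetical sketch (no such SQLRecord type exists in C/SIDE; all names are invented), using Python's sqlite3 to show how a locally declared field list turns into an explicit column list instead of SELECT *:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customer (no TEXT, name TEXT, address TEXT, balance REAL)")
    con.execute("INSERT INTO customer VALUES ('C001', 'Acme', '1 Main St', 99.5)")

    class SubRecord:
        """Hypothetical 'SQLRecord': a record variable with a declared field
        list, so the generated SQL names only those columns, never SELECT *."""
        def __init__(self, table, fields):
            self.table, self.fields = table, fields

        def get(self, con, where, args):
            # Only the declared fields appear in the statement sent to SQL.
            sql = "SELECT %s FROM %s WHERE %s" % (
                ", ".join(self.fields), self.table, where)
            row = con.execute(sql, args).fetchone()
            return dict(zip(self.fields, row))

    cust = SubRecord("customer", ["no", "name"])
    rec = cust.get(con, "no = ?", ("C001",))
    print(rec)
    ```

    Attach such a declaration to the table object rather than to each caller, as suggested above, and the server-side layer could also use it to decide which index covers the query.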
  • David_Singleton Member Posts: 5,479
    rdebath wrote:
    ... for me the Native database was a somewhat substandard ISAM (it only has table locking! WTH!) (BTW: _WAS_ I'm actually quite fond of it now ... ghod knows why though.).

    ...

    Actually the Native database is based on optimistic concurrency and its versioning principle, which work extremely well if used properly. But it's really important to understand how this works to get the most out of it. Also don't forget that the database has not been updated in almost 15 years, so if you compare it to what was around in 1995 you will find that it was far from "substandard". The issue is that they stopped development. Some of the technology they experimented with but never released would have made it a real contender even today. But in reality, supporting two databases makes no sense.
    David Singleton
  • rdebath Member Posts: 383
    David_Singleton wrote:
    Also don't forget that the database has not been updated in almost 15 years, so if you compare it to what was around in 1995 you will find that it was far from "substandard".
    Unfortunately, in 1995 I was mostly using Informix Online, it would hardly be a fair comparison. At the time anything that wasn't big iron looked rather poor in comparison. In fact back then Online probably had a feature list that would look good in comparison to current software and there are some features I still miss, like the continuous log backup to tape. If you look at the XPS line (which I didn't use) it still makes the current MS-SQL look like a POS, and I'm not talking about shops.

    But yes, the ability to have a consistent view of the database at any time WITHOUT LOCKING is something that really stands out about the Navision Native DB. So, does it make up for the table locks? Not compared to Online, but still I suspect it wouldn't have taken much more to even it out.

    However, I came across Navision five years later.
  • AngelKill878 Member Posts: 1
    Actually I'm planning to use SQL Server but I don't have enough knowledge about it. Can you guys please help me with my problem? I've been using another server for a long time and now I want to try something else, because I've encountered some problems with my current server.