Hi guys, I'm coming from a DBA background (more dev and reports than physical setups).
I just had a quick look at our server after 3 months in production post go-live. Apparently the initial install was made with 32 GB of datafiles. That seems appropriate for the next 2-3 years of data. However, after looking at where the data was placed, I realised that about 99% of the space will only be used on the 2nd filegroup (PneusSuperieur_1_Data) rather than the primary filegroup (where each file has 16 GB of space).
As far as I can tell no transactional table is placed on the primary filegroup nor any indexes.
My question is quite simple: is it safe to shrink that file back to, let's say, 500 MB (just in case)?
I tried asking that question to the company that did the install, but they looked like deer caught in the headlights.
Can anyone offer recommendations about that primary file's size and filegrowth settings?
TIA.
Assuming it's a default install, there will be only the MS-SQL system tables in the primary filegroup. The default size and expansion for this are 40 MB and 10%, which is often more than enough.
NAV is very tolerant of the structure of the database at the file and filegroup level; for normal operation it doesn't care at all, though you may be able to confuse its manual file expansion tool. If you do your maintenance at the SQL level you won't care. Generally, do what you need to; it'll cope.
For the main file, a 16 GB start with a 512 MB expansion would fit with what you've got already.
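Before touching anything, it's worth confirming that PRIMARY really holds nothing but system objects. A query along these lines (a sketch; the filegroup layout is the one described above) lists user objects by filegroup:

```sql
-- List user tables and indexes per filegroup; if PRIMARY comes back empty,
-- only system objects live there and shrinking it is low-risk.
SELECT fg.name AS filegroup_name,
       o.name  AS object_name,
       i.name  AS index_name,
       p.rows  AS row_count
FROM sys.indexes AS i
JOIN sys.objects AS o
  ON o.object_id = i.object_id AND o.is_ms_shipped = 0
JOIN sys.partitions AS p
  ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN sys.filegroups AS fg
  ON fg.data_space_id = i.data_space_id
ORDER BY fg.name, o.name, i.index_id;
```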
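If the primary file really is empty of user data, the shrink itself is a one-liner. A sketch, assuming the logical file name is `MyDb_Data` (check `sys.database_files` for the real name first; the 500 MB and 512 MB figures are just the values discussed above):

```sql
USE MyDb;
-- Shrink the near-empty primary data file down to 500 MB.
DBCC SHRINKFILE (MyDb_Data, 500);

-- Replace percentage growth with a fixed increment so growth events
-- stay predictable.
ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 512MB);
```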
TVision Technology Ltd
Yes, I think it's safe to assume that it's the default install. Nothing against the guy who did the install, but outside the world of NAV he seems pretty clueless about SQL Server. I completely lost him right after talking about 2 files on the prod database :-$ .
Anyhow, I think I'll leave the db alone at 32 GB for now (2 GB of data with only 10% of the year's sales). So 32 GB should be more than enough for the first year, then I'll re-evaluate after 1 full year in prod.
Thanks for the quick response.
One thing I'll mention is please do not modify or add or remove any indexes in SQL Server directly. If you find that certain indexes are not used, or if you want to modify anything, they should be modified in the NAV table designer. There are some very nasty performance side effects when indexes on SQL Server don't match exactly what is in the C/SIDE table object, or if C/AL code does not match the index exactly.
RIS Plus, LLC
Can you expand on this? I would never dream of deleting or modifying native NAV indexes, but I've never seen any harm in creating new indexes for my own reports (SQL Server Reporting Services) or even our order web application.
I've already put defrag jobs in place which only take care of the big and fragmented indexes, with a switch between rebuild and defrag at 30% fragmentation. I've had this running on 2 prod servers for about 2 years now and, if anything, the performance is only better. On top of that, I've never had any problems with NAV while doing this.
I know NAV devs can hit the magic "tune SQL Server" button, which simply rebuilds all the indexes in the background, but let's just say it's not a good idea to do this during production hours ](*,) .
Moreover, native SQL Server indexes offer a lot more capabilities and options (online, sort in tempdb, included columns, filtered indexes... I'm on 2008 btw). I've found that I can tune my queries much further that way than by using the NAV GUI.
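The rebuild-vs-reorganize switch described here can be sketched roughly like this (table names and the 10%/30% thresholds are the usual rules of thumb, not anything NAV-specific):

```sql
-- For each index in the current database, suggest REBUILD above 30%
-- fragmentation and REORGANIZE between 10% and 30%; below 10%, leave alone.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent  AS frag_pct,
       CASE WHEN ips.avg_fragmentation_in_percent >= 30
            THEN 'ALTER INDEX ... REBUILD'
            ELSE 'ALTER INDEX ... REORGANIZE'
       END                               AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND i.name IS NOT NULL   -- skip heaps
ORDER BY frag_pct DESC;
```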
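As an illustration of the SQL-only options mentioned here, a made-up reporting index (the table, column, and index names are invented; ONLINE = ON also requires an edition that supports online operations):

```sql
-- Hypothetical reporting index combining 2005/2008-only features:
-- included columns, a filtered index, online build, sort in tempdb.
CREATE NONCLUSTERED INDEX IX_Report_CustomerDate
    ON dbo.[Comp$Sales Line] ([Sell-to Customer No_], [Posting Date])
    INCLUDE ([Amount])
    WHERE [Document Type] = 1            -- filtered index (2008+)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```

None of this is expressible from the NAV key designer, which is the point being made.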
Of course I always stay away from adding unique indexes to make sure I don't break anything.
In simple terms: just like the NAV partner does not know enough about SQL, you don't know enough about Navision. Unfortunately Navision does not use the standard queries you would expect. The reasons come down to maintaining backward compatibility, but that is not significant to this discussion.
With the small size of this database you shouldn't have any issues, but as databases get large (100+ GB) it takes someone who knows both Navision AND SQL to keep the system running smoothly.
I am a bit surprised, though, that the partner made the primary file (which I assume is the mdf file) so large. Not a good sign.
I think I'd disagree with you, but only slightly.
Actually, adding indexes is usually okay. BUT, if possible, it's best to add the index inside Navision to keep the data dictionaries in sync so that Navision doesn't do something stupid. Deleting indexes is dubious at best, though. Modifying an existing index is usually pretty safe too, as long as you make sure the new index is very similar to the one NAV thinks is there and you're not too worried about NAV reconstructing it. Oddly enough, the only bad effect an extra unique constraint will have is a big nasty error message when you try to violate it.
The real problem with doing any of this sort of modification is that the SQL that NAV generates isn't what Microsoft optimised SQL2005 and later for. The code generated by NAV used several tricks that would make SQL2000 perform nearly as well as the old native database on fixed tasks like posting and unfiltered forms. But these tricks caused the SQL2005 query optimiser to over-optimise and remember far too many query plans, causing SQL2005 to make some mind-numbingly dumb decisions. NAV2009 has mostly worked around these by dumbing down the SQL sent (and using different cursor types), which has mostly cleared the serious problems, but the query optimiser is still rather fragile in uncommon situations.
So, in summary, a good DBA should be okay with it but they need to take care.
TVision Technology Ltd
What is so different about NAV that makes it impossible for a competent DBA to add indexes for OUTSIDE applications that read data straight from SQL?
I am aware of the SQL 2000 compatibility issues (I've worked on everything from SQL 7 all the way to SQL 2008 in other ERPs), as well as of the differences between SQL versions regarding indexes.
As far as I know, if I don't create the index in NAV, NAV doesn't know about it, so it can't mess with it. Also, since I don't build any unique indexes, the only hit comes from DML statements, whose impact I've already tested.
BTW, yes, I was referring to the primary file. I'm assuming he didn't understand that it wasn't going to be used and just went with: that's the way we've always done it, so it must be OK.
Ya, I've seen the :shock: mountains of queries NAV runs for a simple posting (for compatibility, I assume, or just because they don't know better). I stopped counting after a couple hundred queries just to post (and not bill) an order with 1 item on it.
Maybe this should be in another discussion, but have any of you had problems with the fact that NAV uses so many NOLOCK and READ UNCOMMITTED hints? I can't help but think this can't be good for data integrity, since it's not enforced at all in the DB.
SELECT TOP 1 * FROM [Comp$G_L Entry_] WITH (UPDLOCK) ORDER BY [Entry No_] DESC
This locks the last entry in the G/L and prevents any other client from getting into a posting routine until this one is finished. So really, no other lock (or lack of one) matters in the slightest for posting, and that's just about every lock that matters.
And yes, it kills multi-user.
TVision Technology Ltd
It used to be possible to tweak the heck out of indexes and increase performance dramatically, either by changing the order of the fields in SQLIndex or even the fields themselves, by being very selective in which SIFT levels to maintain, etc. This was in the days that NAV left index selection up to SQL Server. So if you had a piece of code that filtered on customer number and posting date, but sorted by some compound key with 12 fields, you could get away with just modifying the index and performance would get much better. With the current cursor types, though, the NAV key, the SETCURRENTKEY, and the SQL index must all match. You simply can't tune indexes like you used to anymore, and what you are saying ("Modifying an existing unique index usually pretty safe") is simply not true anymore on current versions.
Actually, with the cursor types that NAV now seems to prefer, it makes a HUGE difference if you 'tweak' indexes. I've seen queries revert to a clustered index scan even though the SQL index matched the first 5 fields in the ORDER BY clause but was missing the last 2 fields. Individual queries would have up to 300ms duration because they reverted to a clustered index scan. We reverted to having the NAV key have exactly the same fields as the SQL index, made sure that the C/AL used that particular key, and query performance went down to 0ms.
RIS Plus, LLC
- ALL queries generated by NAV are "SELECT *" queries.
- NAV forces a certain cursor type that forces the specified index. Somehow they've managed to render the query optimizer useless. Basically what this means is that the KEY in NAV (which is really nothing more than a sort order) MUST be identical to the INDEX on SQL Server. Further, the C/AL code that uses this KEY must address ALL fields of the key.
My word of caution was focused on working with NAV, not any other external app. There's nothing wrong with adding a key for an external app, but if you are working on performance problems within NAV, you have to know what you're doing and make sure that the NAV Key, the SQL Index and the SETCURRENTKEY in C/AL (which defines the ORDER BY clause) all match exactly.
RIS Plus, LLC
](*,) WOW, that explains a lot. I had my site destroyed (not a single query could go through) for minutes at a time while a NAV dev was on site debugging in the live DB. Since my site needs to use almost all the ledgers, no wonder I couldn't get anything through. Now picture that the website has 3000 logins and we do 2+ million a week in sales... We lost almost 30 minutes of sales in total...
That little dance went on for about 90 minutes until I had HIS login name on his machine as the deadlock starter, with about 20 victims. Now, with proof on file, I had no problem killing his session and disabling his login from the server... but let's just say he wasn't happy about it.
Anybody had to deal with something similar? Any thoughts on the matter?
I personally don't use hints at all, but from what you just told me, even NOLOCK wouldn't save me in this spot.
Thanks a million.
Hmm, C/AL, how the heck did you get a dev license?? Did you cough up the 50K for a dev license just to be able to tune the indexes?
As I said, it's the switch to SQL2005 that 'caused' the problems. I don't really understand why SQL2005 thinks Navision's SQL is in some way wrong, and I've seen non-Navision complaints about SQL2005 that pretty firmly point the finger at the SQL2005 optimiser being very over-optimistic about its own abilities. However, IMO, for NAV, things like the SQL preprocessing and switching to dynamic cursors have improved operation on SQL2005 to the point that you can, carefully, start messing with things again without it falling in a heap.
Anyway, that's where I'd like to leave it I think.
PS: Oops: ("Modifying an existing unique index usually pretty safe") ->
("Modifying an existing non-unique index usually pretty safe")
TVision Technology Ltd
I don't want to leave it there, because you are giving people incomplete information. It is NOT always safe to make changes to indexes; it WILL cause performance problems if you don't look at what works for which versions. If you want to do it right, on current versions (NAV 5.0SP1 and up, SQL Server 2005 and up), when dealing with NAV-specific performance problems, you need to leave the SQL indexes alone. If you can prove that there is a problem, or that you can think of a more efficient index, then you MUST do THREE things:
1 - Make the change in the NAV table object, NOT on SQL Server. Modify the actual key fields, NOT the SQLIndex. The index on SQL Server must match the key in NAV exactly.
2 - Make sure that the key change flows through into the SQL index.
3 - Make sure that all C/AL code is specific and explicit: SETCURRENTKEY(<AllFieldsInTheKeyOrder>). This of course comes with other problems, such as which fields are filtered, and so on.
3a - If you're modifying an existing key, you also need to do a thorough where-used analysis and make sure that all related objects are updated to use the new key.
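To make the "all three must match" rule concrete, here is a made-up example (the table, the key fields, and the `$1` index name are illustrative, not taken from a real object):

```sql
-- NAV key on "Cust. Ledger Entry":   Customer No.,Posting Date
-- C/AL:  CustLedgEntry.SETCURRENTKEY("Customer No.","Posting Date");
-- The SQL index NAV maintains must list exactly the same fields
-- (with the clustered key appended); if any one of the three differs,
-- the forced cursor/index combination can degrade to a clustered scan.
CREATE NONCLUSTERED INDEX [$1]
    ON dbo.[Comp$Cust_ Ledger Entry] ([Customer No_], [Posting Date], [Entry No_]);
```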
RIS Plus, LLC
It is what it is, though, and you have to adapt. I was floored when I saw that a positively non-selective index was forced into the query, on a table with millions of records, yet duration was 0ms on thousands and thousands of queries. I made the index more selective and it slowed to a crawl, because the key specified in C/AL did not have an index with the same fields.
Say what you will, but once you figure out how it works and what to do about it, it is pretty straightforward to work on, although much too time consuming.
RIS Plus, LLC
How much is it and how do I get one?
http://www.microsoft.com/dynamics/en/us ... ecome.aspx
RIS Plus, LLC
Humm, I might have remembered an idea: if we could specify that the order-by should match the SQLIndex (including no order-by), the optimiser would probably get less confused. It would need client-side sorting, but that's just a temp table, easy ... probably ... damn. (Imagine a sort that's half SQL and half native... but SQL needs something; I suppose we're back to "SETCURRENTKEY()" with no arguments for unsorted.)
I don't know what syntax you'd use for specifying the fields, though; it really doesn't match the ISAM model of the C/AL code. A very long list of field names would look silly and be very error-prone. Perhaps a second table record as an argument: the data is loaded into that record, then TRANSFERFIELDS() to the record we're actually using. Only the fields that are common actually get fetched from the database.
BTW: I'm used to SQL2000, not native; you don't have to specify a SETCURRENTKEY at all, SQL will be just fine. But right now it feels like the native DB is better in all respects. :x
TVision Technology Ltd
Funny. No, they did not say that; they would never say that. If they HAD said that, I would not be able to repeat it, since I am under NDA with them. It's my personal conclusion, based on the fact that a lot of the modifications they made to NAV-SQL communication have the odd effect of acting similar to native. Your term "stupid mistakes" is a very subjective description of the optimizer's behavior. Personally, I've had a lot of success tweaking indexes and providing SQL Server with indexes that it would actually use.
I'm not saying it would be easy; I just wish we COULD specify fields and had the ability to prevent SELECT * queries. If by saying you are used to SQL 2000 you mean you have not worked with 2005 or 2008, then that explains a lot, because a LOT has changed since. I could have sworn, though, that you've been talking a lot about 2005.
RIS Plus, LLC
Here is the solution to that:
Select * FROM and how to do it better in NAV
Okay then. Well, I seem to always get that effect if I'm trying to optimise any SQL (not just MS) for the general case (i.e. stable performance). You avoid complex joins (sometimes any joins); you avoid anything that could cause the optimiser to choose any sort of table scan or 'enhanced' join. You choose an index when you write the code (C/SIDE in this case) and convince the optimiser to use it... BUT you don't use index hints, because someone might change things later without thinking about it, and you need to leave room for the DBA to find a better index in the real world.
Oooo, feel the burn. Okay, what's this if it isn't a stupid mistake on the part of the optimiser? Note that for once this isn't aimed directly at Microsoft: the algorithms and techniques used to find the 'best' query plan are inherently chaotic and so will always lead to surprises in some cases. I guess the problem is that they've been trying to optimise too hard and managed to optimise specifically for their test data rather than the general case.
Of course, a natural effect of over-optimisation: if you manage to tweak into a pattern that the optimiser recognises correctly, you will get superb performance. But, what's this, you have to hand-optimise your code for the optimiser ... Okaaay. But, Shirley, if the optimiser were doing a good job you wouldn't be having a lot of success with just "tweaking indexes"; you'd have to have special knowledge of the data in this particular database.
Hey, me too! I want to be able to use covering indexes and I don't mean faking them by misusing indexed views!
I'm used to SQL2000 in the same way that you said 'people' are used to the native database. For me, the native database was a somewhat substandard ISAM (it only has table locking! WTH!) (BTW: _WAS_; I'm actually quite fond of it now ... ghod knows why, though).
But SQL2000, when it was new, was pretty similar to most other SQL DBs I've used over the years: some strong features (e.g. automatic statistics), and I only remember one annoying surprise (it can't 'skip' along an index if given an "OR" or "IN" clause).
OTOH, SQL2005 keeps giving nasty surprises, whereas the neat features all appear to be of the class "That looks neat ... I'll have to see if I can use that somewhere ... oh."
SQL2008 seems to be more of the same. E.g. SQL2008's compression: who hasn't wished that the DB or tables could be compressed and still be safe and usable? Except in real life nearly everybody just buys bigger disks, or actually starts doing something about that "data archiving" project.
BTW: I'm mostly using SQL2005 and we will probably be switching, in general, to SQL2008 some time this year. I will still have to keep SQL2000 around, but I think I can eliminate SQL2005 and nothing of value will be lost.
TVision Technology Ltd
But, you know, that's not just an SQLRecord, but more of a general 'subrecord' or 'structure'. I think it should allow TRANSFERFIELDS, because if you transfer via one of these (i.e. a subrecord with only the declared fields) it will actually help make TRANSFERFIELDS a safer feature, as the list of fields to be transferred is declared locally but is sufficiently 'global' to be a useful substitute for a tedious "hard-coded list of assignments" function.
I also note that the efficiency of an SQLRecord could depend quite heavily on the structure of the database. This rather hints that the SQLRecords related to a particular table should be attached to the table object in NAV, not scattered around every object that uses the table. If this were the case, it should help with the issues you foresee in passing SQLRecords around, and perhaps more importantly the SQLRecord could, in theory, be used as the basis for partitioning the table (later, that is).
OTOH: if you want to avoid the subrecord idea, it just becomes an SQL field list attached to a normal Record variable. This has the danger, with native, that if the field list is completely ignored there could easily be significant differences between code running on SQL and on native.
TVision Technology Ltd
Actually, the native database is based on optimistic concurrency and its versioning principle, which works extremely well if used properly. But it's really important to understand how this works to get the most out of it. Also, don't forget that the database has not been updated in almost 15 years, so if you compare it to what was around in 1995 you will find that it was far from "substandard". The issue is that they stopped development. Some of the technology they experimented with but never released would have made it a real contender even today. But in reality, supporting two databases makes no sense.
But yes, the ability to have a consistent view of the database at any time WITHOUT LOCKING is something that really stands out about the Navision native DB. So, does it make up for the table locks? Not compared to online operation, but still, I suspect it wouldn't have taken much more to even it out.
However, I came across Navision five years later.
TVision Technology Ltd