Hi experts, I have a question on database size. Our database is running on SQL Server, and currently 60% of the database is used. Even though I searched the forum and read the manual, I am a little hesitant about expanding the database size.
The MDF file is 1200 with 10% file growth, and the NDF is 3766 with the same file growth setting as the MDF. I regularly optimize the tables. Could someone advise? Appreciate it!
Comments
http://www.BiloBeauty.com
http://www.autismspeaks.org
60% is ok. My general rule is to keep it under 80%.
Shrinking is something I only do on development servers. On production ONLY if the data has shrunk a lot and I know that the free space will never be reused.
Do not use file growth in %; use a fixed size instead (I generally use 50 MB). BUT automatic file growth should ONLY be used as a failsafe, not as something to be relied upon. Growing a database file should be done manually, at a moment when it can be done (and preferably by more than just 50 MB). After growing a DB, check whether the file is still in one piece; otherwise defragment it for maximum performance.
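For illustration, a minimal sketch of checking the current growth settings and switching to a fixed increment; the database and logical file names (MyNavDb, MyNavDb_Data, MyNavDb_Log) are placeholders, not the ones from the original post:

-- Current growth settings: growth is a page count unless is_percent_growth = 1
SELECT name, physical_name, size / 128 AS size_mb, growth, is_percent_growth
FROM MyNavDb.sys.database_files;

-- Replace the 10% growth with a fixed 50 MB failsafe increment
ALTER DATABASE [MyNavDb] MODIFY FILE (NAME = N'MyNavDb_Data', FILEGROWTH = 50MB);
ALTER DATABASE [MyNavDb] MODIFY FILE (NAME = N'MyNavDb_Log', FILEGROWTH = 50MB);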
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
Autogrowth (AG) should only happen as a kind of emergency measure, e.g. adding a chunk to the db files so that work can continue.
If autogrowth happens too often, the db files fragment physically, and additionally the log fragments logically, which means an overhead of virtual log files (VLFs) is created.
IF autogrowth happens, THEN it is preferable to add a fixed-size chunk, not a percentage. Adding something to a database file (mdf/ndf) should be a matter of split seconds, as only the file system's allocation information (the "File Allocation Table", so to speak) is updated; nothing really happens to the physical file [assuming instant file initialization is enabled]. Thus, it should be safe to add a decent chunk, e.g. 500 MB to 1000 MB.
Adding a chunk to the log (ldf) will indeed result in zero-initialisation of the added part; hence, depending on the chunk size, this could take a while longer (I'd say 10 to 30 seconds per 1000 MB).
Of course, all this depends on the technical specifications of the disk-subsystem!
So I suggest creating the db files as large as is sufficient right from the start. The log file should be sized to the estimated maximum (which e.g. also depends on the Recovery Model and the backup frequency).
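Again just a sketch with placeholder names and example sizes; SIZE must be larger than the file's current size, and the right target values depend on your own growth estimate:

-- Grow the data files manually by a decent chunk (target sizes are examples only)
ALTER DATABASE [MyNavDb] MODIFY FILE (NAME = N'MyNavDb_Data', SIZE = 2000MB);
ALTER DATABASE [MyNavDb] MODIFY FILE (NAME = N'MyNavDb_Data2', SIZE = 5000MB);

-- Size the log towards its estimated maximum; the added part is zero-initialised, so this takes longer
ALTER DATABASE [MyNavDb] MODIFY FILE (NAME = N'MyNavDb_Log', SIZE = 4000MB);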
Thus, if the db files are sufficiently sized, I prefer to have a job which monitors the fill degree of these files (see below; this is part of the "NAV/SQL Performance Toolbox").
Once the amount of free space drops below 20%, it raises an error, which sends a notification to an operator; a human being can then take care of the file sizes etc. As mentioned before, 20% free space is the usual best practice. SQL Server actually needs this free space to re-sort indexes during index maintenance (ALTER INDEX): it writes the index into the free space and then writes it back defragmented, in optimized order (similar to defragmenting a hard disk drive).
[This only applies if the "Sort in tempdb" option (SORT_IN_TEMPDB) is not used]
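The monitoring job itself is not shown here; the following is only a rough sketch of such a fill-degree check (not the Performance Toolbox script), assuming it runs inside the NAV database, e.g. as a SQL Agent job step whose failure notifies an operator. The 20% threshold matches the rule above:

-- Free space per data file; size and FILEPROPERTY(...,'SpaceUsed') are 8 KB page counts, /128 = MB
SELECT name,
       size / 128 AS total_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';  -- data files (mdf/ndf) only

-- Force an error when any data file drops below 20% free space
IF EXISTS (SELECT 1 FROM sys.database_files
           WHERE type_desc = 'ROWS'
             AND (size - FILEPROPERTY(name, 'SpaceUsed')) < 0.2 * size)
    RAISERROR('Less than 20 percent free space left in a data file.', 16, 1);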
So basically one could calculate exactly the amount of free space really required:
it's 1.5 times the size of the largest index (the script above contains some commented-out lines which would do this).
For example, if the largest index is 10 GB, then you'll need 15 GB of free space to defragment this index completely. Of course, this is the worst-case scenario, so I recommend sticking to the 20% best-practice rule.
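For what it's worth, a sketch of that calculation with the standard DMVs (not the commented-out lines from the toolbox script); the 1.5 factor is the rule from above:

-- Largest index in the current database and the free space the 1.5x rule asks for
SELECT TOP (1)
       OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       SUM(ps.used_page_count) / 128 AS index_size_mb,          -- 8 KB pages -> MB
       SUM(ps.used_page_count) / 128 * 1.5 AS free_space_needed_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY ps.object_id, i.name
ORDER BY SUM(ps.used_page_count) DESC;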
NEVER USE AUTOSHRINK (on a production system)! Together with "Auto Growth" this would be nothing but a "disk benchmark". Also, AutoShrink can re-sort indexes more or less upside-down (pages are moved from the end of the file towards the front), thus causing additional fragmentation.
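To check for and switch off the setting (database name is a placeholder again):

-- Databases that still have auto shrink enabled ...
SELECT name, is_auto_shrink_on FROM sys.databases WHERE is_auto_shrink_on = 1;

-- ... and how to switch it off
ALTER DATABASE [MyNavDb] SET AUTO_SHRINK OFF;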
Cheers,
Jörg
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool