We run Nav 4.0 on SQL 2005. The db is 48GB. The backup goes to a separate disk from the data. The backup has not completed for a few weeks because there is not enough room on the backup disk. The log file on the backup disk is 11GB. The backup runs to about 90% complete and then errors out. I also run a separate Nav backup to tape as a backup to the backup, so I do have something to restore from in a worst-case scenario.
The recovery model is simple.
My main question is: what would happen if I go into the db Shrink function and shrink the log file? I would use the reorganize pages mode and shrink it to 1GB. I would do this at night with no users connected.
I did this once on an old db: I set the shrink size to 100MB, and the finished size was 600MB (the original log file was 2.5GB). So SQL will only shrink the log file as far as it can, regardless of what I put in the shrink-to field. Is that correct?
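For reference, I believe the T-SQL equivalent of that GUI shrink is something like this (the logical log file name here is just a placeholder; 'exec sp_helpfile' shows the real one):
[code]-- shrink the log file to roughly 1GB (target size is in MB)
DBCC SHRINKFILE (N'MyNavDb_Log', 1024);[/code]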
Thanks in advance.
Comments
You could shrink the log file down to a more manageable size (maybe 11GB is a bit too much), but be careful to leave enough room to hold all transactions between backups, so that the file never starts growing again, fragmenting along the way.
By the way, if your backup fails due to not having enough drive space, and you're asking about shrinking the log file, you must be saving the backup file on the same drive as the log file. If that is the case and the log drive fails, you lose the backup file along with the log, and you won't be able to recreate the database anyway.
RIS Plus, LLC
Of course, the best solution is to have the transaction log on separate disks and yet another disk for the backups...
MVP - Dynamics NAV
My BLOG
NAVERTICA a.s.
The best thing you can do is leave the log file alone.
RIS Plus, LLC
You use the simple recovery model. Mhm, ok, it's the easiest model: SQL Server maintains only a minimal amount of information in the transaction log and truncates the log each time the database reaches a transaction checkpoint, leaving no log entries for disaster recovery purposes. But this means it is not possible to restore such a database to a given point in time; you can only restore it to the exact time when a full backup was created. You therefore automatically lose any data modifications made between the time of the full backup and the time of the failure (for example, between an 07:00 backup and a 17:00 crash).
For this reason, for example when you have a high volume of transactions, we use the full recovery model. With this model, SQL Server preserves the transaction log until you back it up, which allows you to design a real disaster recovery plan. The brilliant part of this is that it supports a combination of full and differential database backups in conjunction with transaction log backups. ;-)
So you have the most flexibility when restoring databases using the full recovery model. And because data modifications are kept in the transaction log, the full recovery model allows you to restore a database to a specific point in time. It all depends on your backup cycle (for example, a transaction log backup every 10 minutes and a full backup once or twice per day).
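A sketch of that setup, assuming an example database name and backup path (in practice the backups would be scheduled, e.g. as SQL Agent jobs):
[code]-- switch to the full recovery model (example database name)
ALTER DATABASE MyNavDb SET RECOVERY FULL;
GO
-- full backup, once or twice per day; also needed once right after the
-- switch, before the first log backup
BACKUP DATABASE MyNavDb TO DISK = N'E:\Backup\MyNavDb_full.bak';
GO
-- transaction log backup, e.g. every 10 minutes
BACKUP LOG MyNavDb TO DISK = N'E:\Backup\MyNavDb_log.trn';
GO[/code]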
To truncate the log you can use the following:
[code]BACKUP LOG YourDataBaseName WITH TRUNCATE_ONLY;[/code]
The difference between truncate and shrink is that log truncation removes transactions from the log file, but it doesn't actually reduce the amount of space reserved for the file.
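You can see the difference for yourself with DBCC SQLPERF; the database and file names below are just examples:
[code]DBCC SQLPERF(LOGSPACE);                  -- note 'Log Size (MB)' and 'Log Space Used (%)'
BACKUP LOG MyNavDb WITH TRUNCATE_ONLY;
DBCC SQLPERF(LOGSPACE);                  -- the used % drops, but the file size is unchanged
DBCC SHRINKFILE (N'MyNavDb_Log', 1024);  -- only a shrink actually releases disk space[/code]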
Regards
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
All the points the guys made above are valid, and if you need transaction log backups you should get SQL to write them to separate disks and, if it's really important, replicate them to an external box during the day.
The commands below are what I use to truncate a log in a case like this; there are no ill effects from doing this on a live system in simple recovery mode. I've reduced the ldf file on live dbs dozens of times without a problem, though in every case the db was in (or was switched to) simple recovery mode.
Paste this into the SQL 2000 Query Analyzer window and press Ctrl-Shift-M, then type the db name (if you are not sure of the file names, select the db and execute 'exec sp_helpfile').
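A minimal sketch of the kind of template meant here, assuming all names are placeholders (Ctrl-Shift-M in Query Analyzer prompts for the values of the <...> template parameters):
[code]USE <db_name, sysname, MyNavDb>
GO
BACKUP LOG <db_name, sysname, MyNavDb> WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (<log_file_name, sysname, MyNavDb_Log>, 1024) -- target size in MB
GO[/code]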
What you need to understand is that SQL Server does absolutely NOTHING without a transaction log. Everything it does depends on the performance of your transaction log. Regardless of the recovery model, it DOES write ALL transactions to the log file; it just sets a checkpoint after each transaction. If you were to read a log file from a database with the full recovery model and one with the simple recovery model, they would be almost identical, except that in the simple one you'd see truncation points after each transaction.
Look, you MUST have a well-performing transaction log, no matter what your recovery model is. Even with the simple recovery model you STILL need a fast drive to support it, because SQL Server will not move on to the next step in a transaction until the transaction log says it's ok. Many people make the mistake of thinking that because it's the simple recovery model they can just dump the log file someplace unimportant. BUT for every recovery model it is vitally important that you take good care of the transaction log.
RIS Plus, LLC
When a user makes a change to the database (insert, modify, delete, etc.), the change is immediately written to the transaction log and also held in memory. Once the change is written to the transaction log, the user's connection is released and they may continue with the next task. SQL Server then takes the changes held in memory and writes them to disk, and the transaction log is marked to identify the changes that have been flushed to disk. This is known as a checkpoint.
When the server is restarted following a failure, anything in the transaction log past the last checkpoint represents changes that were sitting in memory when the system failed. SQL Server writes these changes to disk as part of the recovery process performed during system startup.
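As an illustration, a checkpoint can also be forced manually (the database name is an example); under the simple recovery model this also truncates the inactive part of the log:
[code]USE MyNavDb;
GO
CHECKPOINT;
GO[/code]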