Log File Shrink ramifications

jrpucla Member Posts: 5
edited 2008-09-24 in SQL General
We run NAV 4.0 on SQL 2005. The database is 48GB. The backup is written to a separate disk from the data. The backup has not completed for a few weeks because there is not enough room on the backup disk; the log file, which sits on the backup disk, is 11GB. The backup runs to 90% complete and then errors out. I also run a separate NAV backup to tape as a fallback, so I do have a backup to use in a worst-case scenario.

The recovery mode is simple.

My main question is what would happen if I go into the db Shrink function and shrink the log file? I would use the reorganize pages mode and select 1GB to shrink it to. I would do this at night with no users connected.

I did this on an old db and set the shrink size to 100MB. The finished size was 600MB; the original size of the log file was 2.5GB. So SQL will only shrink the log file as far as it can, regardless of what I put in the "shrink to" field. Is that correct?

Thanks in advance.

Comments

  • DenSter Member Posts: 8,307
    Shrinking your log file is only going to cause fragmentation. It's not the file size that matters, it's how much of it is in use. If the log file has a stable size and never expands, you should just leave it alone; it is not hurting anything. If it's still growing, then you need to truncate the log, not shrink it. Truncating the log removes all inactive transactions from the log file and makes room for new transactions.

    You could shrink the log file down to a more manageable size (maybe 11GB is a bit too much), but be careful to leave enough room to hold all transactions between backups, so that the file never starts growing again, fragmenting along the way.
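
    To see how much of the log file is actually in use before deciding on a size, you can run the standard DBCC command below (available on SQL 2005); if the 11GB file is only a few percent used, a smaller stable size is probably safe.

    [code]
    -- Reports each database's log size (MB) and the percentage currently in use
    DBCC SQLPERF(LOGSPACE);
    [/code]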

    By the way, if your backup fails due to not having enough drive space, and you're asking about shrinking the log file, you must be saving the backup file on the same drive as the log file. If that is the case, and the log drive fails, you also lose the backup file itself and you won't be able to recreate the database anyway.
  • kine Member Posts: 12,562
    Yes, you can shrink the log file to "0" if you want; it will grow again as needed after that, because you are using the simple recovery model...

    Of course, the best solution is to have the transaction log on separate disks and another set of disks for backups...
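
    If you do shrink it, it can help to immediately set a sensible size and a fixed growth increment so the file does not regrow in many small steps. A sketch, where YourDb and the logical file name YourDb_log are placeholders for your own database:

    [code]
    ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_log, SIZE = 2GB, FILEGROWTH = 512MB);
    [/code]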
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
  • DenSter Member Posts: 8,307
    And then you shrink it again, and then it grows again automatically, and then you shrink it again and then it grows again automatically. In the end you will have a severely fragmented log file, which will have terrible performance.
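
    That fragmentation shows up as a growing number of virtual log files (VLFs) inside the physical log. A quick way to count them is the DBCC command below (undocumented but widely used; YourDb is a placeholder):

    [code]
    USE YourDb;
    DBCC LOGINFO;  -- returns one row per virtual log file; hundreds of rows indicate a fragmented log
    [/code]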

    The best thing you can do is leave the log file alone.
  • garak Member Posts: 3,263
    edited 2008-09-24
    You can shrink it (after a backup), but why? :?: After shrinking it will just grow again..... But your problem right now is that you can't make a backup because you don't have enough space on the disk. So, temporarily, it's OK to shrink the file so that you have enough space on the disk again.

    You use the simple recovery model. Mhm, OK, it's the easiest model: SQL Server maintains only a minimal amount of information in the transaction log and truncates it each time the database reaches a checkpoint, leaving no log entries for disaster recovery purposes. But this means it is not possible to restore such a database to a given point in time; you can only restore it to the exact time a full backup was created. Therefore, you will automatically lose any data modifications made between the time of the full backup and the time of the failure (for example, between a 07:00 backup and a 17:00 crash).

    For this reason, if you have a high volume of transactions, we use the full recovery model. With this model, SQL Server preserves the transaction log until you back it up, which allows you to design a proper disaster recovery plan. The brilliant part is that it supports a combination of full and differential database backups in conjunction with transaction log backups. ;-)
    So the full recovery model gives you the most flexibility when restoring databases, and because data modifications are stored in the transaction log, it allows you to restore a database to a specific point in time. It all depends on your backup cycle (for example, a transaction log backup every 10 minutes and a full backup once or twice per day).
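
    As a sketch, switching a database to the full recovery model and starting a log backup chain looks like this (database name and backup paths are placeholders):

    [code]
    ALTER DATABASE YourDb SET RECOVERY FULL;

    -- A full backup starts the log chain...
    BACKUP DATABASE YourDb TO DISK = 'D:\Backup\YourDb_full.bak';

    -- ...then regular transaction log backups (e.g. every 10 minutes) enable point-in-time restore
    BACKUP LOG YourDb TO DISK = 'D:\Backup\YourDb_log.trn';
    [/code]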

    To truncate the log you can use the following:

    [code]BACKUP LOG YourDataBaseName WITH TRUNCATE_ONLY;[/code]

    The difference between truncate and shrink is that log truncation removes inactive transactions from the log file, but it doesn't actually reduce the amount of space reserved for the file.

    Regards
    Do you make it right, it works too!
  • kriki Member, Moderator Posts: 9,115
    [Topic moved from 'NAV/Navision' forum to 'SQL General' forum]
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • imurphy Member Posts: 308
    If the recovery mode is simple then it's not generating transaction logs. This db was previously set to another mode, which let the log grow; someone then changed it, and the ldf file has remained the size it was. Simple means "if the machine dies, we'll use yesterday's complete backup": no applying transaction logs, nothing clever, just restore the full backup.

    All the points the guys made above are valid; if you need transaction logging, you should get SQL to generate the logs on separate disks and, if it's really important, replicate them to an external box during the day.

    The commands below are what I use to truncate a log in a case like this; there is no problem doing this on a live system in simple recovery mode. I've reduced the ldf file on live dbs dozens of times without issue, though in every case the db was in (or switched to) simple recovery mode.

    Paste this into the SQL 2000 Query Analyzer window and press Ctrl-Shift-M to fill in the template parameter; type the db name (if you are not sure of the log file's logical name, run 'exec sp_helpfile'):
    [code]
    USE <DbName, varchar(20),>
    EXEC sp_helpfile

    BACKUP LOG <DbName, varchar(20),> WITH TRUNCATE_ONLY
    DBCC SHRINKFILE ( <DbName, varchar(20),>_log )
    BACKUP LOG <DbName, varchar(20),> WITH TRUNCATE_ONLY
    DBCC SHRINKFILE ( <DbName, varchar(20),>_log )
    EXEC sp_helpfile
    [/code]
  • bbrown Member Posts: 3,268
    Databases that are in SIMPLE Recovery mode still generate transaction logs. Simple recovery mode simply auto-truncates. This mode was called "Truncate on CheckPoint" in earlier versions. A much more descriptive name for what it does.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,307
    garak wrote:
    it's ok that you shrink the file.
    No it is NOT ok to shrink the log file. It might not be catastrophic if you only do it once, but it is most certainly NOT good practice. This will cause fragmentation of the log file, because it will automatically grow, and then you shrink it again, and then it grows. Fragmented log files are performance killers. Keep your log file at a stable size and leave it alone.

    What you need to understand is that SQL Server does absolutely NOTHING without a transaction log. Everything it does depends on the performance of your transaction log. Regardless of the recovery model, it DOES write ALL transactions to the log file; it just sets a checkpoint after each transaction. If you were to read a log file from a database with the full recovery model and one with the simple recovery model, they would be almost identical, only in the simple one you'd see truncate points after each transaction.

    Look, you MUST have a well-performing transaction log, no matter what your recovery model. Even with the simple recovery model you STILL need a fast drive to support it, because SQL Server will not move on to the next step in a transaction until the transaction log says it's OK. Many people make the mistake of thinking that because it's the simple recovery model they can just dump the log file someplace unimportant, but for every recovery model it is vitally important that you take good care of the transaction log.
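
    Moving the log onto its own fast drive can be done with ALTER DATABASE; on SQL 2005 the new path takes effect after you take the database offline, move the .ldf file, and bring it back online (drive letter and path here are placeholders):

    [code]
    ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_log, FILENAME = 'L:\SQLLogs\YourDb_log.ldf');
    -- Then: take the database offline, move the .ldf to the new location, bring it online
    [/code]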
  • bbrown Member Posts: 3,268
    How SQL Server writes changes to the database

    When a user makes a change to the database (insert, modify, delete, etc.), the change is immediately written to the transaction log and also held in memory. Once the change is written to the transaction log, the user’s connection is released and they may continue with the next task. SQL then takes the changes held in memory and writes them to the disk. The transaction log is then marked to identify the changes that have been flushed to the disk. This is known as a checkpoint.

    When a server is restarted following a failure, anything in the transaction log past the last checkpoint will be the changes that were sitting in memory when the system failed. SQL Server will write these changes to the disk as part of the recovery process performed during system startup.
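
    You can trigger that flush manually as well; in the simple recovery model a checkpoint is also what makes log space reusable, which you can observe with the two commands below (YourDb is a placeholder, and the drop in usage assumes no long-running open transactions):

    [code]
    USE YourDb;
    CHECKPOINT;               -- flush dirty pages from memory to disk and mark the log
    DBCC SQLPERF(LOGSPACE);   -- 'Log Space Used (%)' should drop afterwards in simple mode
    [/code]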
    There are no bugs - only undocumented features.