Hi,
I need to restore a C/Side backup for a rather large db (210+GB in C/Side) into SQL (client switching to SQL)
As you may imagine it takes a while and takes a lot of space for the log.
I was thinking of creating the db, then changing the recovery model to simple, restoring the backup, and then turning the recovery model back to full.
The reasons to do that would be:
1. Takes less space since I would not get a huge log.
2. Would take less time since it does not have to maintain a full log.
Am I correct in my assumptions? And, considering that this will be the live client db once it is done, does anyone know whether there are any potential issues with playing with the recovery model that way?
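To be clear, the switching itself would just be something like this (the database name and backup path are only placeholders):

-- create the db, then drop to simple recovery before the C/Side restore
ALTER DATABASE MyNavDb SET RECOVERY SIMPLE;

-- ... restore the C/Side backup into SQL here ...

-- once it is done, go back to full recovery and take a full backup so the log chain starts cleanly
ALTER DATABASE MyNavDb SET RECOVERY FULL;
BACKUP DATABASE MyNavDb TO DISK = 'E:\Backups\MyNavDb_full.bak';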
Thanks.
Apathy is on the rise but nobody seems to care.
Comments
Basically that's the correct way to do it, but the log is still going to get very big. In my experience the log grows to about double the size of the database during a restore. Of course you can shrink it afterwards, though best is to put it on multiple drives for the restore and then, after the restore, remove the additional unrequired parts.
Running checkpoint in Management Studio or QA against that database will usually clear data from the log. Your other option is to create a backup log job that overwrites every time and runs every 2 or 3 minutes. It's possible that Microsoft will remove the simple recovery model for databases soon.
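If you go the job route, the step itself is only a couple of lines; something like this (name and path are placeholders), scheduled every few minutes with SQL Server Agent and pointed at the same file each run:

USE MyNavDb;

-- flush committed work so the log space can be reused
CHECKPOINT;

-- back the log up over the same file every run (WITH INIT overwrites it) so it never piles up on disk
BACKUP LOG MyNavDb TO DISK = 'E:\Backups\MyNavDb_log.trn' WITH INIT;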
What is worth noting here is that transaction log growths do not use instant file initialisation, and allowing your log to auto-grow during a build is probably the lengthiest part of the business. You may see really significant performance gains, i.e. a lot less time, if you size the t-log before you start - set it to 1 GB and set the growth to 500 GB.
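Pre-sizing is just an ALTER DATABASE against the log file; something along these lines, where the logical file name and the sizes are only illustrative:

USE MyNavDb;
EXEC sp_helpfile;   -- find the logical name of the log file first

-- grow the log once, up front, rather than letting it auto-grow in small chunks during the load
ALTER DATABASE MyNavDb
MODIFY FILE (NAME = MyNavDb_log, SIZE = 400GB, FILEGROWTH = 10GB);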
When you want to sort out your log file after the load, you may suffer from excessive VLFs, which can hurt performance. The best way to shrink your log is not through the GUI but with the command
dbcc shrinkfile(fileno) - use sp_helpfile to see your file number. Make sure you've run dbcc updateusage(0) and checkpoint against the database before you try to shrink the log.
Now size your log and set a sensible growth figure to avoid fragmentation.
Put your database back to full recovery and make a full backup.
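Strung together, the tidy-up described above looks roughly like this (logical names, sizes and the backup path are placeholders):

USE MyNavDb;

-- correct the space-usage figures and flush the log before shrinking
DBCC UPDATEUSAGE(0);
CHECKPOINT;

-- check the file numbers / logical names
EXEC sp_helpfile;

-- shrink the log by its file number (often 2, but confirm with sp_helpfile)
DBCC SHRINKFILE(2);

-- re-size the log and give it a sensible growth increment to avoid fragmentation and excessive VLFs
ALTER DATABASE MyNavDb
MODIFY FILE (NAME = MyNavDb_log, SIZE = 20GB, FILEGROWTH = 2GB);

-- back to full recovery, then a full backup to restart the backup chain
ALTER DATABASE MyNavDb SET RECOVERY FULL;
BACKUP DATABASE MyNavDb TO DISK = 'E:\Backups\MyNavDb_full.bak';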
Yes this one is huge, which is why I always set the log file to 2x the db size initially.
By the way, a question about that. In a development environment I always create a fairly small log file and then add a second log file, instead of expanding the first one. The reason is that it's then easier to shrink, by truncating and then telling SQL to drop the second log file.
If I create one huge log file I find it really complex to shrink it afterwards.
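For what it's worth, the add-then-drop pattern is roughly this (names, path and size are placeholders); the second file can only be removed once it no longer holds active log:

-- add a second log file just for the duration of the load
ALTER DATABASE MyNavDb
ADD LOG FILE (NAME = MyNavDb_log2, FILENAME = 'F:\Logs\MyNavDb_log2.ldf', SIZE = 200GB);

-- ... run the load ...

-- afterwards, shrink the extra file and drop it
-- (REMOVE FILE fails while the file still contains active log, so a checkpoint or log backup may be needed first)
USE MyNavDb;
CHECKPOINT;
DBCC SHRINKFILE(MyNavDb_log2);
ALTER DATABASE MyNavDb REMOVE FILE MyNavDb_log2;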
On our last try, on an admittedly not-the-best server, it took about 48 hours to restore the whole damn thing (5.00).
I am not so concerned with the space taken, though it would be nice if it were smaller.
I guess I forgot about the auto-expand of the log. I will probably make it 400GB to start with.
I guess my only option then is to disable all but the primary keys and restore it that way. (In case you are wondering why that would help: in C/Side all keys are enabled, but in my SQL version not all of them are, so I guess it is less work when you only enable the secondary ones afterwards.)
P.S. I usually use "BACKUP LOG DB_NAME WITH TRUNCATE_ONLY" for log management.
This dumps the contents of the log, and then I manually go into Management Studio and shrink it to whatever size I want.
There are times when it does not seem to work, but I haven't had as much of that in SQL 2005 as I had in SQL 2000 (the only way if it fails is to actually back up the log and then shrink).
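Spelled out, that is just the following (names are placeholders; TRUNCATE_ONLY exists in SQL 2000/2005 but was dropped in later versions, and it breaks the log backup chain, so a full backup should follow):

-- throw the log contents away without backing them up
BACKUP LOG MyNavDb WITH TRUNCATE_ONLY;

-- then shrink the log file down, same as doing it through Management Studio
USE MyNavDb;
DBCC SHRINKFILE(MyNavDb_log, 1024);   -- target size in MB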
That's the same issue I had: even though the log would report being virtually empty, it would not shrink. So you are saying that if the log won't shrink, doing a real backup allows it to shrink after all? Thanks, that's what I was missing.
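For the record, that fallback is just a real log backup to disk first, which frees the active part of the log so the shrink can go through (path and names are placeholders):

BACKUP LOG MyNavDb TO DISK = 'E:\Backups\MyNavDb_log.trn';

USE MyNavDb;
DBCC SHRINKFILE(MyNavDb_log, 1024);   -- now the shrink should succeed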