15k disks support more I/O per second, so they can read/write more data per second. Depending on how you plan to split your database, I'd probably leave the transaction logs or backups on the 10k disks and put the data on the 15k disks.
It also depends on how you plan to use the arrays - files in the same filegroup, or separate filegroups.
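For example, a minimal sketch of such a split at database-creation time (the database name, drive letters and sizes are placeholders, not your actual layout):

CREATE DATABASE NavDemo
ON PRIMARY
    ( NAME = NavDemo_Data,
      FILENAME = 'D:\SQLData\NavDemo_Data.mdf',   -- data on the 15k array
      SIZE = 10GB, FILEGROWTH = 1GB )
LOG ON
    ( NAME = NavDemo_Log,
      FILENAME = 'E:\SQLLog\NavDemo_Log.ldf',     -- log on the 10k array
      SIZE = 2GB, FILEGROWTH = 512MB );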
:-k
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
The OS disk is normally used very little and log writes are sequential, so here's how I think about it:
If I have to choose between:
a)
2 in RAID1 for system+temp
2 in RAID1 for DB
2 in RAID1 for log
b)
2 in RAID1 for system+temp+log
4 in RAID10 for DB
=> I prefer b). (I don't know about a heavy general-purpose SQL database, but for Navision this seems better, because Navision doesn't use tempdb a lot.) Most of the time I have seen the DB disks heavily at work while the system and log drives were very busy doing nothing.
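If you want to verify where the I/O actually goes on your own box, something like this against the file-stats DMV should do (SQL 2005+; on SQL 2000 the older fn_virtualfilestats gives similar numbers):

SELECT DB_NAME(vfs.database_id) AS db_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms  -- time spent waiting on the disk
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_write_ms DESC;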
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
Of course it all depends on "how much" and to what extent that interferes with daily processing. If you have a million-dollar machine with 16 disks for OS and log, and only two users, that system will fly. Generally, though, you want to keep the log separate.
RIS Plus, LLC
You should be aware that SQL Server makes greater use of tempdb in 2005 than in 2000. I see up to 30% of total transactions hitting tempdb on our system; mostly it's around 10%.
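If you want to check what your own system does, this (SQL 2005+) shows how tempdb space is being used at the moment; pages are 8 KB, hence the * 8:

SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM tempdb.sys.dm_db_file_space_usage;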
I'd venture a guess that if the log is striped across the disks, you are by definition limited to what the slowest disks can do, even if the 15K disks finish their part of each write before the 10K disks do.
RIS Plus, LLC
I have to agree 100% here.
Here's an interesting test you can easily try.
I have a NAV SQL system that I use often that has the LOG file on the OS disk: my laptop. I had a very large task to run on a 50 GB DB, and to try this out I moved the LOG file onto an external bus-powered 4200 rpm USB drive, just so it would be separate from the OS, and the performance difference was immediately noticeable. Even allowing for the time to move the file to the USB drive and back, it was worth it.
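In case anyone wants to repeat this: on SQL 2005 you can relocate the log without detaching, roughly like this (the database and logical file names are placeholders; check the logical name with sp_helpfile first):

ALTER DATABASE MyNavDb SET OFFLINE;

ALTER DATABASE MyNavDb
MODIFY FILE ( NAME = MyNavDb_Log,
              FILENAME = 'F:\SQLLog\MyNavDb_Log.ldf' );  -- new location

-- now move the physical .ldf file to F:\SQLLog\, then:
ALTER DATABASE MyNavDb SET ONLINE;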
Maybe one tip for your laptop ... you can change the recovery model to "simple" so it won't use a transaction log.
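That's a one-liner (MyNavDb is a placeholder):

ALTER DATABASE MyNavDb SET RECOVERY SIMPLE;
-- and back again, for a fair test:
ALTER DATABASE MyNavDb SET RECOVERY FULL;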
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
:?: But how would that then be a good test to compare putting the log file on the OS disk versus putting it on a separate drive?
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
Aha, gotcha.
Actually, setting the recovery model to SIMPLE is always the first thing I do on a local database. I only did this to test how noticeable the difference was.
The simple recovery model does not disable the transaction log. It is still used and has the same configuration/performance issues as the full recovery model. Before SQL 2000, the simple recovery model was called "truncate on checkpoint", which is a more accurate description of what it does. The transaction log is still written; the entries are simply removed once the changes they describe have been written to the database.
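You can watch this happen: under SIMPLE the log still fills up between checkpoints and then gets truncated. For example:

DBCC SQLPERF(LOGSPACE);            -- log size and % used per database

SELECT name, log_reuse_wait_desc   -- why log space isn't being reused (SQL 2005+)
FROM sys.databases;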
Thanks for that. When doing the test, I specifically set the model back to Full, since I was not sure.
No, I have my system, SQL, etc. on my C: drive. The DB was on an external 7.2k drive, and I moved the log onto a small 4.2k USB-powered drive. So three spindles in all.
I also want to redo the experiment putting the Log on an external FireWire drive, so the DB and Log are not sharing the same physical connection.
No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
:oops:
Thanks, bbrown. That's indeed a more accurate description.
But then ... if the TL is truncated at every checkpoint ... putting it on a separate disk (when the recovery model is "simple") isn't that interesting anymore, is it?
Of course it's still faster than putting it on the OS disk, but the main reason is gone, since the disk's head is moving around during the truncation anyway ...
Agree? :-k
Eric Wauters
MVP - Microsoft Dynamics NAV
My blog
Not really.
If there are other processes putting i/o demands on the disk, then the heads may not be in proper position during large write operations as other processes are constantly moving them. When a user issues an update to SQL, it is kept in memory and written to the transaction log before the user is released. The checkpoint process writes the data from memory to the data file(s) as a background process. This is why transaction log performance is an important factor in OLTP systems.
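That pressure shows up in the wait statistics, by the way. If sessions spend a lot of time in WRITELOG waits, the log disk is the bottleneck (SQL 2005+):

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';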
I'd still not mix spindle speeds in an array.
RIS Plus, LLC