
1 x RAID1 RPM10000 + 2 x RAID1 RPM15000:SQL

colin_leversuch-roberts Member Posts: 113
edited 2007-08-08 in SQL Performance
15k disks support more I/Os per second, so they can read/write more data per second. Depending on how you plan to split your database, I'd probably leave the transaction logs or backups on the 10k disks and put the data on the 15k disks.
It also depends on how you plan to use the arrays - files in the same filegroup, or separate filegroups.
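A quick back-of-envelope check of that claim: a single random I/O costs roughly an average seek plus half a rotation, so spindle speed caps the random IOPS a disk can deliver. A minimal sketch (the seek times are illustrative assumptions, not specs of any particular drive):

```python
def random_iops(rpm, avg_seek_ms):
    """Rough per-spindle random IOPS: one I/O = avg seek + half a rotation."""
    half_rotation_ms = 0.5 * 60_000 / rpm  # 60,000 ms per minute
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(round(random_iops(10_000, 5.0)))  # 125
print(round(random_iops(15_000, 4.0)))  # 167
```

Sequential throughput gains less from spindle speed than random I/O does, which is one more reason to give the (sequential) log the slower spindles and the (random) data the faster ones.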

Comments

  • Waldo Member Posts: 3,412
    Personally, I have my doubts that putting the log and OS on one disk is a good idea ... .

    :-k

    Eric Wauters
    MVP - Microsoft Dynamics NAV
    My blog
  • kriki Member, Moderator Posts: 9,089
    Waldo wrote:
    Personally, I have my doubts that putting the log and OS on one disk is a good idea ... .

    :-k
    Actually, it isn't. But it is worse to put it together with the DB!
    The OS disk is normally used little and the log is sequential, so I think this:
    If I have to choose between:
    a)
    2 in RAID1 for system+temp
    2 in RAID1 for DB
    2 in RAID1 for log

    b)
    2 in RAID1 for system+temp+log
    4 in RAID10 for DB

    => I prefer b) (I don't know in the case of a REAL SQL DB, but for Navision this seems better, because Navision doesn't use the temp a lot). And most of the time I have seen the DB disks heavily at work and the system and log drives very busy doing nothing.
    Regards,Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • Waldo Member Posts: 3,412
    We should be able to benchmark this :wink: to know for sure.

  • DenSter Member Posts: 8,304
    No, you really don't want the log on the OS disk, because the OS and programs and all the other crap cause disk activity. The reason for dedicating a disk (array) to the log is so that the disk activity is for writing to the log only. The sequential write operations mean the disk heads don't have to move back and forth to find a spot to write the next bit; by introducing disk activity from other processes, you make the disk heads move around all the time.

    Of course it all depends on "how much" and to what extent that interferes with daily processing. If you have a million dollar machine with 16 disks for OS and log, with only two users, that system will fly. Generally though you want to keep the log separate.
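The head-movement argument can be sketched with a toy model (track numbers and the interleaving pattern below are invented purely for illustration):

```python
# Compare total head travel for a dedicated log disk vs. a log disk
# that also serves OS/application I/O elsewhere on the platter.

def head_travel(writes):
    travel, pos = 0, 0
    for target in writes:
        travel += abs(target - pos)
        pos = target
    return travel

log_only = list(range(100))                # sequential log appends
mixed = []
for t in range(100):
    mixed.append(t)                        # the same log append...
    mixed.append(5000 + (t * 37) % 1000)   # ...interleaved with other I/O

print(head_travel(log_only))  # 99 - the head barely moves
print(head_travel(mixed))     # orders of magnitude more travel
```

The absolute numbers are meaningless; the point is that every foreign I/O turns a cheap sequential append into two long seeks.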
  • Agreed, keeping the transaction log on separate, dedicated spindles is the best way to go. However, and I agree again, it's all about throughput and usage. I've often run test systems across one large RAID 5 (sadly), 12 or 14 disks in one array and then carved into logical drives - you'd be quite surprised at the available performance!
    You should be aware that SQL Server makes greater use of tempdb in 2005 than in 2000; I see up to 30% of total transactions in tempdb on our system - mostly it's around 10%.
  • DenSter Member Posts: 8,304
    So Colin... given some 10K drives and some 15K drives in the same disk array for, say, the logs: do you have any information on whether that array will perform in total as well as its weakest link (i.e. all at 10K performance levels), or do all disks perform at their individual performance level?

    I'd venture a guess that if the log is striped across the disks, by definition you are limited by what the slowest disks can do, even if the 15K bits were done before the 10K bits were processed.
  • David_Singleton Member Posts: 5,479
    DenSter wrote:
    No, you really don't want the log on the OS disk, ...

    I have to agree 100% here.

    An interesting test you can easily try.

    I have a NAV SQL system that I use often that has the LOG file on the OS disk ... my laptop :D I had a very large task to run on a 50GB DB and, to try this out, I moved the LOG file onto an external bus-powered 4,000rpm USB drive, just so it would be separate from the OS. The performance difference was immediately noticeable. Even allowing for the time to move the file to the USB drive and back, it was worth it.
    David Singleton
  • Waldo Member Posts: 3,412
    That's a good test.

    Maybe one tip for your laptop ... you can change the recovery model to "simple" so it won't use a TL :wink:.

  • David_Singleton Member Posts: 5,479
    Waldo wrote:
    That's a good test.

    Maybe one tip for your laptop ... you can change the recovery model to "simple" so it won't use a TL :wink:.

    :?: But how would that then be a good test of comparing the log file on the OS disk with the log file on a separate drive? :mrgreen:
  • Waldo Member Posts: 3,412
    Yes, but I suppose you don't want to "test" all the time :wink:

  • David_Singleton Member Posts: 5,479
    Waldo wrote:
    Yes, but I suppose you don't want to "test" all the time :wink:

    Aha, gotcha. :wink:

    Actually, setting the recovery model to SIMPLE is always the first thing I do on a local database. I only did this to test how noticeable the difference was.
  • I've never mixed spindle speeds in an array; I've mixed complete arrays of different spindle speeds. If you mix different SCSI disks on the same bus they auto-configure to the slowest speed - but that's not quite the same, and Fibre Channel does the same. I'd think mixing spindle speeds in the same array is probably a bad idea. I have some test arrays at home, but they all have 10k disks, so I can't try it - and if I did, I'm not sure how I'd test or what I'd look for. I just think it might be a bad idea to mix them in the same array.
  • bbrown Member Posts: 3,268
    Waldo wrote:
    That's a good test.

    Maybe one tip for your laptop ... you can change the recovery model to "simple" so it won't use a TL :wink:.

    The simple recovery model does not disable the transaction log. It is still used and has the same configuration/performance issues as the full recovery model. Before SQL 2000, the simple recovery model was called "truncate on checkpoint", which is a more accurate description of what it does. The transaction log is used, but the entries are removed once they are written to the database.
    There are no bugs - only undocumented features.
  • David_Singleton Member Posts: 5,479
    bbrown wrote:
    The simple recovery model does not disable the transaction log. It is still used and has the same configuration/performance issues as the full recovery model. Before SQL 2000, the simple recovery model was called "truncate on checkpoint", which is a more accurate description of what it does. The transaction log is used, but the entries are removed once they are written to the database.

    Thanks for that. When doing the test, I specifically set the model back to Full, since I was not sure.
  • David_Singleton Member Posts: 5,479
    kriki wrote:
    I have a NAV SQL system that I use often, that has the LOG file on the OS disk ... My Laptop :D I had very large task to run on a 50G DB, and to try this out, I moved the LOG file onto an external bus powered 4000rpm USB drive, just so it would be separate from the OS, and the performance difference was immediately noticeable. Even allowing for the time to move the file to the USB drive and back it was worth it.
    Ok, but here (I suppose) you had the system, the DB and the log on the same physical drive. I was talking about ONLY the system and the log on one drive and the DB on another.

    No, I have my system, SQL etc. on my C drive. The DB I had on an external 7.2k drive, and the log I moved onto a small 4.2k USB-powered drive. So three spindles in all.

    I also want to redo the experiment putting the Log on an external FireWire drive, so the DB and Log are not sharing the same physical connection.
  • kriki Member, Moderator Posts: 9,089
    Split the Topic into 2 topics: 1 for SQL (this topic) and 1 for native : 1 x RAID1 RPM10000 + 2 x RAID1 RPM15000:Native
  • Waldo Member Posts: 3,412
    bbrown wrote:
    Waldo wrote:
    That's a good test.

    Maybe one tip for your laptop ... you can change the recovery model to "simple" so it won't use a TL :wink:.

    The simple recovery model does not disable the transaction log. It is still used and has the same configuration/performance issues as the full recovery model. Before SQL 2000, the simple recovery model was called "truncate on checkpoint", which is a more accurate description of what it does. The transaction log is used, but the entries are removed once they are written to the database.

    :oops:
    Thanks, bbrown. That's indeed a more accurate description.

    But then ... when the TL is truncated at checkpoint ... putting it on a separate disk (when the recovery model is "simple") isn't that interesting anymore, is it?
    Of course, it's still faster than putting it on the OS disk, but the main reason is gone, since the disk's head is moving around while truncating... .

    Agree? :-k

  • bbrown Member Posts: 3,268
    Waldo wrote:
    bbrown wrote:
    Waldo wrote:
    That's a good test.

    Maybe one tip for your laptop ... you can change the recovery model to "simple" so it won't use a TL :wink:.

    The simple recovery model does not disable the transaction log. It is still used and has the same configuration/performance issues as the full recovery model. Before SQL 2000, the simple recovery model was called "truncate on checkpoint", which is a more accurate description of what it does. The transaction log is used, but the entries are removed once they are written to the database.

    :oops:
    Thanks, bbrown. That's indeed a more accurate description.

    But then ... when the TL is truncated at checkpoint ... putting it on a separate disk (when the recovery model is "simple") isn't that interesting anymore, is it?
    Of course, it's still faster than putting it on the OS disk, but the main reason is gone, since the disk's head is moving around while truncating... .

    Agree? :-k

    Not really.

    If there are other processes putting I/O demands on the disk, the heads may not be in the proper position during large write operations, because other processes are constantly moving them. When a user issues an update to SQL, it is kept in memory and written to the transaction log before the user is released; the checkpoint process then writes the data from memory to the data file(s) in the background. This is why transaction log performance is such an important factor in OLTP systems.
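The write-ahead behaviour bbrown describes can be sketched in a few lines (class and method names are invented for illustration; this is a toy model, not SQL Server internals):

```python
class TinyWAL:
    """Toy write-ahead log: commits wait for the log, not the data file."""
    def __init__(self):
        self.log = []        # durable log, written at commit time
        self.buffer = {}     # dirty pages in memory
        self.data_file = {}  # data pages on disk

    def update(self, key, value):
        self.buffer[key] = value
        self.log.append((key, value))  # the user only waits for this write

    def checkpoint(self):
        # Background process: flush dirty pages to the data file, then
        # truncate the log (what SIMPLE / "truncate on checkpoint" does).
        self.data_file.update(self.buffer)
        self.buffer.clear()
        self.log.clear()

db = TinyWAL()
db.update("Customer 10000", "Balance 500")
print(db.data_file)  # {} - not in the data file yet, but the commit is durable
db.checkpoint()
print(db.data_file)  # {'Customer 10000': 'Balance 500'}
```

This is why log latency, not data-file latency, gates every committed transaction in an OLTP workload.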
  • Waldo Member Posts: 3,412
    Thanks, man!

  • You can't really compare a laptop and a server - they're not the same. The main reason for keeping the transaction log apart is that it generally does only sequential writes, whereas the data drive may do a combination of random and sequential reads/writes. However, in a server the controller cards (for a SCSI system) will likely have cache and be certified against data loss; your actual data will probably also be in cache, negating physical I/O on the data drive. Best practice says keep them apart, but sometimes you don't get that because of purchasing decisions.
    I'd still not mix spindle speeds in an array.
  • DenSter Member Posts: 8,304
    Waldo wrote:
    That's a good test.

    Maybe one tip for your laptop ... you can change the recovery model to "simple" so it won't use a TL :wink:.
    No, that's not true. The simple recovery model DOES use the TL; it just doesn't SAVE the transaction log. SQL Server cannot function without the transaction log - in fact it relies on it: it doesn't even commit any data to the database until it gets a green light from the transaction log, regardless of the recovery model.
  • Waldo Member Posts: 3,412
    Thanks for repeating that, DenSter :|
