Setup SQL 2008 Disks HELP!!!!

Steve Member Posts: 81
edited 2013-04-12 in SQL Performance
Not sure if anyone has an opinion on the best performance practice when setting up SQL 2008 servers to host NAV 2009 R2.

Set up Navision to write several independent data files within the Navision > Alter > Database Files dialog:

d:\data1
d:\data2
d:\data3
d:\data4

D: Drive (Navision SQL Data)
• xxx GB RAID 10 (4 x xxxGB, U320 SCSI, 15K RPM)

e:\log1
e:\log2
E: Drive (Navision SQL Log)
• xxx GB RAID 1 (2 x xxxGB, U320 SCSI, 15K RPM)

OR

Navision > Alter > Database Files, with one data file per drive:

D:\data1
E:\data2
F:\data3
G:\data4

D: Drive (Navision SQL Data)
• xxx GB RAID 10 (2 x xxxGB, 15K RPM)
E: Drive (Navision SQL Data)
• xxx GB RAID 10 (2 x xxxGB, 15K RPM)
F: Drive (Navision SQL Data)
• xxx GB RAID 10 (2 x xxxGB, 15K RPM)
G: Drive (Navision SQL Data)
• xxx GB RAID 10 (2 x xxxGB, 15K RPM)

Then split the log across two arrays as well:
H: Drive (Navision SQL Log)
• xxx GB RAID 1 (2 x xxxGB, 15K RPM)
I: Drive (Navision SQL Log)
• xxx GB RAID 1 (2 x xxxGB, 15K RPM)



Our goal is to maximize read / write speed as our Navision is incredibly slow.
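
For clarity, here is a rough T-SQL sketch of what option 1 would translate to on the SQL Server side. The database name, file names, and sizes are placeholders, not our actual values, and Navision normally manages these files through its own Alter Database dialog rather than via a script:

    -- Option 1 as SQL Server would see it: four data files in one filegroup on D:,
    -- two log files on E:. All names and sizes below are placeholders.
    CREATE DATABASE [NAV2009]
    ON PRIMARY
        (NAME = NAV2009_Data1, FILENAME = 'D:\data1\NAV2009_Data1.mdf', SIZE = 20GB, FILEGROWTH = 1GB),
        (NAME = NAV2009_Data2, FILENAME = 'D:\data2\NAV2009_Data2.ndf', SIZE = 20GB, FILEGROWTH = 1GB),
        (NAME = NAV2009_Data3, FILENAME = 'D:\data3\NAV2009_Data3.ndf', SIZE = 20GB, FILEGROWTH = 1GB),
        (NAME = NAV2009_Data4, FILENAME = 'D:\data4\NAV2009_Data4.ndf', SIZE = 20GB, FILEGROWTH = 1GB)
    LOG ON
        (NAME = NAV2009_Log1, FILENAME = 'E:\log1\NAV2009_Log1.ldf', SIZE = 10GB, FILEGROWTH = 1GB),
        (NAME = NAV2009_Log2, FILENAME = 'E:\log2\NAV2009_Log2.ldf', SIZE = 10GB, FILEGROWTH = 1GB);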

Any opinions, experience or recommendations would be appreciated.

Thanks
Steve

Comments

  • David_Singleton Member Posts: 5,479
    How big is the DB? How many users? How many transactions per day? By "slow", what do you mean - what exactly is slow? Is blocking the issue? Deadlocks? Network? Citrix/Remote Desktop? CPUs? RAM?

    Many more variables to address than just hard disks.
    David Singleton
  • David_Singleton Member Posts: 5,479
    Why do you want to split up the database?

    Don't put the log on multiple arrays; it only writes to one at a time, so it's a waste.
    David Singleton
  • bbrown Member Posts: 3,268
    If you really are running U320 SCSI disks, I'd question whether it makes sense to run a production NAV system on such an old server. It would probably cost more to retain service contracts than to replace it with a newer system using faster SAS technology.

    That point aside, I agree with David's comments. Lots more to look at before splitting data files.
    There are no bugs - only undocumented features.
  • Steve Member Posts: 81
    1. Sorry, the drives are H700 SCSI drives.
    2. Thanks, good point regarding the log files.
    3. DB is 80 GB, 60 users, 5k transactions (mixed: sales, purch, whse, etc.)
    4. No blocking or locking issues.
    5. Slow - read time on lists, posting orders... just general slowness.
    ----> I've seen hundreds of other Navision DBs, and just by my own perception this one runs slower than others that handle more volume.


    The idea was, since we're purchasing new hardware, to buy well above what we feel we need today, as our business is growing 30%-50% yearly.

    We had a SQL consultant (specializing in NAV) on site; however, his recommendation was to do nothing. It "appeared to run fine based on his experience", but he couldn't explain why a simple list with no code behind the form took a very long time to load.

    Tks
    Steve
  • kapamarou Member Posts: 1,152
    Simple lists being too slow is strange. I've seen this happen for various reasons: too many FlowFields, wrong indexes, tables not optimized, bugs in the client executables. You need to start investigating from the bottom up. Are you optimizing your tables regularly?
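
    (If by "optimizing" we mean keeping indexes and statistics fresh, here is a minimal T-SQL sketch - the database and table names are placeholders, since NAV generates its own object names:)

        -- Placeholder names; in practice you would target only fragmented indexes,
        -- e.g. based on sys.dm_db_index_physical_stats.
        USE [NAV2009];
        ALTER INDEX ALL ON [dbo].[CRONUS$Item Ledger Entry] REBUILD;
        EXEC sp_updatestats;   -- refresh statistics for the whole database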
  • stryk Member Posts: 645
    Well, splitting up the data into multiple files of the same filegroup is not necessarily feasible or beneficial. SQL Server does not perform real load balancing; the files are filled according to a "round robin" principle, where SQL Server will keep each object in one file.

    E.g. the Clustered Index of "Item Ledger Entry" will always be in the same file; hence, if you have pressure on ILE you will have pressure on that file.

    The idea of such a split is to balance the physical I/O; with "round robin" the benefit/impact is hardly predictable. In most NAV scenarios it is feasible to leave the db in one data file (Data Filegroup 1) and do the I/O management via sufficiently sized RAID volumes (increasing the number of stripes), i.e. physical load balancing instead of logical balancing.

    On large databases with large tables it could indeed be smart to segregate certain affected tables; then it is necessary to create separate filegroups and place their files on separate disk volumes. There are plenty of possible strategies here, e.g. separating non-clustered indexes, hot tables, ledger entries, archive data, etc. - it depends and must be discussed thoroughly.
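
    A rough, purely illustrative sketch of what such a segregation could look like - filegroup, file, table, and index names below are placeholders (NAV generates its own index names), and the sizes are examples only:

        -- Add a dedicated filegroup on its own volume (placeholder names/paths)
        ALTER DATABASE [NAV2009] ADD FILEGROUP [HotTables];
        ALTER DATABASE [NAV2009]
            ADD FILE (NAME = NAV2009_Hot1, FILENAME = 'F:\data\NAV2009_Hot1.ndf', SIZE = 10GB)
            TO FILEGROUP [HotTables];

        -- Move a hot table by rebuilding its existing clustered index onto the new filegroup
        CREATE UNIQUE CLUSTERED INDEX [CRONUS$Item Ledger Entry$0]
            ON [dbo].[CRONUS$Item Ledger Entry] ([Entry No_])
            WITH (DROP_EXISTING = ON)
            ON [HotTables];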

    IMHO I would not split an 80 GB database; one RAID 10 of maybe 6 to 8 spindles for the data file and one RAID 10 of 4 spindles for the log should be sufficient.

    Generally, when sizing a SQL Server, you should have the transaction volume in mind, i.e. the amount of data moved around every day. In an 80 GB database surely not the whole 80 GB is moved around every time. Most important is the amount of RAM - the more SQL Server can cache, the less pressure on the disks.
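
    (As a side note, a quick sketch for checking/capping the instance's memory; the 28672 MB value is only an example for a 32 GB box, not a recommendation:)

        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 28672;  -- leave headroom for the OS
        RECONFIGURE;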

    After all, how do you know it is the disk-subsystem causing trouble? What about expensive queries, indexing, C/AL programming etc.?
    There could be a gazillion other reasons for degraded performance ...
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • bbrown Member Posts: 3,268
    stryk wrote:
    Well, splitting up the data into multiple files of the same filegroup is not necessarily feasible or beneficial. SQL Server does not perform real load balancing; the files are filled according to a "round robin" principle, where SQL Server will keep each object in one file....

    I'm going to disagree with you on this, based on experience. This question came up once before on this forum, so I decided to do a little test on a system I have access to. My post on that should be out there somewhere.

    Anyways, this particular system has 3 NDF files across separate arrays. The test I conducted was to delete all the records from a single large table and then check the change in "space used" across all 3 data files. If the object was stored in a single file, then that one file should show all of the change. This is not what I found. The decrease in used space was fairly even across all 3 files. I would think that would indicate that SQL does truly stripe the data across all files in the filegroup.
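
    For anyone who wants to repeat the check, the per-file numbers can be read with standard catalog views (run in the NAV database); this is a sketch, but the view and FILEPROPERTY call are standard SQL 2008:

        -- Allocated vs. used space per file of the current database
        SELECT name,
               physical_name,
               size / 128                            AS size_mb,
               FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
        FROM sys.database_files;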
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    viewtopic.php?f=33&t=45080&hilit=+filegroup

    You'll find the post on my original experiment here.
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    BTW - I still agree that splitting the files would be way, way, way down on the list of tuning steps.
    There are no bugs - only undocumented features.
  • stryk Member Posts: 645
    bbrown wrote:
    stryk wrote:
    Well, splitting up the data into multiple files of the same filegroup is not necessarily feasible or beneficial. SQL Server does not perform real load balancing; the files are filled according to a "round robin" principle, where SQL Server will keep each object in one file....

    I'm going to disagree with you on this, based on experience. This question came up once before on this forum, so I decided to do a little test on a system I have access to. My post on that should be out there somewhere.

    Anyways, this particular system has 3 NDF files across separate arrays. The test I conducted was to delete all the records from a single large table and then check the change in "space used" across all 3 data files. If the object was stored in a single file, then that one file should show all of the change. This is not what I found. The decrease in used space was fairly even across all 3 files. I would think that would indicate that SQL does truly stripe the data across all files in the filegroup.

    You are actually right; I've seen that, too. But I've read somewhere - and I do not remember where :oops: - that this is supposedly exceptional behavior; by default SQL Server should try (so I read/was told?) to keep an index object together in one file.
    Or maybe I'm mixing something up here ...?

    Anyway; I'd rather create dedicated filegroups for specific tables/objects; but then again, this would be the last line of defence.

    Cheers,
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • davmac1 Member Posts: 1,283
    I have seen poorly defined indexes, SQL keys not matching NAV keys, calculated fields, terminal server, etc. affecting performance.
    Since you have a list form that is really slow, that is the simplest to figure out.
    Define a new list form, add all the non-FlowFields, and see if that runs quickly.
    If it does, start adding flowfields until it slows down, then look at the culprit(s).

    Also, make sure there really isn't a calculated field in the list. That does take code, which you say you don't have on the form. Calculated fields are not bad in themselves - it is just something else to look at.

    32-bit SQL Server can make a big difference in performance as well, and it takes special settings to use extended RAM - with limitations.
    So does the amount of RAM. How much RAM is on your SQL Server, and is it the only server app on the machine?
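
    (A hedged sketch of those "special settings", assuming AWE is what is meant here for 32-bit SQL Server 2008; the service account also needs the "Lock pages in memory" Windows right:)

        -- 32-bit SQL Server 2008 only; not applicable on 64-bit installations
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;
        RECONFIGURE;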
  • stryk Member Posts: 645
    Steve,

    to investigate the problems more thoroughly, maybe you would like to check out these:

    Index related problems: http://www.mibuso.com/dlinfo.asp?FileID=1398
    Block related problems: http://www.mibuso.com/dlinfo.asp?FileID=1488
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool