
SQL Server Disk Array Configuration

tannaton Member Posts: 3
edited 2010-12-11 in SQL General
I guess this has probably been covered before (if so, please accept my apologies), but we are setting up a server which will be used to upgrade a NAV 3.70 database (which has already had a technical upgrade to SQL Server 2008) to NAV 2009, and which will then become the permanent home of that database. Hardware is as follows:

Server: Dell PowerEdge R900, 32 GB RAM, 4 x Intel Xeon E7330 2.4 GHz quad-core CPUs, Windows Server 2003 x64 Standard Edition, SQL Server 2008 Enterprise Edition (RTM, unpatched). Hyper-threading is disabled in the BIOS.

Disks: The server has 2 x 146 GB and 2 x 73 GB disks internally on a PERC 6/i internal RAID controller. There are 2 x Dell PowerVault MD1000 shelves, each with 15 x 146 GB disks and each on its own dual-channel PERC 6 controller. All disks are 15,000 RPM SAS. All virtual disks are set up with a 64 KB stripe, write-back cache enabled and adaptive read-ahead enabled. (The 2 x 73 GB internal disks are used for the operating system.)

Question: the upgrade is running slowly and the vendor is blaming the hardware... Hence we would be interested in any suggestions as to how to configure the available disks in this server for maximum performance for a NAV SQL database of approx. 200 GB in size. The server is dedicated and hosts no other processes or databases.

Many Thanks
Richard

Comments

  • David_Singleton Member Posts: 5,479
    That is definitely enough hardware; 30 spindles across 2 shelves is plenty. I have had clients running a terabyte database on less than what you have.

    What is the current configuration? And how much cache is on the controller?
    David Singleton
  • kriki Member, Moderator Posts: 9,090
    The server is OK. The problem is in the upgrade routines: they have been written to upgrade all tables and functionality in NAV (for example, they run code to upgrade Jobs even if you don't use that module).
    So what you need to do is check the upgrade routines and disable what you don't need, to avoid processing that is useless.

    Another trick: if you have multiple companies, you can run all companies at the same time, BUT you need to change the upgrade tables to DataPerCompany=Yes. It is possible you will need to change some code (very limited) to enable parallel running of the companies. This works because one NAV client is not able to fully occupy the server (meaning that if one NAV session is running at full possible speed, SQL Server is doing very little work!).

    Another trick (if you aren't doing it yet): run the upgrade routines directly on the server, connecting to it using the TCP net type and 127.0.0.1 as the TCP/IP address.


    BTW: how did you configure the different DB files on the disks?
    You need a dedicated disk set for the transaction log, one for the DB and, ideally, one for tempdb.
    I generally go for: 2 disks in RAID 1 (or 4 disks in RAID 10) for the .ldf file(s) of ONLY 1 DB! So if you have 2 databases, you need a RAID 1 for each DB.
    2 disks in RAID 1 for tempdb.
    The rest in RAID 10 for ALL the DBs (.mdf, .ndf).
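    A minimal T-SQL sketch of how that layout maps onto the actual files, assuming a database called NAV200 and made-up drive letters and logical file names (only tempdb's default tempdev/templog names are real):

        -- NOTE: database name, logical file names and drive letters below are hypothetical.
        -- Move tempdb onto its own RAID 1 (the change takes effect after a service restart).
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');

        -- Point the NAV data and log files at their dedicated disk sets
        -- (take the database offline and move the physical files before bringing it back online).
        ALTER DATABASE [NAV200] MODIFY FILE (NAME = NAV200_Data, FILENAME = 'E:\SQLData\NAV200_Data.mdf');
        ALTER DATABASE [NAV200] MODIFY FILE (NAME = NAV200_Log,  FILENAME = 'F:\SQLLog\NAV200_Log.ldf');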
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • tannaton Member Posts: 3
    Thank you both for your replies; to further answer some points:

    The RAID controllers are Dell PERC 6 with 512 MB cache; write-back cache is enabled (with battery backup).

    We did have the system configured as follows: on the internal disks of the server, 2 x 73 GB in RAID 1 for the OS and 2 x 146 GB in RAID 1 for tempdb. On the external disks, one shelf of 14 x 146 GB disks in RAID 10 (with a hot spare) for the databases, and the same configuration again on the remaining shelf for the log files.

    However, the reseller decided to change this to four volumes (achieved by splitting the backplane on each shelf into a 7+8 configuration and using two channels on each controller), each volume being 4 x 146 GB disks in RAID 10. The new volumes were set up, same as before, with a 64 KB offset and a 64 KB stripe. The database was split into three files, each placed on its own volume, with the remaining volume used for the log file.

    Although there seems to be some science to this, I can't see that the benefit would outweigh reducing the number of spindles in use from 28 to 16. Given that each channel on the controller is specified at 12 Gb/s, there seems to be little to be gained by going from using two channels to four.
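    For reference, a quick way to check how the data and log files have actually ended up across the volumes (this only uses the standard sys.master_files catalog view; nothing NAV-specific is assumed):

        SELECT DB_NAME(database_id) AS database_name,
               name                 AS logical_name,
               type_desc,
               physical_name,
               size * 8 / 1024      AS size_mb
        FROM sys.master_files
        ORDER BY database_id, file_id;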

    What I'm aiming to do is get to a configuration that is, on paper, optimal, so I can hand it back to the reseller and tell them to get on with it...

    Thanks again for taking the time to reply.

    Richard
  • David_Singleton Member Posts: 5,479
    tannaton wrote:
    We did have the system configured as follows: on the internal disks of the server, 2 x 73 GB in RAID 1 for the OS and 2 x 146 GB in RAID 1 for tempdb. On the external disks, one shelf of 14 x 146 GB disks in RAID 10 (with a hot spare) for the databases, and the same configuration again on the remaining shelf for the log files.

    I would configure 24 drives as one RAID 10, so that you have alignment with a multiple of 8. Break the DB into 100 GB files, both on this new RAID; then 4 drives in RAID 10 for the log file, with the log file broken into two files.
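    As a rough illustration only (the NAV200 database name, logical file names and paths are invented; the ALTER DATABASE syntax itself is standard T-SQL):

        -- NOTE: names and paths below are hypothetical.
        -- Second ~100 GB data file, on the same 24-drive RAID 10 as the .mdf.
        ALTER DATABASE [NAV200] ADD FILE
            (NAME = NAV200_Data2, FILENAME = 'E:\SQLData\NAV200_Data2.ndf', SIZE = 100GB);

        -- Second log file, on the 4-drive RAID 10 that holds the transaction log.
        ALTER DATABASE [NAV200] ADD LOG FILE
            (NAME = NAV200_Log2, FILENAME = 'F:\SQLLog\NAV200_Log2.ldf', SIZE = 10GB);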
    David Singleton
  • Slawek_Guzek Member Posts: 1,690
    log file broken into two files.
    May I know why?
    Slawek Guzek
    Dynamics NAV, MS SQL Server, Wherescape RED;
    PRINCE2 Practitioner - License GR657010572SG
    GDPR Certified Data Protection Officer - PECB License DPCDPO1025070-2018-03
  • David_Singleton Member Posts: 5,479
    log file broken into two files.
    May I know why?

    It's easier to manage, especially if you need to shrink it.
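    For example (a sketch assuming the second log file from above; DBCC SHRINKFILE takes the target size in MB):

        USE [NAV200];  -- hypothetical database name
        -- Shrink just the second log file back to about 1 GB after a heavy upgrade run.
        DBCC SHRINKFILE (NAV200_Log2, 1024);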
    David Singleton