Multiple NAV SQL dbs on Production server setup

Doosan Member Posts: 3
edited 2010-02-22 in SQL Performance
We are a large international company and use multiple production Navision SQL instances (NAV 4, 5 and 2009). We have about 500 users evenly deployed across mainland Europe and the Americas. The usage time is during normal business hours locally. We have 12 instances ranging from 2GB to 100GB.

We are migrating to a new SAN and at the same time moving to SQL 2005 from 2000. We will have 11TB of raw disk space comprised of 22 15k HDDs. We plan on using RAID 10 for data and RAID 1 or RAID 10 for the log files.

We've read a number of blogs about SQL performance tuning with regards to the log files on individual disks, but the normal assumptions refer to a single instance. We are looking for guidance on the best way to configure the container(s) for multiple log files.

Does RAID 1 provide enough performance for log files?
Should all 12 log files be on the same RAID array?
Is there a benefit to splitting the data files across multiple RAID arrays?
What is the proper block size for NAV/SQL disks?

We've ordered the NAV/SQL Performance Field Guide by Jorg Stryk and hope that the guide plus any responses here will help.

Comments

  • navinb Member Posts: 51
    Hi,

    For a SAN, if the number of concurrent users is 151-250:
    the data files need 6-7 drives, mirrored @ 15k RPM, RAID10;
    the transaction log needs 2 HDDs, mirrored @ 15k RPM, RAID10.

    There should be separate disks for SQL database backups and transaction log backups. If multiple log files reside on the same disks, the read/write head will need to switch back and forth, potentially causing performance issues. NAV is not a heavy user of TempDB; the decision of whether or not to place TempDB on its own set of drives will depend on the volume and usage. The same goes for extra SQL data files and filegroups.

    -
    Navin
  • navinb Member Posts: 51
    Allocation size must be >= 8k
  • David_Singleton Member Posts: 5,479
    navinb wrote:
    NAV is not a heavy user of TempDB; the decision of whether or not to place TempDB on its own set of drives will depend on the volume and usage.

    As a global statement this is not quite true. It depends on a lot of things, especially which version of Navision, which version of SQL, and whether the C/AL code was optimized for that specific combination of versions. I am tuning a 400 GB database right now, and the biggest bottleneck we have is the TempDB. It really depends on so many different factors that it is ABSOLUTELY impossible to state ideal specifications without knowing the specifics of the implementation.
    David Singleton
  • kriki Member, Moderator Posts: 9,115
    -Why SQL2005 and not directly SQL2008? With that setup, I suppose you have SQL Server Enterprise Edition. SQL 2008 Enterprise has the advantage that backups can be compressed, which helps a lot with backup speed and storage needs. It is also possible to compress tables and indexes. The latter can be interesting when doing SQL tuning, but don't use it as a standard.

    -allocation size must be 64 KB, striping must be 64 KB, offset must be a multiple of 64 KB.
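    The 64 KB rule above can be checked with a little arithmetic. This is a minimal illustrative sketch (the sample offsets are assumptions, not values from the thread; on Windows the actual partition offset comes from tools like diskpart, not from this script):

```python
KB = 1024

def is_aligned(partition_offset_bytes: int, unit_kb: int = 64) -> bool:
    """True if the partition offset is a whole multiple of the stripe unit."""
    return partition_offset_bytes % (unit_kb * KB) == 0

# The classic Windows 2003 default offset of 63 sectors (31.5 KB) is misaligned,
# so every 64 KB I/O straddles two stripe units:
print(is_aligned(63 * 512))    # False
# A 1 MB offset (the later Windows default) is a multiple of 64 KB:
print(is_aligned(1024 * KB))   # True
```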

    -log files: ideally, each log would have its own RAID1 pair. But that would directly use 24 disks (considering you only have 22....).
    You should have at least 1 disk as a hot standby. Don't use RAID5 because it is slow at writing, and the possibility that when 1 disk dies another dies too is too great with current disk sizes (see http://blogs.zdnet.com/storage/?p=162 and there are also others who state the same problem).
    So 2 disks are hot spares (1 would be enough, but how would you use the other one? It could be useful as temporary storage that does not need to be mirrored).

    Leaving 20 disks for temp,DB,TL.
    Also 2 disks in RAID10 for tempDB.

    Leaving 18 disks for DB,TL.
    I would use 8 disks in RAID10 for all DB's (RAID 10's optimum performance is at 4 - 8 - 16 - 32 disks), leaving 5 RAID1 pairs to be divided between the different transaction logs. You can put some lesser-used DB's together, or DB's that are active at different moments of the day. Not all DB's are active during the same time of the day, so the 8 RAID10 disks should be able to handle all of them (I don't know if they can handle the size of the DB's. If I got it correct, each disk is 512 GB, giving you a total of 2 TB of disk space for the DB's).
    Your server would best have 2 quadcores and as much memory as possible.
    Transaction log files should be on even more dedicated disks than the DB files because they are at least 95% write access. DB-files are around 10%-20% write access.
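    The disk budget above boils down to simple arithmetic. A sketch, using the counts from this post (22 disks, 2 hot spares, 2 for tempdb, 8 for data; nothing here is a recommendation beyond what the post already says):

```python
# Disk budget from the post: 22 x 15k disks total.
TOTAL_DISKS = 22
hot_spare = 2       # 1 standby + 1 utility disk
tempdb = 2          # one mirrored pair for tempDB
data_raid10 = 8     # one RAID10 set shared by all databases

# Whatever is left becomes mirrored pairs for the transaction logs.
remaining = TOTAL_DISKS - hot_spare - tempdb - data_raid10
tlog_pairs = remaining // 2

print(remaining)    # 10 disks left
print(tlog_pairs)   # 5 pairs -> the 12 logs must share them in groups
```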
    Regards,Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • stryk Member Posts: 645
    kriki wrote:
    Also 2 disks in RAID10 for tempDB.
    Just a little error, I guess you mean RAID1 (2 x HDD), which should be sufficient ...

    @Doosan:
    You're mentioning "12 instances". Does this mean you are going to run 12 Databases on one SQL Server, or 12 SQL Server Instances (one for each NAV DB) on the same Server machine?
    Basically the size/configuration of the disk-subsystem depends on the transaction volume each instance produces. I guess there might be instances which need dedicated/exclusive disks - especially for the Transaction Log -; others (the small ones) maybe could share some storage.

    Generally, when it's about disks you need to "think spindles, not space", and IMHO just 22 disks is not really that much ...
    Kriki already pointed out how a config could look like, here another overview (assuming one medium instance):

    C:\ 2 x SAS 15k RAID1 OS, Page File, Programs, SQL Server
    D:\ 2 x SAS 15k RAID1 tempdb (multiple files, e.g. 4 to 6)
    E:\ 6-10 x SAS 15k RAID10 NAV Database (mdf, ndf)
    F:\ 2-4 x SAS 15k RAID1[0] NAV Database (ldf)
    [G:\ 2-? x SAS 15k RAID1[0] Local Backups]

    = 12 to 18 HDD + 1 Hot Spare (plus Backup Volume)

    Then you should have at least 8GB RAM (small instances) up to 32GB+ for the largest databases (again: this depends on the transaction volume). Regarding the CPU, you should calculate one physical CPU (e.g. one HexCore) per 100 processes.
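    The sizing rules of thumb above can be sketched as a small helper. The RAM thresholds per database size are invented for illustration; only the "one physical CPU per 100 processes" and "8 GB up to 32 GB+" figures come from the post:

```python
def suggest_resources(concurrent_processes: int, db_size_gb: int):
    """Return (physical_cpus, ram_gb) per the rough rules of thumb."""
    # One physical CPU per 100 processes, rounded up.
    cpus = max(1, -(-concurrent_processes // 100))
    # RAM tiers are assumed cut-offs, not from the thread.
    if db_size_gb < 10:
        ram_gb = 8
    elif db_size_gb < 100:
        ram_gb = 16
    else:
        ram_gb = 32
    return cpus, ram_gb

print(suggest_resources(250, 100))  # (3, 32)
print(suggest_resources(50, 5))     # (1, 8)
```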
    Further, you should definitely go for SQL Server 2008, ideally "Enterprise Edition". And of course 64bit architecture is mandatory.

    Depending on your company structure and your demands about how to deal with these multiple instances, maybe you should consider deploying a large ESX Server (or other) plus a huge/sufficient SAN and running the instances on Virtual Servers (like VMware vSphere etc.).

    I have a customer running 33 "instances", each a separate SQL Server installation, on 3 ESX Servers plus a huge SUN Storage. Of course, that's not cheap, but this provides lots of advantages regarding scalability and high-availability ... to be thoroughly discussed.
    Big SAN solutions (e.g. NetApp FAS 3x/6x & Co.) use "Fibre Channel" and work basically with NVRAM (several GB!) caching (nothing to do with simple HDD read/write caches). This means - simplified - that the (SQL) Servers are saving the data virtually to a RAM disk, which is maximally fast. The real physical storage happens asynchronously.
    Again: implementing real SAN solutions must be thoroughly discussed, and also these SANs have to be configured/set up properly - and any mistake will not be forgiven ...

    So last but not least, it's the old dilemma: technical requirements versus budget ](*,)

    Cheers,
    Jörg

    P.S.: Thanks for buying my little book. Hope it could help you!
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • kriki Member, Moderator Posts: 9,115
    stryk wrote:
    kriki wrote:
    Also 2 disks in RAID10 for tempDB.
    Just a little error, I guess you mean RAID1 (2 x HDD), which should be sufficient ...
    That typo escaped me. I meant RAID1.
    Regards,Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • Doosan Member Posts: 3
    Thanks for all the suggestions. Here is some clarification on our environment and the root of our questions. We currently operate on a 32-bit Windows Server 2003 Enterprise, SQL 2000, virtual environment using VMware on Windows hosts. The SQL server has 2-dual core CPUs and 10GB RAM. Due to hardware restrictions, our DBs and Logs are on the same RAID10 array.

    We are migrating to an ESX virtualized 64-bit Windows Server 2003 Enterprise, 2-quad cores, ~30GB RAM, SQL 2005 (because we already own it and we have older versions of NAV to support). The SAN is going to be a high end Hitachi. Our experience with bottlenecks has been in the disk I/O, hence the origin of the questions in this forum.

    We meant 12 individual DBs, not SQL instances. Is there a benefit to running separate instances? The question remains: if we don't have a choice and have to put more than one log file on a single array, should we configure multiple RAID1 arrays with 2-3 files each, or would it be faster to configure a RAID10 with more drives and put all the logs on that array? Keep in mind that of the 12 DBs only 4 are high-transaction. They are spread across 7 time zones, but do overlap 3 hours each day.

    How could we use NVRAM in the SAN to increase performance?
    How much would we need?
    Would that impact our decisions regarding RAID configuration?
  • stryk Member Posts: 645
    Well, if you're running all 12 DBs on one SQL Server instance, this means that if this instance is down, ALL your companies are down. Running multiple SQL Server instances will allow you to handle each one differently, e.g. when it comes to maintenance, upgrading etc. - as you are spread over multiple time zones, this might be an issue. Of course, this would be more costly, as each instance has to be licensed.

    Regarding the TLogs:
    All write-transactions have to be logged here BEFORE they are actually committed. Hence, it depends on the speed of the TLog volume - the faster it reacts, the faster SQL Server will be able to process. To gain maximum disk speed it is necessary to leave a TLog file completely alone on its drive, which means that the read/write head of the disk(s) is permanently in the position where the next write happens. As soon as anything else is stored on this drive, keeping the spindle(s) busy, SQL Server has to reposition to write to the TLog; and this simply takes time.
    So at least for the 4 "heavy volume" databases you should consider running dedicated TLog drives.
    But: the response time of a physical drive can be dramatically improved by using caches. Normally disk controllers provide some write cache; enabling it will increase performance. The downside is: if this cache is accidentally lost (power loss), your database is corrupted.
    NVRAM usually is a big cache (2GB and up) as part of the "Fibre Channel" heads. SANs using such a cache commit the I/O as soon as the data is in that cache; the actual physical storage happens asynchronously. Hence, the response time is maximally fast; it does NOT depend on the physical speed of spindles etc.
    The SQL Server is virtually writing to a RAM disk.
    Again: using caches requires sufficient infrastructure regarding independent power supplies etc.!
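    The write-ahead principle described above can be sketched in a few lines. This is purely illustrative (SQL Server's actual TLog format and recovery are far more complex); the point is only the ordering: the log is appended first, the data page second, so the log alone can rebuild the data after a crash:

```python
log = []    # stands in for the sequential .ldf file
pages = {}  # stands in for the .mdf data pages

def write(key, value):
    log.append((key, value))  # 1) harden the change in the log first ...
    pages[key] = value        # 2) ... only then touch the data page

def recover():
    """Replay the log in order to rebuild consistent data pages."""
    rebuilt = {}
    for key, value in log:
        rebuilt[key] = value  # later log records overwrite earlier ones
    return rebuilt

write("G/L Entry 1", 100)
write("G/L Entry 1", 250)
print(recover())  # {'G/L Entry 1': 250}
```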

    If the cache/NVRAM is big enough (hard to tell what IS enough), then the underlying RAID configuration is less important. Many SANs use RAID6 or RAID-DP (striping, double parity) or similar here.
    Using "parity" is only recommended IF using caches - e.g. a directly attached RAID5 volume would totally degrade write-performance ...

    But even big SANs with plenty of NVRAM and spindles MUST be configured properly, else read-performance could be affected. On a heavily used database you might need dedicated/segregated physical targets/aggregates. In several cases you might also need dedicated aggregates for the Transaction Log, hence getting close to a "standard-directly-attached-RAID-architecture".

    Finally, it really depends on your specific purposes, so you should determine per each instance:

    - estimated/current size
    - estimated/calculated growth
    - transaction volume
    - number of concurrent users

    Then you should classify them, as "small", "medium" and "large". Then elaborate platform specifications per class.
    Then you should see the amount of hardware resources (CPU, RAM, disks) you actually NEED.
    Then look into the hardware which suits your NEEDS - size properly.
    Or in short: the requirements should define the platform, NOT vice versa.
    It would be a critical mistake to just buy some hardware (even if you buy a lot) and THEN look at how to squeeze the system into it ...
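    The classification step above could be sketched like this. The size/user thresholds and the instance names are invented for illustration; only the small/medium/large scheme comes from the post:

```python
def classify(size_gb: int, concurrent_users: int) -> str:
    """Bucket an instance by size and load; cut-offs are assumed, not prescribed."""
    if size_gb < 10 and concurrent_users < 25:
        return "small"
    if size_gb < 100 and concurrent_users < 100:
        return "medium"
    return "large"

# Hypothetical instances: (name, size in GB, concurrent users)
instances = [("EU-Prod", 100, 80), ("US-Small", 2, 10), ("APAC", 40, 50)]
for name, size, users in instances:
    print(name, classify(size, users))
# EU-Prod large / US-Small small / APAC medium
```

Once each instance has a class, you elaborate one platform specification per class instead of twelve individual ones.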
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool