Virtual RAID

deondutoit Member Posts: 3
edited 2005-05-27 in Navision Attain
A client of ours is experiencing performance problems. I come from the hardware side of the industry and know very little about the DB architecture. I've read the threads on RAID configurations, and that RAID 1 (mirroring) is best, but we experienced the following problems.
We replaced the client's server, a Compaq 370 G2, with an HP DL360 G4 running Windows Server 2003. The client had an external disk enclosure with a RAID controller addressing 30 x 36 GB disks in a RAID 1 configuration, set up as 3 x 10 mirrored drives. We replaced this with the HP Enterprise Virtual Array (EVA) 3000 SAN disk subsystem. The client uses the Attain (native) database.

The HP EVA uses a new type of RAID called Virtual RAID: the configuration is striped over all disks in the subsystem, virtual disks (vdisks) are then configured on that volume, and each vdisk is presented/allocated to a server/OS. Each virtual disk can be configured as Virtual RAID (Vraid) 0, 1 or 5 - this is hardware RAID done by a controller, not software RAID at the OS level.

At first we had it in Vraid 5 and experienced slow performance. After reading the docs we reconfigured it as Vraid 1; what happened then was that the CPU on the server stuck at 100% utilisation and response dropped to a standstill - i.e. the recommended "best" RAID level is not working for us. We changed back to Vraid 5 and everything is working, but not very fast. (DB size is 150 GB, with an average of 200 concurrent users.)
Has anybody used an HP EVA disk subsystem (or any SAN-type solution) for their installation, or experienced similar problems?
Caching options were also tried out, but with no effect.
SAN disk is supposed to be very fast - 200 Gb/s - and writing data across all disks with Virtual RAID 1 is supposed to deliver high performance.
Any help or suggestions?

Comments

  • Urmas Member Posts: 76
    Just to make sure - you say that they are using the Attain DB. The native database should be limited to 132 GB or so. Do you really use the native (Attain) database there?
  • deondutoit Member Posts: 3
    Yes, it's definitely the native database. I know the client has split the database into three equal LUNs (presented vdisks, i.e. OS drive letters) of +/- 70-75 GB each.
  • Ian_piddignton Member Posts: 92
    OK, I haven't worked with Virtual RAID, so some of these may be dumb questions, but here goes.

    Are the virtual disks split over the real disks in a logical way - e.g. virtual disk 1 is on real disks 1-3, virtual disk 2 is on disks 4-6, etc. - or are they set up so that each virtual disk spans all the real disks, e.g. virtual disk 1 uses all real disks and virtual disk 2 also uses all real disks?

    Navision will be splitting its writes over the 3 database parts, which is why it is normally important that each part is on its own physical disk or array of disks.

    If you are getting into a setup where the Navision parts are on separate virtual disks, but those virtual disks all use the same physical disks, you may be running into problems.

    Hope that makes sense.

    Ian
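Ian's distinction between the two layouts can be sketched as a toy model. The disk numbering and vdisk names below are purely hypothetical, not the client's actual configuration:

```python
# Toy model of the two layouts described above (hypothetical disk
# numbers, not the client's actual configuration).

# Layout A: each virtual disk has its own dedicated physical disks.
dedicated = {
    "vdisk1": {1, 2, 3},
    "vdisk2": {4, 5, 6},
    "vdisk3": {7, 8, 9},
}

# Layout B: every virtual disk is striped over all physical disks,
# which is what the EVA's Virtual RAID does.
all_disks = set(range(1, 10))
shared = {"vdisk1": all_disks, "vdisk2": all_disks, "vdisk3": all_disks}

def spindle_overlap(layout):
    """Return the physical disks that serve more than one virtual disk."""
    seen, overlap = set(), set()
    for disks in layout.values():
        overlap |= seen & disks
        seen |= disks
    return overlap

print(spindle_overlap(dedicated))  # empty: the DB parts never compete for a disk
print(spindle_overlap(shared))     # every disk serves all three DB parts
```

With the shared layout, every write to any of the three Navision database parts lands on the same spindles, so the parts contend for the same disk heads instead of working in parallel.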
  • kine Member Posts: 12,562
    TIP: How big is the stripe size on the disks (8, 16, 32, 64 KB)? I assume the EVA is connected to the system through Fibre Channel - in that case best practice is a 64 KB stripe size, because very small packets on Fibre Channel carry a big overhead, which means a low transfer rate...
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
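kine's point about small packets carrying a big overhead can be illustrated with a rough back-of-the-envelope calculation. The frame payload, framing overhead, and per-I/O protocol cost below are simplified assumptions for illustration, not exact Fibre Channel protocol figures:

```python
# Rough sketch: fraction of a Fibre Channel link that carries real data
# for different I/O sizes. All numbers are simplified assumptions:
#   - one FC frame carries up to 2048 bytes of payload plus roughly
#     36 bytes of framing overhead (header, CRC, SOF/EOF)
#   - each SCSI I/O also costs two small protocol exchanges
#     (command and response frames), assumed ~72 bytes each on the wire

FRAME_PAYLOAD = 2048
FRAME_OVERHEAD = 36
PER_IO_PROTOCOL_BYTES = 2 * 72  # command + response frames (assumed)

def wire_efficiency(io_size_bytes):
    """Fraction of the link bandwidth that carries actual data."""
    frames = -(-io_size_bytes // FRAME_PAYLOAD)  # ceiling division
    wire_bytes = io_size_bytes + frames * FRAME_OVERHEAD + PER_IO_PROTOCOL_BYTES
    return io_size_bytes / wire_bytes

for kb in (4, 8, 16, 32, 64, 128):
    print(f"{kb:>3} KB I/O -> {wire_efficiency(kb * 1024):.1%} of wire is data")
```

The per-I/O protocol cost is fixed, so larger transfers amortise it better - which is the intuition behind preferring a 64 KB stripe over very small ones on a Fibre Channel link.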
  • jae3240 Member Posts: 13
    We are running into performance issues on the HP MSA1000 SAN very similar to the issues expressed above. I have checked the SAN config, and the stripe size is set to 128 KB. Is there a benefit to having this larger than what you recommended (64 KB)? Or could it be causing us problems?

    We also had the read/write cache set to 100% read and 0% write (as recommended by MBS). The performance was terrible. We then reset it to 50%/50%. What would be recommended here?

    Thanks,
    John
  • kine Member Posts: 12,562
    Maybe 128 KB is the optimum size for you... but it depends. MS SQL reads and writes data in 8 KB or 64 KB blocks - if your stripe block is bigger, there is some I/O splitting, which can take some time (though I do not know how much), and again, the Fibre Channel connection is happy with large blocks because of the small overhead...

    some more info:

    Controller cache settings: 100% read, no write-back cache.
    Cache block size: 8 KB.
    Cache read-ahead factor = 0 - because the I/O activity is random.
    Cache flush percentage: irrelevant - no write-back cache...


    Write-back cache can speed up the system only if there are spikes in writing to the disks. If there are longer write activities, write-back cache can slow the process down, and there is a possibility of data loss (though the system must be prepared for crashes if it is such a system, and the probability is low... :-)). For database disks it is in most cases better to disable the write cache and let the system write the data directly to the disks.
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.
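The I/O splitting kine mentions can be sketched with a small calculation. The 128 KB stripe unit and the example offsets are illustrative assumptions:

```python
# Sketch: how a single logical I/O maps onto array stripe chunks.
# CHUNK is the assumed per-disk stripe unit; an I/O that crosses a
# chunk boundary is split into requests to more than one disk.

CHUNK = 128 * 1024  # assumed 128 KB stripe unit

def chunks_touched(offset, size, chunk=CHUNK):
    """Number of stripe chunks (hence disk requests) one I/O touches."""
    first = offset // chunk
    last = (offset + size - 1) // chunk
    return last - first + 1

# A 64 KB read aligned to a chunk boundary stays within one chunk...
print(chunks_touched(0, 64 * 1024))          # 1
# ...but the same read starting 96 KB into a chunk spans two chunks.
print(chunks_touched(96 * 1024, 64 * 1024))  # 2
```

So with a 128 KB stripe unit, a 64 KB database I/O usually stays within one chunk, but any I/O that straddles a chunk boundary gets split - one of the reasons alignment and stripe size interact with database block size.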