RAID10 configuration for 6 drives

NavStudent
Member Posts: 399
Following on from this thread, I've opened a new one:
http://www.mibuso.com/forum/viewtopic.php?p=127281
My basic question is: how can you have a RAID 10 configuration with 6 drives?
Thank you.
my 2 cents
Comments
-
RAID 0 = stripe
RAID 1 = Mirror
RAID 1 and 0 = stripe AND mirror. This is sometimes called RAID10, RAID1+0, RAID01 or RAID0+1 - all varieties of a nested RAID level where disks are striped and mirrored at the same time, without parity.
Depending on the settings on the controller, it either stripes mirrored pairs or mirrors striped arrays.
So, with 6 disks:
Stripe ((Mirror 1+a) (Mirror 2+b) (Mirror 3+c))
or
Mirror ((Stripe 1 2 3) (Stripe a b c))
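A minimal sketch in Python of the two groupings (illustration only - it just reuses the 1/2/3/a/b/c labels from above, not any controller's real device names):

```python
# Illustration only: the two ways to nest mirror and stripe with 6 disks,
# plus how a logical block lands on physical disks in the stripe-of-mirrors case.

MIRROR_PAIRS = [("1", "a"), ("2", "b"), ("3", "c")]    # RAID 1+0: stripe over mirrored pairs
STRIPED_SETS = (("1", "2", "3"), ("a", "b", "c"))      # RAID 0+1: mirror over striped sets

def stripe_of_mirrors_targets(logical_block):
    """Each logical block goes to ONE mirrored pair and is written to BOTH of its disks."""
    return MIRROR_PAIRS[logical_block % len(MIRROR_PAIRS)]

print("RAID 1+0 grouping:", MIRROR_PAIRS)
print("RAID 0+1 grouping:", STRIPED_SETS)
for block in range(6):
    print(f"logical block {block} -> disks {stripe_of_mirrors_targets(block)}")
```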
<edit>modified for better semantics</edit>
-
Hi
NavStudent wrote: My basic question is how can you have a RAID10 configuration with 6 drives.
Another question which arises is: how are you going to select the stripe size to balance the load across the 3 disk pairs?
To DenSter: RAID10 is always a stripe over mirrored pairs, regardless of the controller. A mirror of striped sets is called RAID01.
Regards,
Slawek
Slawek Guzek
Dynamics NAV, MS SQL Server, Wherescape RED;
PRINCE2 Practitioner - License GR657010572SG
GDPR Certified Data Protection Officer - PECB License DPCDPO1025070-2018-03
-
Thanks Slawek,
So RAID 10 with 6 disks is
Stripe ((Mirror 1+a) (Mirror 2+b) (Mirror 3+c))
And RAID 10 with 8 disks would be
Stripe ((Mirror 1+a) (Mirror 2+b) (Mirror 3+c) (Mirror 4+d))
As for the stripe size, wouldn't it be at the bit or byte level?
How much of a performance gain do you get for RAID 10 with 8 disks compared to RAID 10 with 4 disks?
my 2 cents
-
Hi
Yes, if I understood your syntax correctly.
In an n-drive RAID0 array, any block written to disk is split into n parts and written in parallel - each part to its own disk.
In RAID10, instead of a single disk at each stripe position you have two mirrored disks, and striping works the same way.
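Here is a toy sketch of that write path in Python (the 64 KB stripe unit and disk names are made up; this is not a real driver, just the splitting and mirroring described above):

```python
# Toy model of a RAID10 write: split the buffer into stripe-unit chunks across
# the mirrored pairs (the RAID0 part), then write each chunk to both disks of
# its pair (the RAID1 part). Purely illustrative - sizes and names are invented.

MIRROR_PAIRS = [("1", "a"), ("2", "b"), ("3", "c")]    # 6-disk RAID10
STRIPE_UNIT = 64 * 1024                                # hypothetical 64 KB stripe unit

def raid10_write(offset, data):
    """Return (disk, disk_offset, length) operations that could run in parallel."""
    ops = []
    n = len(MIRROR_PAIRS)
    pos = 0
    while pos < len(data):
        unit_index = (offset + pos) // STRIPE_UNIT
        pair = MIRROR_PAIRS[unit_index % n]
        disk_offset = (unit_index // n) * STRIPE_UNIT + (offset + pos) % STRIPE_UNIT
        length = min(STRIPE_UNIT - (offset + pos) % STRIPE_UNIT, len(data) - pos)
        for disk in pair:                              # RAID1: same chunk to both disks
            ops.append((disk, disk_offset, length))
        pos += length
    return ops

for op in raid10_write(0, b"x" * (256 * 1024)):        # a 256 KB write
    print(op)
```

Each chunk is written twice (once to each disk of its mirror), but the two copies, and the chunks on different pairs, can all be written in parallel.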
In theory you should get a 50% write speed improvement using 8 disks instead of 4. My experience is quite close to that theory.
IMHO if you have 8 disks it is better to configure two of them in RAID1 (or four in RAID10) and use that drive exclusively for a single log file and nothing, literally nothing else. My point is that this prevents your log disks from doing any seeks, which should give you a real performance boost when committing transactions.
The remaining 6 (or 4) disks you may configure as two arrays - for the data files and for tempdb's data and log files, for example.
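A purely hypothetical 8-disk split along those lines (array names, disk names and file placements are invented just to show the idea, not a recommendation for any specific hardware):

```python
# Hypothetical 8-disk layout illustrating the split described above.
# Array names, disk names and file placements are invented for illustration.
layout = {
    "log_array":    {"level": "RAID1",  "disks": ["d1", "d2"],
                     "holds": "database transaction log only - nothing else"},
    "data_array":   {"level": "RAID10", "disks": ["d3", "d4", "d5", "d6"],
                     "holds": "database data file(s)"},
    "tempdb_array": {"level": "RAID1",  "disks": ["d7", "d8"],
                     "holds": "tempdb data and log files"},
}

for name, cfg in layout.items():
    print(f"{name}: {cfg['level']} on {cfg['disks']} -> {cfg['holds']}")
```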
Regards,
Slawek
Slawek Guzek
Dynamics NAV, MS SQL Server, Wherescape RED;
PRINCE2 Practitioner - License GR657010572SG
GDPR Certified Data Protection Officer - PECB License DPCDPO1025070-2018-03
-
Slawek Guzek wrote: to DenSter: RAID10 is always a stripe over mirrored pairs regardless of controller. A mirror of striped sets is called RAID01.
I know that for some hardware purists there is a big difference between 1+0 and 0+1, and I understand the difference, but I have heard people who call themselves hardware specialists explain it both ways, so I am not trying to make any claim one way or the other.
-
Hi,
DenSter wrote: Depending on the controller, it either stripes mirrored pairs or mirrors striped arrays.
1. The definitions of RAID10 (or RAID1+0) and RAID01 (or RAID0+1) are quite precise,
2. There are significant differences in performance and protection levels between them.
It is not about being (or not being) a hardware or technical-language purist, because those differences are not only in the name.
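To make the protection difference concrete, here is a small brute-force comparison in Python (6 hypothetical disks, labels as above): it counts which two-disk failures lose data in each layout. In RAID10 a two-disk failure is fatal only when both failed disks are in the same mirrored pair; in RAID01 losing one disk breaks its whole striped set, so any second failure in the other set is fatal.

```python
# Brute-force check of which two-disk failures cause data loss in a 6-disk
# RAID10 (stripe over mirrors) vs RAID01 (mirror over stripes). Illustration only.
from itertools import combinations

DISKS = ["1", "2", "3", "a", "b", "c"]
MIRROR_PAIRS = [{"1", "a"}, {"2", "b"}, {"3", "c"}]    # RAID10 layout
STRIPED_SETS = [{"1", "2", "3"}, {"a", "b", "c"}]      # RAID01 layout

def raid10_fails(dead):
    # Data is lost only if some mirrored pair loses both of its disks.
    return any(pair <= dead for pair in MIRROR_PAIRS)

def raid01_fails(dead):
    # A striped set survives only if ALL of its disks survive;
    # data is lost when both striped sets are broken.
    return all(s & dead for s in STRIPED_SETS)

fatal10 = sum(raid10_fails(set(c)) for c in combinations(DISKS, 2))
fatal01 = sum(raid01_fails(set(c)) for c in combinations(DISKS, 2))
print(f"two-disk failures that lose data: RAID10 {fatal10}/15, RAID01 {fatal01}/15")
```

With 6 disks, only 3 of the 15 possible two-disk failures break the RAID10 layout, versus 9 of 15 for RAID01.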
Looks like we didn't understand each other
Regards,
Slawek
Slawek Guzek
Dynamics NAV, MS SQL Server, Wherescape RED;
PRINCE2 Practitioner - License GR657010572SG
GDPR Certified Data Protection Officer - PECB License DPCDPO1025070-2018-03
-
I did not say RAID10 is one thing and RAID01 is another, exactly because I wanted to avoid this discussion. What I said was:
DenSter wrote: RAID 0 = stripe
RAID 1 = Mirror
RAID 1 + 0 = stripe AND mirror
DenSter wrote: So, with 6 disks:
Stripe ((Mirror 1+a) (Mirror 2+b) (Mirror 3+c))
or
Mirror ((Stripe 1 2 3) (Stripe a b c))
So, I edited my original post, just to make it perfectly clear what I meant.
Slawek_Guzek wrote: »
In theory you should get a 50% write speed improvement using 8 disks instead of 4. My experience is quite close to that theory.
IMHO if you have 8 disks it is better to configure two of them in RAID1 (or four in RAID10) and use that drive exclusively for a single log file and nothing, literally nothing else. My point is that this prevents your log disks from doing any seeks, which should give you a real performance boost when committing transactions.
The remaining 6 (or 4) disks you may configure as two arrays - for the data files and for tempdb's data and log files, for example.
Regards,
Slawek
Dear Slawek, I have 6 HDDs and need to set up Proxmox VE for a few VMs, and I have 2 questions about your explanation.
1. You wrote that 8 disks give a ~50% write speed improvement compared with 4 disks, but then you recommend taking some of those 8 disks away for another RAID array. That will reduce the speed improvement for the data array, won't it? And do you mean that the "real performance boost with transactions committing" will outweigh this reduction, i.e. make it tolerable?
2. The rest of the disks - 6 or 4 devices - you wrote to "configure as two arrays". Do you mean literally 2 arrays (RAID1 each)? If 2 arrays, why? Why are two RAID1 arrays preferable to one RAID10, which provides a speed improvement?
Thank you!
-
Keep in mind that these older discussions were concerned with configuring old-style "spinning" disks. They are less relevant to modern SSD-based systems, although RAID can still play a role today.
There are no bugs - only undocumented features.
-
In the world of spinning disks, there is a delay caused by mechanics - a seek delay when the heads need to adjust their position, and a rotational delay when the heads wait until the rotating platter is at the 'correct' position to read or write data.
If you have one large array partitioned between different logical disks, the different workloads hit the same physical disks - forcing them to constantly change tracks and wait for the correct sectors.
Why different workloads? The transaction log in SQL Server is written sequentially. Every transaction is written to disk on commit - bypassing any hardware disk cache. A transaction is not committed until the OS confirms that the data has hit the drive.
Data file(s), on the other hand, are read as required and usually written periodically at every checkpoint, but when there is a shortage of memory for SQL Server to read new data, older data can potentially be written out at any time. Therefore, when log writes and data reads/writes hit the same physical disks, the disks are forced to constantly change their head positions. Which costs time - a lot of time.
If you take a pair of disks from a larger array you'll make it a bit slower. But then you 'sort' the disk operations - sequential transaction log writes go to separate disks, which rarely need to change their head position. Eliminating disk seeks and waits gives a much bigger write performance boost than extra disks added to a RAID array.
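A toy latency model in Python makes the argument concrete (the millisecond figures are made up; this is the reasoning above expressed in code, not a benchmark): on a shared array every log write pays a seek because interleaved data I/O keeps pulling the heads away, while on dedicated log disks the heads stay put.

```python
# Toy latency model with made-up numbers: sequential log writes on a shared
# array vs. on dedicated disks. Not a benchmark - just the argument in code.

SEEK_MS = 8.0        # hypothetical average seek + rotational delay
SEQ_WRITE_MS = 0.2   # hypothetical time to write one log block once heads are in place

def total_log_latency_ms(log_writes, shared_with_random_io):
    """Total milliseconds spent on log writes."""
    if shared_with_random_io:
        # Random data I/O in between pulls the heads away, so every log write seeks again.
        return log_writes * (SEEK_MS + SEQ_WRITE_MS)
    # On dedicated disks the heads stay on the log track; only the first write seeks.
    return SEEK_MS + log_writes * SEQ_WRITE_MS

print("shared array   :", total_log_latency_ms(1000, True), "ms")
print("dedicated disks:", total_log_latency_ms(1000, False), "ms")
```

With these made-up numbers, 1000 log writes cost roughly 8.2 seconds on the shared array versus roughly 0.2 seconds on dedicated disks.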
That's in the world of spinning disks...
As @bbrown mentioned - with SSDs in place, all of the above no longer matters that much. It still matters a bit, as SSDs are still better at sequential operations than at random ones, but overall SSD speed makes those concerns far less relevant.
Slawek Guzek
Dynamics NAV, MS SQL Server, Wherescape RED;
PRINCE2 Practitioner - License GR657010572SG
GDPR Certified Data Protection Officer - PECB License DPCDPO1025070-2018-03
-
Slawek_Guzek wrote: » Why different workloads? The transaction log in SQL Server is written sequentially. Every transaction is written to disk on commit - bypassing any hardware disk cache. A transaction is not committed until the OS confirms that the data has hit the drive.
Data file(s), on the other hand, are read as required and usually written periodically at every checkpoint, but when there is a shortage of memory for SQL Server to read new data, older data can potentially be written out at any time. Therefore, when log writes and data reads/writes hit the same physical disks, the disks are forced to constantly change their head positions. Which costs time - a lot of time.
Dear Slawek, thank you for the clear explanation! Yes, we still use HDD disks, so it all matters to us.
Could you also give a link (or links) to practical articles or videos about this multi-RAID method, if you have any? Or perhaps some extra explanation in another topic?
For example, we have a Zimbra mail server, and I'm not sure it even allows specifying a separate database location during the installation process. So it's not clear whether we should treat the whole server as data or as database with regard to this multi-RAID method.
I mean, it's all clear in your explanation, but no doubt we'll have extra questions when we set up our specific platform, so extra links would be very useful if you have them.