Arch = x86_64
CentOS-6.4

We have a cold server with 32GB RAM and 8 x 3TB SATA drives mounted in hotswap
cells.  The intended purpose of this system is as an ERP application and DBMS
host.  The ERP application will likely eventually have web access, but at the
moment only dedicated client applications can connect to it.

I am researching how best to set this system up for use as a production host
employing RAID.  I have read the (minimal) documentation respecting RAID on
the RedHat site and have found and read a few online guides.  Naturally, in my
ignorance I have a bunch of questions to ask, and I probably have a bunch more
that I should but do not yet know enough to ask.

From what I have read it appears that the system disk must use RAID 1 if it
uses RAID at all.  Is this the case?  If so, is there any benefit to be had by
taking two of the 8 drives (6TB) solely to hold the OS and boot partition?
Should these two drives be pulled and replaced with two smaller ones, or
should we bother with RAID for the boot disk at all?

Given that one or two drive bays will be given over to the OS, what should be
the configuration of the remaining six?  It appears from what I have read that
RAID 5 is the only viable option.  It also appears that the amount of storage
available on a RAID5 array with N members is (N-1)/N of the raw capacity.  I
also read that as the number of members increases, both latency and the risk
of data loss increase.  As the amount of disk space we have in this unit
(24TB) is greater than the total storage of all our existing hosts, it appears
that a RAID5 array of 5 units would leave at least one hot spare in the
chassis, and two if the OS is put on one disk.

Alternatively, the thought comes to mind that we could do a RAID1 with two
RAID5 arrays, each of which has 3 drives.  Whether one would actually want to
do that seems to me a bit questionable, but it seems to be at least possible.

Comments, suggestions, caveats?

Regards,

--
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne                mailto:ByrneJB at Harte-Lyne.ca
Harte & Lyne Limited          http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3
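[The (N-1)/N capacity figure above can be sanity-checked with quick shell
arithmetic.  A sketch only, assuming six 3TB data drives remain after two are
set aside for the OS, as discussed in the thread:]

```shell
#!/bin/sh
# RAID5 keeps one drive's worth of parity: usable = (N-1) * drive size.
N=6            # data drives left after the OS pair
DRIVE_TB=3
RAW_TB=$(( N * DRIVE_TB ))
USABLE_TB=$(( (N - 1) * DRIVE_TB ))
echo "RAID5 of ${N} x ${DRIVE_TB}TB: ${USABLE_TB}TB usable of ${RAW_TB}TB raw"
```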
James B. Byrne wrote:
> Arch = x86_64
> CentOS-6.4
>
> We have a cold server with 32GB RAM and 8 x 3TB SATA drives mounted in
> hotswap cells.  The intended purpose of this system is as an ERP
> application and DBMS host.  The ERP application will likely eventually
> have web access but at the moment only dedicated client applications can
> connect to it.
>
> I am researching how to best set this system up for use as a production
> host employing RAID.  I have read the (minimal) documentation respecting
> RAID on the RedHat site and have found and read a few online guides.
<snip>
> From what I have read it appears that the system disk must use RAID 1 if
> it uses RAID at all.  Is this the case?  If so, is there any benefit to be

Why?  Where did you get that idea?  On all of our large RAIDs, we're using
RAID 6 - remember, with RAID 1, you have half the space of the physical
drives.

> had by taking two of the 8 drives (6TB) solely to hold the OS and boot
> partition?

Are you doing Linux software RAID, or do you have a hardware RAID
controller?  Me, I'd partition it up... lessee, the 3TB drive with which
I'm about to replace one of our users' root drives has a 1G /boot, 2G
swap, 500G /, and a 4th partition with the rest of the space.

> Given that one or two drive bays will be given over to the OS what
> should be the configuration of the remaining six?  It appears from what
> I have read that RAID 5 is the only viable option.

Sounds like old news, to me.  As I said, *all* of our large RAIDs are
running RAID 6.

> As the amount of disk space we have in this unit (24TB) is greater than

Note that I'd make it 14TB and 10TB, or perhaps 500G, maybe RAID 1, for a
total of 1TB, then 14TB and 9TB.

> the total storage of all our existing hosts it appears that a RAID5
> array of 5 units would leave at least one hot spare in the chassis and
> two if the OS is put on one disk.

Yeah, a hot spare's a good idea.
<snip>

        mark
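[The RAID 6 suggestion can be quantified with a quick shell calculation.  A
sketch using the 8 x 3TB drives from the thread: RAID 6 gives up two drives'
worth of capacity to parity, while RAID 1 mirroring halves the raw space.]

```shell
#!/bin/sh
# Usable space on 8 x 3TB drives: RAID6 loses two drives to parity,
# RAID1 mirroring loses half the raw capacity.
N=8
DRIVE_TB=3
RAID6_TB=$(( (N - 2) * DRIVE_TB ))
RAID1_TB=$(( N * DRIVE_TB / 2 ))
echo "RAID6: ${RAID6_TB}TB usable"
echo "RAID1: ${RAID1_TB}TB usable"
```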
On Thu, Nov 14, 2013 at 12:23 PM, James B. Byrne <byrnejb at harte-lyne.ca> wrote:
> Arch = x86_64
> CentOS-6.4
>
> We have a cold server with 32GB RAM and 8 x 3TB SATA drives mounted in
> hotswap cells.  The intended purpose of this system is as an ERP
> application and DBMS host.  The ERP application will likely eventually
> have web access but at the moment only dedicated client applications can
> connect to it.
>
> I am researching how to best set this system up for use as a production
> host employing RAID.  I have read the (minimal) documentation respecting
> RAID on the RedHat site and have found and read a few online guides.
> Naturally, in my ignorance I have a bunch of questions to ask and I
> probably have a bunch more that I should but do not know enough yet to
> ask.

Are you going to use hardware or software raid?

> From what I have read it appears that the system disk must use RAID 1 if
> it uses RAID at all.  Is this the case?  If so, is there any benefit to
> be had by taking two of the 8 drives (6TB) solely to hold the OS and boot
> partition?  Should these two drives be pulled and replaced with two
> smaller ones or should we bother with RAID for the boot disk at all?
>
> Given that one or two drive bays will be given over to the OS what
> should be the configuration of the remaining six?  It appears from what
> I have read that RAID 5 is the only viable option.

What about RAID10?  I've read that running a database server on raid5
isn't recommended, but raid1 or raid10 is recommended.

> It also appears that the amount of storage available on a RAID5 array
> with N members is (N-1)/N.  I also read that as the number of members
> increases both latency and the risk of data loss increase.  As the
> amount of disk space we have in this unit (24TB) is greater than the
> total storage of all our existing hosts it appears that a RAID5 array of
> 5 units would leave at least one hot spare in the chassis and two if the
> OS is put on one disk.

Space efficiency is less than that of raid5: rather than (n-1)/n of the
raw space with raid5, you get 1/2 with raid10, since two copies of
everything are kept.

> Alternatively, the thought comes to mind that we could do a RAID1 with
> two RAID5 arrays each of which have 3 drives.  Whether one would
> actually want to do that seems to me a bit questionable but it seems to
> be at least possible.

You're suggesting a raid5+1 or raid51:
http://en.wikipedia.org/wiki/Nested_RAID_levels

I wouldn't suggest nesting software raid if you can avoid it, due to the
complexity.  There are reasons to create a raid array from two hardware
arrays, but I'd avoid doing so.

> Comments, suggestions, caveats?
<snip>

--
---~~.~~---
Mike
//  SilverTip257  //
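[The raid5-versus-raid10 space trade-off above, expressed as integer
percentages of raw capacity.  A quick sketch; n is the member count, and the
raid10 figure assumes the usual two copies:]

```shell
#!/bin/sh
# Usable fraction of raw space: raid5 = (n-1)/n, raid10 (2 copies) = 1/2.
N=6
RAID5_PCT=$(( (N - 1) * 100 / N ))
RAID10_PCT=50
echo "raid5 of ${N} members:  ${RAID5_PCT}% usable"
echo "raid10 of ${N} members: ${RAID10_PCT}% usable"
```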
On Thu, November 14, 2013 12:51, Reindl Harald wrote:
>
> Am 14.11.2013 18:23, schrieb James B. Byrne:
>> From what I have read it appears that the system disk must use RAID 1
>> if it uses RAID at all.
>
> no!
>
> /boot must be RAID1, see below
>
> md0: /boot
> md1: /
> md2: /data
>
> [root at srv-rhsoft:~]$ cat /proc/mdstat
> Personalities : [raid1] [raid10]
> md2 : active raid10 sda3[4] sdb3[3] sdc3[5] sdd3[0]
>       3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
>       bitmap: 4/29 pages [16KB], 65536KB chunk
>
> md1 : active raid10 sda2[4] sdb2[3] sdc2[5] sdd2[0]
>       30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
>       bitmap: 0/1 pages [0KB], 65536KB chunk
>
> md0 : active raid1 sda1[4] sdb1[3] sdd1[0] sdc1[5]
>       511988 blocks super 1.0 [4/4] [UUUU]

So, this is saying, if I read it aright, that one can have multiple RAID
arrays spread over the same spindles, but each in differing partitions.
Is that right?

I am just getting started with this, so I am trying to fit what I am
reading regarding RAID with what I have dealt with in the past, mainly
LVM ext3 volumes.  So I am doubtless just not getting it in some
important way.

BTW, I intend to install CentOS-6.4 with software RAID, as the eight
disks are mounted in the system chassis.  As far as I can tell there is
no hardware RAID controller (unless there is one on the MB, in which case
SW RAID is likely a better choice anyway).

Regards,

--
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne                mailto:ByrneJB at Harte-Lyne.ca
Harte & Lyne Limited          http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3
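[For reference, a layout like the /proc/mdstat output quoted above -- several
md arrays built from matching partitions on the same four spindles -- would be
created with mdadm commands along these lines.  A sketch only: the device
names and levels are illustrative, and --create destroys existing data.]

```shell
#!/bin/sh
# Sketch: build /boot, /, and /data arrays from matching partitions on
# four disks.  Device names are illustrative; these commands are
# destructive and must be run as root against the intended disks.

# /boot: RAID1 with 1.0 metadata (superblock at the end), so the
# bootloader can read the filesystem as if it were a plain partition.
mdadm --create /dev/md0 --level=1 --metadata=1.0 \
      --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# /: RAID10, two near-copies across the four second partitions.
mdadm --create /dev/md1 --level=10 --layout=n2 \
      --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# /data: RAID10 over the large third partitions.
mdadm --create /dev/md2 --level=10 --layout=n2 \
      --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# Record the arrays so they are assembled at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```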
From: James B. Byrne <byrnejb at harte-lyne.ca>
> We have a cold server with 32GB RAM and 8 x 3TB SATA drives mounted in
> hotswap cells.  The intended purpose of this system is as an ERP
> application and DBMS host.
<snip>
> Alternatively, the thought comes to mind that we could do a RAID1 with
> two RAID5 arrays each of which have 3 drives.  Whether one would
> actually want to do that seems to me a bit questionable but it seems to
> be at least possible.

For some storage servers, we used the following cards with 2 small drives
for the system:
  http://www.sybausa.com/productInfo.php?iid=1134

But do you really need 10+ TB for your ERP+DBMS?  I'd just go with a
RAID10 for DBs...

JD