The recent thread on Anaconda and RAID10 made me start to think about how to partition a server I'm about to set up. I have two 146GB SCSI drives on an IBM x3550. It will be used as a build system, so there is no critical data on it: the source code will be checked out of our source control system, and the build results are copied to another system. I usually build my systems with Kickstart, so if a disk dies, I can rebuild it quickly.

Given all that, how would you partition these disks? I keep going back and forth between the various options (HW RAID, SW RAID, LVM, etc.). Speed is more important to me than redundancy. I'm tempted to install the OS on one drive and use the entire second drive for data; that way I can rebuild or upgrade the OS without touching the data. But that would waste a lot of disk space, as the OS does not need 146GB.

The only thing I'm fairly sure of is to put 2GB of swap on each drive; beyond that, everything is still up in the air. I am looking for any and all suggestions from the collective wisdom and experience of this list.

Thanks,
Alfred
On Wednesday, 9 May 2007, Alfred von Campe wrote:
> The recent thread on Anaconda and RAID10 made me start to think about
> how to partition a server I'm about to set up. I have two 146GB SCSI
> drives on an IBM x3550. It will be used as a build system. As such,
> there is no critical data on these systems, as the source code will
> be checked out of our source control system, and the build results
> are copied to another system. I usually build my systems with
> Kickstart, so if a disk dies, I can rebuild it quickly.
>
> Given all that, how would you partition these disks? I keep going
> back and forth between various options (HW RAID, SW RAID, LVM,
> etc.). I guess speed is more important to me than redundancy. I'm
> tempted to install the OS on one drive and use the entire second
> drive for data. This way I can rebuild or upgrade the OS without
> touching the data. But that will waste a lot of disk space, as the
> OS does not need 146GB.
>
> The only thing I'm pretty sure of is to put 2GB of swap on each
> drive, but after that everything is still in the air. I am looking
> for any and all suggestions from the collective wisdom and experience
> of this list.

Ask yourself this question: does the company lose money when the build system is down for a restore? How much? How long does a restore take?

Mirroring disks is not a replacement for backup. It is a way to improve the availability of a system (no downtime when a disc dies), so it can be worthwhile even when there is no important data on the machine. If that matters to you, use RAID-1 across the entire discs.

If reduced availability is not a problem for you (you can easily afford a day of downtime when a disc dies), use RAID-0 across the entire discs. It will give you a nice performance boost; on a build host especially, people will love the extra performance of the disc array.

A combination of RAID-0 and RAID-1 may also be an option: make a small RAID-1 partition for the operating system (say 20GB) and a big RAID-0 partition for the data. This way you get maximum performance on the data partition, but when a disc dies you do not need to reinstall the operating system. Just put in a new disc, let the RAID-1 rebuild itself in the background, and restore your data. This can considerably reduce the downtime (and the amount of work for you) when a disc dies.

HW vs. SW RAID: kind of a religious question. HW has some advantages with RAID-5 or RAID-6 (less CPU load); with RAID-0 or RAID-1 there should not be any difference performance-wise. HW RAID gives you some advantages in terms of handling, e.g. hot-plugging of discs, a nice administration console, RAID-10 during install ;-), etc. It's up to you to decide whether it is worth the money. Plus you need to find a controller that is well supported in Linux.

regards,
Andreas Micklei

P.S. Putting lots of RAM into the machine (for the buffer cache) has more impact than RAID-0 in my experience. Of course, that depends on your filesystem usage pattern.

P.P.S. Creating one swap partition on each disc is correct, because swapping to RAID-0 is useless. Only if you decide to use RAID-1 for the whole disc should you also swap to RAID-1.

--
Andreas Micklei
IVISTAR Kommunikationssysteme AG
Ehrenbergstr. 19 / 10245 Berlin, Germany
http://www.ivistar.de
Handelsregister: Berlin Charlottenburg HRB 75173
Umsatzsteuer-ID: DE207795030
Vorstand: Dr.-Ing. Dirk Elias
Aufsichtsratsvorsitz: Dipl.-Betriebsw. Frank Bindel
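[A minimal sketch of the RAID-1 + RAID-0 split Andreas describes, using Linux software RAID. The device names (/dev/sda, /dev/sdb), partition numbers, and sizes are only assumptions for illustration; the same layout could also be expressed in a Kickstart file or configured in a hardware controller's setup utility.]

    # Assumed layout on both discs: sd?1 ~20GB (OS), sd?2 2GB (swap), sd?3 rest (data)

    # Small RAID-1 mirror for the operating system
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # Big RAID-0 stripe for the build data
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3

    mkfs.ext3 /dev/md0    # OS filesystem
    mkfs.ext3 /dev/md1    # data filesystem

    # One plain swap partition per disc, as suggested in the P.P.S.
    mkswap /dev/sda2
    mkswap /dev/sdb2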
Alfred von Campe wrote:
> The recent thread on Anaconda and RAID10 made me start to think about
> how to partition a server I'm about to set up. I have two 146GB SCSI
> drives on an IBM x3550. It will be used as a build system. As such,
> there is no critical data on these systems, as the source code will be
> checked out of our source control system, and the build results are
> copied to another system. I usually build my systems with Kickstart,
> so if a disk dies, I can rebuild it quickly.
>
> Given all that, how would you partition these disks? I keep going
> back and forth between various options (HW RAID, SW RAID, LVM, etc.).
> I guess speed is more important to me than redundancy. I'm tempted to
> install the OS on one drive and use the entire second drive for data.
> This way I can rebuild or upgrade the OS without touching the data.
> But that will waste a lot of disk space, as the OS does not need 146GB.
>
> The only thing I'm pretty sure of is to put 2GB of swap on each drive,
> but after that everything is still in the air. I am looking for any
> and all suggestions from the collective wisdom and experience of this
> list.

Three raid1 sets:

raid1 #1 = /
raid1 #2 = swap
raid1 #3 = rest of disk on /home

for the simple fact that a dead disk won't bring down your system and halt your builds until you rebuild the machine. But if you really only care about maximum speed and are not worried about crashes and their consequences, then replace the raid1 with raid0.

I have no reason to use LVM on boot/OS/system partitions. If I have something that fills the disk that much, I move it to another storage device. In your case, striped LVM could be used instead of raid0.

--
Toby Bluhm
Midwest Instruments Inc.
30825 Aurora Road Suite 100
Solon Ohio 44139
440-424-2250
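[A rough sketch of the striped-LVM alternative Toby mentions for the data area, assuming /dev/sda3 and /dev/sdb3 are the "rest of disk" partitions on each drive; the volume group and logical volume names are made up for illustration.]

    pvcreate /dev/sda3 /dev/sdb3
    vgcreate vg_data /dev/sda3 /dev/sdb3

    # --stripes 2 spreads the LV across both physical volumes, 64KB stripe size
    lvcreate --name lv_home --stripes 2 --stripesize 64 -l 100%FREE vg_data

    mkfs.ext3 /dev/vg_data/lv_home
    mount /dev/vg_data/lv_home /home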
Alfred von Campe wrote:
> The recent thread on Anaconda and RAID10 made me start to think about
> how to partition a server I'm about to set up. I have two 146GB SCSI
> drives on an IBM x3550. It will be used as a build system. As such,
> there is no critical data on these systems, as the source code will be
> checked out of our source control system, and the build results are
> copied to another system. I usually build my systems with Kickstart, so
> if a disk dies, I can rebuild it quickly.

I used kickstart, pxe, tftp and dhcp to manage a cluster of mail servers. Two-disk 1U boxes too. /, swap, /var (logging was to a central log host).

> Given all that, how would you partition these disks? I keep going back
> and forth between various options (HW RAID, SW RAID, LVM, etc.). I
> guess speed is more important to me than redundancy. I'm tempted to
> install the OS on one drive and use the entire second drive for data.
> This way I can rebuild or upgrade the OS without touching the data. But
> that will waste a lot of disk space, as the OS does not need 146GB.

A SCSI RAID controller with write cache? Hardware RAID, definitely. Especially if the driver supports the write cache.

> The only thing I'm pretty sure of is to put 2GB of swap on each drive,
> but after that everything is still in the air. I am looking for any and
> all suggestions from the collective wisdom and experience of this list.

Swap should go on a raid1 device, whether a partition or a swap file.
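[If swap is to sit on a mirror rather than on plain per-disc partitions, a software-RAID sketch might look like this; again, the device and partition names are placeholders, not the actual layout of this box.]

    # Mirror the two 2GB swap partitions so a dead disc doesn't kill running processes
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkswap /dev/md2
    swapon /dev/md2

    # Make it persistent across reboots
    echo "/dev/md2  swap  swap  defaults  0 0" >> /etc/fstab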
[Repost - for some reason my reply from earlier this morning did not go through]

Thanks everyone for all your suggestions/comments.

> Ask yourself this question: does the company lose money when the build
> system is down for a restore? How much? How long does a restore take?

No, no money lost. If I keep a spare drive, it should take less than an hour to restore the system.

> Mirroring disks is not a replacement for backup. It is a way to improve
> the availability of a system (no downtime when a disc dies), so it can be
> worthwhile even when there is no important data on the machine. If that
> matters to you, use RAID-1 across the entire discs.

This would waste the most disk space, but it is certainly a possibility.

> If reduced availability is not a problem for you (you can easily afford a
> day of downtime when a disc dies), use RAID-0 across the entire discs. It
> will give you a nice performance boost; on a build host especially,
> people will love the extra performance of the disc array.

But if either disk dies, the whole system is unusable. I don't think I will use this option.

> A combination of RAID-0 and RAID-1 may also be an option: make a small
> RAID-1 partition for the operating system (say 20GB) and a big RAID-0
> partition for the data. This way you get maximum performance on the data
> partition, but when a disc dies you do not need to reinstall the
> operating system. Just put in a new disc, let the RAID-1 rebuild itself
> in the background, and restore your data. This can considerably reduce
> the downtime (and the amount of work for you) when a disc dies.

Hmm, this sounds like a possibility. I'll have to figure out how to do this (I haven't used HW RAID before).

> HW vs. SW RAID: kind of a religious question. HW has some advantages with
> RAID-5 or RAID-6 (less CPU load); with RAID-0 or RAID-1 there should not
> be any difference performance-wise. HW RAID gives you some advantages in
> terms of handling, e.g. hot-plugging of discs, a nice administration
> console, RAID-10 during install ;-), etc. It's up to you to decide
> whether it is worth the money. Plus you need to find a controller that is
> well supported in Linux.

Does anyone know if the RAID controller that comes in an IBM x3550 is supported on CentOS 4 & 5? I assume that it is.

> P.S. Putting lots of RAM into the machine (for the buffer cache) has more
> impact than RAID-0 in my experience. Of course, that depends on your
> filesystem usage pattern.

The system has 4GB.

> P.P.S. Creating one swap partition on each disc is correct, because
> swapping to RAID-0 is useless. Only if you decide to use RAID-1 for the
> whole disc should you also swap to RAID-1.

Will do.

> Three raid1 sets:
>
> raid1 #1 = /
> raid1 #2 = swap
> raid1 #3 = rest of disk on /home
>
> for the simple fact that a dead disk won't bring down your system and
> halt your builds until you rebuild the machine.

Yes, I like that.

> But if you really only care about maximum speed and are not worried about
> crashes and their consequences, then replace the raid1 with raid0.

I like the earlier suggestion of combining RAID0 and RAID1.

> I have no reason to use LVM on boot/OS/system partitions. If I have
> something that fills the disk that much, I move it to another storage
> device. In your case, striped LVM could be used instead of raid0.

That's why I can't decide what the best approach is. So many different ways to skin this cat.

Thanks,
Alfred
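[For the question about the x3550's controller, a quick way to see what a running CentOS system or rescue environment actually detects. The driver names below are common IBM ServeRAID/LSI modules and only a guess at what this particular box uses.]

    lspci | grep -iE 'raid|scsi|sas'                 # does the controller show up on the PCI bus?
    lsmod | grep -iE 'aacraid|megaraid|mptsas|ips'   # is a matching driver loaded?
    cat /proc/scsi/scsi                              # are the logical drives visible as SCSI devices?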