I have had great luck with nvidia fakeraid on RAID1, but I see there are preferences for software raid. I have very little hands-on with full Linux software RAID, and that was about 14 years ago. I am trying to determine which to use on a rebuild in a "standard" CentOS/Xen environment.

It seems to me that FakeRaid is/can be taken care of completely in dom0 by dmraid, whereas with software RAID there *might* be an option to pass that role off more granularly to the domUs so it isn't performed by dom0 at all. I have a small number of tiny domUs that rarely change (like an OpenVPN server) that are well handled just by backups and firewall-based failover and don't need RAID1.

Is there any feedback on where the performance and availability trade-off lies here?
> I have had great luck with nvidia fakeraid on RAID1, but I see there
> are preferences for software raid.

I have always heard that fakeraid and software RAID perform the same.

Neil

--
Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net
CentOS 5.4 VPS with unmetered bandwidth only $25/month!
No overage charges, 7 day free trial, Google Checkout accepted
Christopher G. Stach II
2009-Dec-02 20:59 UTC
[CentOS-virt] Slightly OT: FakeRaid or Software Raid
----- "Ben M." <centos at rivint.com> wrote:> I have had great luck with nvidia fakeraid on RAID1, but I see there > are > preferences for software raid. I have very little hands on with full > Linux software RAID and that was about 14 years ago.MD RAID. I'd even opt for MD RAID over a lot of hardware implementations. This writeup summarizes a bit of why: http://jeremy.zawodny.com/blog/archives/008696.html Hardware RAID's performance is obviously going to be better, but it's only worth it if you *need* it (more than ~8 disks, parity). If you're just doing RAID 0, 1, or 10 in a single box and you're not pushing it to its limits as a DB server or benchmarking and going over it with a magnifying glass, you probably won't notice a difference in performance. I'll take fewer moving parts and portability. As someone already said, dmraid is done in software, too. "Fakeraid" is basically the same as MD RAID, but with an extra piece of hardware and extra logic bits to fail. -- Christopher G. Stach II
Christopher G. Stach II
2009-Dec-02 22:21 UTC
[CentOS-virt] Slightly OT: FakeRaid or Software Raid
----- "Ben M." <centos at rivint.com> wrote:> Thanks. The portability bonus is a big one. Just two other questions I > think. > > - Raid1 entirely in dom0?That's how I do it. dom0 should be handling the supply of the hardware services, and if dom0 has all of the drivers and physical disks under its control, it counts. dom0 should be able to handle the block scheduling better. Now, if booting guests with iSCSI disks and all the LUNs are on a SAN or otherwise networked and not directly under dom0 control, I would handle it differently. Besides, you don't want the added complexity of managing all of the LVs and extra guest configuration.> - Will RE type HDs be bad or good in this circumstance? I buy RE types > but have recently become aware of the possibility where TLER > (Time-Limited Error Recovery) can be an issue when run outside of a > Raid, e.g. alone on desktop machine.Definitely good as long as you have more than one disk in it. :) You want your disk to time out instead of holding up the array. You usually can't go wrong with NCQ and TLER. NCQ won't get you much in RAID 1, though. You can turn it off with the driver in most cases. -- Christopher G. Stach II
Christopher G. Stach II
2009-Dec-02 22:42 UTC
[CentOS-virt] Slightly OT: FakeRaid or Software Raid
----- "Grant McWilliams" <grantmasterflash at gmail.com> wrote:> He had a two drive RAID 1 drives and at least one of them failed but > he didn't have any notification software set up to let him know that > it had failed. And since that's the case he didn't know if both drives > had failed or not. I wonder why he things software RAID would be a) > more reliable b) fix itself magically without telling him. He never > did say if he was able to use the second disk. I have 75 machines with > 3ware controllers and on the very rare occasion that a controller > fails you plug in another one and boot up.I have a pile of various RAID controllers from 3ware, Promise, LSI, the utter garbage that older PERCs were, etc. that have pissed me off by randomly dropping disks, not rebuilding or detecting its own disks, hanging, etc. and very importantly, not allowing me to move disks between machines. That being said, I still have two 3ware cards and some LSI that are fine, but most of my arrays are software. Anecdotal evidence aside, unless you know what kind of performance you need, you have usage metrics, and you know how to benchmark properly, you probably don't need the risk and marginal performance improvement of some extra hardware. (This was originally about fakeraid, wasn't it?)> I don't use software RAID in any sort of production environment unless > it's RAID 0 and I don't care about the data at all. I've also tested > the speed between Hardware and Software RAID 5 and no matter how many > CPUs you throw at it the hardware will win.I don't allow RAID 5, so the increased checksum processing performance doesn't have any bearing on my choices. :)> Even in the case when a > 3ware RAID controller only has one drive plugged in it will beat a > single drive plugged into the motherboard if applications are > requesting dissimilar data. One stream from an MD0 RAID 0 will be as > fast as one stream from a Hardware RAID 0. Multiple streams of > dissimilar data will be much faster on the Hardware RAID controller > due to controller caching.True. That probably falls under be the aforementioned "need". :) You can't beat the performance of a cache, although the Linux filesystem cache will perform reasonably well. If you're using something like an ACID compliant DB, you'll want the battery backed hardware cache for any sizable amount of I/O. Discussing performance without discussing benchmark methodology is annoying and often useless, but if you want to go down that route... (Also, your use case is contrary to your storage design. You would use RAID 0 for serial access and something parallel for random access. Yes, a Ford Explorer doesn't hug corners at 120 mph.) -- Christopher G. Stach II
Christopher G. Stach II
2009-Dec-03 05:48 UTC
[CentOS-virt] Slightly OT: FakeRaid or Software Raid
----- "Grant McWilliams" <grantmasterflash at gmail.com> wrote:> Interesting thoughts on raid5 although I doubt many would agree.That's okay. We all have our off days... Here's some quality reading: http://blogs.sun.com/bonwick/entry/raid_z http://www.cyberciti.biz/tips/raid5-vs-raid-10-safety-performance.html http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt http://www.miracleas.com/BAARF/1.Millsap2000.01.03-RAID5.pdf http://www.codinghorror.com/blog/archives/001233.html http://web.ivy.net/carton/rant/ml/raid-raid5writehole-0.html Maybe you are thinking of RAID 6.> I don't see how the drive type has ANYTHING to do with the RAID > level.IOPS, bit error ratio, bus speed, and spindle speed tend to factor in and are usually governed by the drive type. (The BER is very important for how often you can expect the data elves come out and chew on your data during RAID 5 rebuilds.) You will use those numbers to calculate the number of stripe segments, controllers, and disks. Combine that with the controller's local bus, number of necessary controllers, host bus, budget, and other business requirements and you have a RAID type.> a RAID 10 (or 0+1) will never reach the write... performance of > a RAID-5.(*cough* If you keep the number of disks constant or the amount of usable space? "Things working" tends to trump CapEx, despite the associated pain, so I will go with "amount of usable space.") No. -- Christopher G. Stach II
Christopher G. Stach II
2009-Dec-03 14:08 UTC
[CentOS-virt] Slightly OT: FakeRaid or Software Raid
----- "Grant McWilliams" <grantmasterflash at gmail.com> wrote:> On Wed, Dec 2, 2009 at 9:48 PM, Christopher G. Stach II < > cgs at ldsys.net > wrote: > > ----- "Grant McWilliams" < grantmasterflash at gmail.com > wrote: > > > a RAID 10 (or 0+1) will never reach the write... performance of > > a RAID-5. > > (*cough* If you keep the number of disks constant or the amount of > usable space? "Things working" tends to trump CapEx, despite the > associated pain, so I will go with "amount of usable space.") > > No. > > -- > Christopher G. Stach II > > Nice quality reading. I like theories as much as the next person but > I'm wondering if the Toms Hardware guys are on crack or you disapprove > of their testing methods. > > http://www.tomshardware.com/reviews/external-raid-storage,1922-9.htmlThey used a constant number of disks to compare two different hardware implementations, not to compare RAID 5 vs. RAID 10. They got the expected ~50% improvement from the extra stripe segment in RAID 5 with a serial access pattern. Unfortunately, that's neither real world use nor the typical way you would fulfill requirements. If you read ahead to the following pages, you have a nice comparison of random access patterns and RAID 10 coming out ahead (with one less stripe segment and a lot less risk): http://www.tomshardware.com/reviews/external-raid-storage,1922-11.html http://www.tomshardware.com/reviews/external-raid-storage,1922-12.html -- Christopher G. Stach II
Christopher G. Stach II
2009-Dec-03 21:50 UTC
[CentOS-virt] Slightly OT: FakeRaid or Software Raid
----- "Grant McWilliams" <grantmasterflash at gmail.com> wrote:> RAID 5 is faster than RAID 10 for reads and writes.*Serial* reads and writes. That is not the access pattern that you will have in most virtualization hosts.> What wasn't in the test (but is in others that they've done) is RAID > 6. I'm not sure I'm sold on it because it gives us about the same > level of redundancy as RAID 10 but with less performance than RAID 5. > Theoretically it would get soundly trounced by RAID 10 on IOs and > maybe be slower on r/w transfer as well.RAID 6 is pretty slow, but you can stripe them as RAID 60. If you need that kind of fault tolerance, the performance hit is negligible. On high volume boxes with low performance requirements, say NLS on an 8-12 bay 2U or 3U machine, I use RAID 6 with one hot spare. -- Christopher G. Stach II