How well does btrfs perform across a mix of:

1 SSD and 1 HDD as a 'raid' 1 mirror for both data and metadata?

Similarly, across 2 SSDs and 2 HDDs (4 devices)?

Can multiple (small) SSDs be 'clustered' as one device and then mirrored
with one large HDD by btrfs directly? (Other than using lvm...)

The idea is to gain the random access speed of the SSDs but have the
HDDs as backup in case the SSDs fail due to wear...

The usage is to support a few hundred Maildirs + imap for users that
often have many thousands of emails in the one folder for their inbox...

(And no, the users cannot be trained to clean out their inboxes or to be
more hierarchically tidy... :-( )

Or is btrfs yet too premature to suffer such use?

Regards,

Martin
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
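For context, the mixed SSD+HDD 'raid' 1 asked about above can be created
with btrfs directly, mirroring both data and metadata. A minimal sketch,
with placeholder device names (/dev/sda = SSD, /dev/sdb = HDD):

```shell
# Mirror both metadata (-m) and data (-d) across the two devices.
# Device names are placeholders -- substitute your own.
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb

# Mount (any member device works) and confirm the allocation profile:
mount /dev/sda /mnt
btrfs filesystem df /mnt
```

Note that btrfs 'raid1' keeps exactly two copies of each chunk however
many devices are given, so several small SSDs plus one large HDD gives
staggered two-way mirroring across the pool, not an SSD set mirrored as
a whole against the HDD.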
On 01/05/12 20:35, Martin wrote:
> The idea is to gain the random access speed of the SSDs but have the
> HDDs as backup in case the SSDs fail due to wear...

Have you looked at the bcache project http://bcache.evilpiepirate.org/

sam
On 01/05/12 22:16, sam tygier wrote:
> On 01/05/12 20:35, Martin wrote:
>
>> The idea is to gain the random access speed of the SSDs but have the
>> HDDs as backup in case the SSDs fail due to wear...
>
> Have you looked at the bcache project http://bcache.evilpiepirate.org/

Many thanks for that. See also the latest on:

http://news.gmane.org/gmane.linux.kernel.bcache.devel

Looks rather interesting, but also too scary to be let loose on real
users until tested beyond staging.

Excellent idea. Full kudos for further development!

Regards,

Martin
On 05/01/2012 09:35 PM, Martin wrote:
> How well does btrfs perform across a mix of:
>
> 1 SSD and 1 HDD for 'raid' 1 mirror for both data and metadata?
>
> Similarly so across 2 SSDs and 2 HDDs (4 devices)?
>
> Can multiple (small) SSDs be 'clustered' as one device and then mirrored
> with one large HDD with btrfs directly? (Other than using lvm...)
>
> The idea is to gain the random access speed of the SSDs but have the
> HDDs as backup in case the SSDs fail due to wear...
>
> The usage is to support a few hundred Maildirs + imap for users that
> often have many thousands of emails in the one folder for their inbox...
>
> (And no, the users cannot be trained to clean out their inboxes or to be
> more hierarchically tidy... :-( )
>
> Or is btrfs yet too premature to suffer such use?

From Kconfig:

"Btrfs filesystem (EXPERIMENTAL) Unstable disk format"
                   ^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^

Btrfs is too immature to use in ANY kind of production-like scenario
where you cannot afford to lose a certain amount of data (i.e. be forced
to restore from backup) AND suffer downtime.

I don't think email users are going to be thrilled about the prospect of
"lossy" email.

(Not that the other questions aren't valid.)

Regards,
On Wed, May 2, 2012 at 9:22 AM, Bardur Arantsson <spam@scientician.net> wrote:
> On 05/01/2012 09:35 PM, Martin wrote:
>>
>> How well does btrfs perform across a mix of:
>>
>> 1 SSD and 1 HDD for 'raid' 1 mirror for both data and metadata?
>>
>> The idea is to gain the random access speed of the SSDs but have the
>> HDDs as backup in case the SSDs fail due to wear...

AFAIK only zfs officially supports that configuration, using L2ARC and
SLOG.

>> The usage is to support a few hundred Maildirs + imap for users that
>> often have many thousands of emails in the one folder for their inbox...

Some mail programs use hardlinks, and btrfs has a low limit on the
maximum number of hardlinks in a directory. If you use one of those
programs, better stay away for now.

Plus, from my experience, when using the same disk, btrfs will use up
more disk I/O compared to ext4, so if you're already I/O-starved, better
stick with ext4.

>> Or is btrfs yet too premature to suffer such use?
>
> From Kconfig:
>
> "Btrfs filesystem (EXPERIMENTAL) Unstable disk format"
>                    ^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^
>
> Btrfs is too immature to use in ANY kind of production-like scenario
> where you cannot afford to lose a certain amount of data (i.e. be
> forced to restore from backup) AND suffer downtime.
>
> I don't think email users are going to be thrilled about the prospect
> of "lossy" email.

Oracle fully supports btrfs for production environments:
http://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-en.html
http://www.zdnet.com/blog/open-source/oracles-unbreakable-enterprise-kernel-2-arrives-with-linux-30-kernel-btrfs/10588
http://www.oracle.com/us/technologies/linux/index.html

--
Fajar
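For reference, the zfs configuration mentioned above is expressed by
attaching SSDs as cache (L2ARC) and log (SLOG) vdevs to a pool of HDDs.
A hedged sketch, with placeholder device names:

```shell
# HDD mirror as the main pool (device names are placeholders):
zpool create tank mirror /dev/sdc /dev/sdd

# SSD as read cache (L2ARC):
zpool add tank cache /dev/sda

# SSD (or a partition of one) as separate intent log (SLOG):
zpool add tank log /dev/sdb
```

Note this is caching/logging on top of the HDD pool, not a mirror of
SSDs against HDDs, which is why zfs tolerates SSD wear-out there.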
On 05/02/2012 06:28 AM, Fajar A. Nugraha wrote:
> On Wed, May 2, 2012 at 9:22 AM, Bardur Arantsson <spam@scientician.net> wrote:
>> On 05/01/2012 09:35 PM, Martin wrote:
>>
>> From Kconfig:
>>
>> "Btrfs filesystem (EXPERIMENTAL) Unstable disk format"
>>
>> Btrfs is too immature to use in ANY kind of production-like scenario
>> where you cannot afford to lose a certain amount of data (i.e. be
>> forced to restore from backup) AND suffer downtime.
>>
>> I don't think email users are going to be thrilled about the prospect
>> of "lossy" email.
>
> Oracle fully supports btrfs for production environment:
> http://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-en.html
> http://www.zdnet.com/blog/open-source/oracles-unbreakable-enterprise-kernel-2-arrives-with-linux-30-kernel-btrfs/10588
> http://www.oracle.com/us/technologies/linux/index.html

What does "fully supports" mean? Does it mean that it's actually stable
(considerably more stable than mainline), or does it mean that you can
pay them to help fix a broken FS, for example? Does the included btrfsck
actually work reliably? Is there some non-legalese official statement of
what, exactly, "fully supported" means and whether OL's btrfs falls
under this rubric?

Also, AFAIUI the 3.0.x kernels (which OL claims to use in the release
notes) are woefully outdated wrt. btrfs reliability/stability. Have all
the more recent stability improvements been backported?

Is the OP using Oracle Linux?

Given the semi-regular posts about FS corruption on this list(*) and the
"EXPERIMENTAL" status in the Kconfig, it would be unwise to use btrfs
for anything called "production" (unless you can actually afford
downtime/data loss).

(*) I want to make clear that the developers on this list always seem
incredibly responsive and helpful when these posts occur, but it's still
an indication of not-readiness for "production".
File systems always take quite a long time to mature; it's just the
nature of the beast.

Regards,
On Wed, May 2, 2012 at 12:00 PM, Bardur Arantsson <spam@scientician.net> wrote:
> On 05/02/2012 06:28 AM, Fajar A. Nugraha wrote:
>>> From Kconfig:
>>>
>>> "Btrfs filesystem (EXPERIMENTAL) Unstable disk format"
>>>
>>> Btrfs is too immature to use in ANY kind of production-like scenario
>>> where you cannot afford to lose a certain amount of data (i.e. be
>>> forced to restore from backup) AND suffer downtime.
>>>
>>> I don't think email users are going to be thrilled about the
>>> prospect of "lossy" email.
>>
>> Oracle fully supports btrfs for production environment:
>> http://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-en.html
>> http://www.zdnet.com/blog/open-source/oracles-unbreakable-enterprise-kernel-2-arrives-with-linux-30-kernel-btrfs/10588
>> http://www.oracle.com/us/technologies/linux/index.html
>
> What does "fully supports" mean? Does it mean that it's actually
> stable (considerably more stable than mainline), or does it mean that
> you can pay them to help fix a broken FS, for example? Does the
> included btrfsck actually work reliably? Is there some non-legalese
> official statement of what, exactly, "fully supported" means and
> whether OL's btrfs falls under this rubric?

That question would be best addressed to Oracle directly. Or other
distro vendors supporting btrfs (IIRC SLES also supports it).

> Also, AFAIUI the 3.0.x kernels (which OL claims to use in the release
> notes) are woefully outdated wrt. btrfs reliability/stability. Have
> all the more recent stability improvements been backported?

Chris or other devs from Oracle might be able to comment more on that.
I know that it's quite common for an OSS vendor to have a supported
version of something, based on a version that is more thoroughly tested,
and have another version (in this case the version of btrfs in mainline)
that has newer, bleeding-edge code, with more features, but possibly
also more bugs.

> Is the OP using Oracle Linux?

He didn't say. But he didn't say he WON'T be using Oracle Linux (or
another distro which supports btrfs) either. Plus the kernel can be
installed on top of RHEL/CentOS 5 and 6, so he can easily choose either
the supported version or the mainline version, each with its own
consequences.

> Given the semi-regular posts about FS corruption on this list(*) and
> the "EXPERIMENTAL" status in the Kconfig, it would be unwise to use
> btrfs for anything called "production" (unless you can actually
> afford downtime/data loss).

Fair opinion. Personally I'm quite happy with the version that is
included in Ubuntu Precise (kernel 3.2). It has actually helped me
recover from a bad SSD. It was a somewhat old SSD, and about 1GB (out of
50GB) of data became unreadable (reading directly from the block
device). "btrfs scrub" was helpful enough to help me find out which
files were corrupted, something I wouldn't be able to do with ext4.

--
Fajar
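For anyone wanting to run the same check, the scrub workflow described
above is roughly as follows (the mount point is a placeholder):

```shell
# Start a scrub of the mounted btrfs filesystem:
btrfs scrub start /mnt

# Poll progress and the error counters once it finishes:
btrfs scrub status /mnt
```

Files hit by unrecoverable checksum errors are reported in the kernel
log (dmesg), which is how scrub can point at individual corrupted files.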
Thanks for good comments.

>> Is the OP using Oracle Linux?
>
> He didn't say. But he didn't say he WON'T be using oracle linux (or
> other distro which supports btrfs) either. Plus the kernel can be
> installed on top of RHEL/Centos 5 and 6, so he can easily choose
> either the supported version, or the mainline version, each with its
> own consequences.

For further info: Nope, not using Oracle Linux. Then again, I'm
reasonably distro agnostic. I'm also happy to compile my own kernels.

And the system in question uses a HDD RAID and looks to be IOPS bound
rather than actual IO data rate bound. The large directories certainly
don't help!

It's running postfix + courier-imap at the moment and I'm looking to
revamp it for the gradually ever increasing workload. CPU and RAM usage
is low on average. It serves 2x Gbit networks + internet users (3 NIC
ports).

Hence I'm considering the best way for a revamp/upgrade. SSDs would
certainly help with the IOPS but I'm cautious about SSD wear-out for a
system that constantly thrashes through a lot of data. I could just
throw more disks at it to divide up the IO load.

Multiple pairs of "HDD paired with SSD on md RAID 1 mirror" is a thought
with ext4...

bcache looks ideal to help but also looks too 'experimental'.

And I was hoping that btrfs would help with handling the large
directories and multi-user parallel accesses, especially so for being
'mirrored' by btrfs itself (at the filesystem level) across 4 disks for
example.

Thoughts welcomed.

Is btrfs development at the 'optimising' stage now, or is it all still
very much a 'work in progress'?

Regards,

Martin
Martin posted on Wed, 02 May 2012 15:00:59 +0100 as excerpted:

> Multiple pairs of "HDD paired with SSD on md RAID 1 mirror" is a
> thought with ext4...

FWIW, I was looking at disk upgrades for my (much different use case)
home workstation a few days ago, and the thought of raid1 across SSD and
"spinning rust" drives occurred here, too. It's an interesting idea...
that I too would love some informed commentary on whether it's
practically viable or not.

> And I was hoping that btrfs would help with handling the large
> directories and multi-user parallel accesses, especially so for being
> 'mirrored' by btrfs itself (at the filesystem level) across 4 disks
> for example.

Do you mean 4-way mirroring? btrfs doesn't do that yet.

One thing keeping me off of btrfs ATM (besides it still being rather
more experimental than I had thought from the various news I had read,
before I started looking closely) is that its so-called raid1 mode
really isn't (ATM) raid1 (in the normal sense) at all, but rather,
strict two-way (only) mirroring. If you throw more than two devices at
btrfs and tell it to raid1 them, it'll stagger the two-way mirroring; it
will NOT N-way mirror except for N=2.

My current use case is four aging Seagate 300 gig SATA conventional
"spinning rust" drives, in multiple (mostly) 4-way md/raid1s. I /could/
upgrade drives if I needed to (thus the interest in disk upgrades and
the thought of SSD/rust mixed raid1, mentioned above), but am looking at
continuing to use the existing hardware as well, and aging as they are,
I simply don't trust two-way-only mirroring at this point, as having a
second device fail before I've fully recovered from replacing the first
is a realistic possibility. 3-way would be acceptable, but btrfs doesn't
do that yet.

At least 3-way and possibly N-way mirroring is on the btrfs roadmap, to
be introduced after raid5/6, as it'll build on that code.
The raid5/6 code was in turn roadmapped for after a writing (repairing)
btrfsck, which is now available but still being worked on. So hopefully,
raid5/6 for kernel 3.5, and with luck, 3-way/N-way raid1/mirroring could
land in 3.6.

> Thoughts welcomed.
>
> Is btrfs development at the 'optimising' stage now, or is it all still
> very much a 'work in progress'?

As the above might hint, btrfs is still a work-in-progress. Only since
March has there been a btrfsck that could do any more than report
errors, and using it to actually correct errors still comes with a
warning that it could actually make them worse instead, so that is
discouraged except for testing purposes.

The basic btrfs itself is in somewhat better shape, but its most mature
and well tested code is single device, or multiple stable devices, used
with LOTS of free space left, for "normal" usage, not stuff like
databases where there's lots of modify-a-few-bytes-in-the-middle-of-a-
huge-file activity going on. For that use case, btrfs is "sort of"
stable, stable enough that it's being deployed by some distributions.
The common errors reported now seem to be ENOSPC under filesystem stress
conditions, problems dealing with checksum errors during filesystem
scrub and the like (as with btrfsck, errors are found easily enough;
repairing them remains problematic at times, however), and notably of
interest for mirrored usage (so for both you and I), problems recovering
from loss of one of the two mirror copies. (At least some of this last
one is actually a subcase of the checksum recovery issues, since the
problem often appears as checksum issues on the remaining copy.)

So while btrfs /might/ be argued to be reasonably stable for single
device or multi-device home use where the devices remain stable and
where the level of filesystem stress isn't too great, it's /not/ well
suited to use cases for which (other than striped raid0) RAID would
normally be considered, that is, where the R/Redundant bit comes into
play, since recovery from loss/replacement of a "redundant" device on
btrfs still all too often demonstrates that the device wasn't actually
"redundant" after all, and its loss often results in not only lost data,
but a damaged btrfs that's impossible to fully recover in btrfs'
current state as well.

And as I said above, features are still actively being added -- it's not
yet feature-complete even to the originally defined feature set (the
brand-new and still very much testing-only fixing btrfsck being just one
example, that normally being considered a pretty basic requirement for
any decent filesystem). By the traditional definition, then, btrfs is
"alpha software", not yet feature complete.

Basically, that means btrfs is still what it says on the kernel config
option label: experimental. Under the basic stable-device,
low-filesystem-stress scenario, it's getting close to stable, except for
the still-testing level of btrfsck. For anything beyond that, I'd
definitely say wait for now, but in 2-3 more kernel releases, say toward
end-of-year or early next, the then-current outlook should be much
better, to the point that it should start looking realistic for at least
the early adopters with backups who are willing to risk having to use
them.

Meanwhile, for current usage, it is said that a wise admin always has
backups, no matter the filesystem stability/maturity. But for btrfs in
its current experimental state, that's really not good enough. Ideally,
anyone using btrfs now will keep what they consider their primary data
copy, with its normal backups, on something other than btrfs.
The copy they're testing with on btrfs, then, will be exactly that: a
throw-away testing copy, not even the primary copy of the data with
backups, but a copy made specifically for testing btrfs, one that really
is considered throw-away, possibly missing the next time you go to use
it, but no big deal, because it wasn't the main copy anyway, let alone
the backup, just the throw-away copy one was testing with, that one half
expected to be eaten by that test anyway.

--
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
On 02/05/2012 20:41, Duncan wrote:
> Martin posted on Wed, 02 May 2012 15:00:59 +0100 as excerpted:
>
>> Multiple pairs of "HDD paired with SSD on md RAID 1 mirror" is a
>> thought with ext4...
>
> FWIW, I was looking at disk upgrades for my (much different use case)
> home workstation a few days ago, and the thought of raid1 across SSD
> and "spinning rust" drives occurred here, too. It's an interesting
> idea... that I too would love some informed commentary on whether it's
> practically viable or not.

I've a similar setup, it's 2xSSD + 1xHD, but cannot provide real data
right now. Maybe next month.

One thing I've forgotten to mention is that software raid is very
flexible, and it's very possible to do a raid0 of SSDs and then combine
it in a raid1 with one (or more) traditional HDs.

Given the kind of access (many small files) I'm not sure a raid0 is the
best solution; to be really effective a raid0 needs files (and accesses
to these) bigger than the stripe size.
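The nested layout described above can be sketched with mdadm roughly as
follows (placeholder device names; untested, adapt before use):

```shell
# Stripe the two small SSDs into one fast device:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Mirror the SSD stripe against the one large HDD:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdd

# Then put the filesystem on the outer mirror as usual:
mkfs.ext4 /dev/md1
```

As noted, with a many-small-files workload the raid0 stripe gains little
unless files (and accesses) exceed the stripe size.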
vivo75@gmail.com posted on Thu, 03 May 2012 01:54:01 +0200 as excerpted:

> On 02/05/2012 20:41, Duncan wrote:
>> Martin posted on Wed, 02 May 2012 15:00:59 +0100 as excerpted:
>>
>>> Multiple pairs of "HDD paired with SSD on md RAID 1 mirror" is a
>>> thought with ext4...
>>
>> FWIW, I was looking at disk upgrades for my (much different use case)
>> home workstation a few days ago, and the thought of raid1 across SSD
>> and "spinning rust" drives occurred here, too. It's an interesting
>> idea... that I too would love some informed commentary on whether
>> it's practically viable or not.
>
> I've a similar setup, it's 2xSSD + 1xHD, but cannot provide real data
> right now. Maybe next month.
> One thing I've forgotten to mention is that software raid is very
> flexible, and it's very possible to do a raid0 of SSDs and then
> combine it in a raid1 with one (or more) traditional HDs.
>
> Given the kind of access (many small files) I'm not sure a raid0 is
> the best solution; to be really effective a raid0 needs files (and
> accesses to these) bigger than the stripe size.

What occurred to me is that a lot of the cheaper SSDs aren't
particularly fast at writing, but great at reading. And of course they
have the limited write-cycle issue.

So what I was thinking about was setting up a raid1 with an SSD (or two
in raid0 as you did, or just linear "raid") and the "rust" drive, but
configuring the "rust" drive as write-mostly, since it's so much slower
at reading anyway. With the slower write than read speed of the SSDs,
the write speeds wouldn't be so terribly mismatched between the SSD and
the write-mostly HD, and it should work reasonably well. That was my
thought, anyway.

And I'll agree on the flexibility of software raid, especially md/raid
(as opposed to dm-raid, or the currently extremely limited raid choices
btrfs offers).
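The write-mostly arrangement described above is supported directly by md
raid1. A sketch, with placeholder device names (/dev/sda = SSD,
/dev/sdb = HDD):

```shell
# Devices listed after --write-mostly are flagged write-mostly, so md
# prefers the SSD for reads and only writes go to the HDD:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sda --write-mostly /dev/sdb
```

The flag can also be toggled on a running array via the md sysfs state
file for the member device, without recreating the array.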
It's also often pointed out that Linux md/raid gets far more testing, in
a MUCH MUCH broader testing environment, than any hardware raid could
ever HOPE to match. Plus of course, since hardware-wise it's simply
JBOD, if the hardware goes out there's no need to worry about buying new
hardware that's RAID-arrangement compatible; just throw the disks in any
old system with a sufficient number of attachment points, boot to Linux,
load the old RAIDs, and get back to work. =:^)

SATA was really a boon in that regard, since the master/slave setup of
IDE was significantly inferior to SCSI, but SCSI was so much more
expensive. SATA was thus the great RAID equalizer, bringing what had
been expensive corporate raid solutions down to where ordinary humans
could afford to run RAID on their otherwise reasonably ordinary home
systems or even laptops.

--
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman