Marc MERLIN
2013-Aug-17 15:46 UTC
btrfs raid5 recovery with >1 half failed drive, or multiple drives kicked out at the same time.
I know the raid5 code is still new and being worked on, but I was
curious.
With md raid5, I can do this:
mdadm /dev/md7 --replace /dev/sde1
This is cool because it lets you replace a drive with bad sectors even when
at least one other drive in the array also has bad sectors: the md layer
will read all drives for each stripe and write the result to the new drive.
The nice part is that it'll take the working parts of each drive, and as
long as no 2 drives have an unreadable sector for the same stripe, it can
recover.
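To illustrate what I mean by per-stripe recovery, here is a rough sketch in
Python (not md's actual code; read_chunk() is a made-up helper and the layout
is simplified to single XOR parity). The point is that a stripe is only lost
if two or more members are unreadable at the same offset:

# Sketch of the per-stripe recovery described above (not md's code).
# Single-parity RAID5: any one missing chunk in a stripe can be rebuilt
# by XORing the remaining chunks of that stripe together.

def xor_chunks(chunks):
    """XOR equal-sized byte chunks together (RAID5 parity math)."""
    out = bytearray(chunks[0])
    for chunk in chunks[1:]:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def rebuild_stripe(read_chunk, members, stripe):
    """Return this stripe's chunks, reconstructing at most one bad one.

    read_chunk(member, stripe) is a hypothetical helper that returns the
    chunk's bytes or raises IOError on an unreadable sector.
    """
    chunks, bad = {}, []
    for m in members:
        try:
            chunks[m] = read_chunk(m, stripe)
        except IOError:
            bad.append(m)
    if not bad:
        return chunks                      # every member readable
    if len(bad) == 1:                      # one bad sector: rebuild from the rest
        chunks[bad[0]] = xor_chunks(list(chunks.values()))
        return chunks
    raise IOError("stripe %d: %d members unreadable, cannot recover"
                  % (stripe, len(bad)))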
I was curious: how does btrfs handle this situation, and more generally
drive failures, spurious multiple-drive failures due to a bus blip where
you force the drives back online, and so forth?
Thanks,
Marc
--
"A mouse is a device used to point at the xterm you want to type in" -
A.S.R.
Microsoft is to operating systems ....
.... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/