On 2/27/2015 8:00 PM, Marko Vojinovic wrote:
> And this is why I don't like LVM to begin with. If one of the drives
> dies, you're screwed not only for the data on that drive, but even for
> data on remaining healthy drives.

With classic LVM, you were supposed to use RAID for your PVs. The new
LVM in 6.3+ has integrated RAID at the LV level; you just have to
declare all your LVs with the appropriate RAID levels.

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
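For illustration, a minimal sketch of both approaches; the device
names, the VG name vg0, and the sizes are placeholders, not from the
thread:

## classic LVM: build the PV on top of an md RAID array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0

## LVM in 6.3+: declare the RAID level per-LV instead
lvcreate --type raid1 -m 1 -L 100G -n data vg0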
On Fri, Feb 27, 2015 at 9:44 PM, John R Pierce <pierce at hogranch.com> wrote:
> On 2/27/2015 8:00 PM, Marko Vojinovic wrote:
>> And this is why I don't like LVM to begin with. If one of the drives
>> dies, you're screwed not only for the data on that drive, but even for
>> data on remaining healthy drives.
>
> With classic LVM, you were supposed to use RAID for your PVs. The new
> LVM in 6.3+ has integrated RAID at the LV level; you just have to
> declare all your LVs with the appropriate RAID levels.

I think the mirror segment type has been available since the inception
of LVM2; it's now legacy (but still available). The current type since
CentOS 6.3 is raid1. But yes, for anything raid4 and up you previously
had to create the array with mdadm or use hardware RAID (which of
course you can still do; most people still prefer managing software
RAID with mdadm rather than with LVM's tools).

--
Chris Murphy
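For the raid4+ case, a sketch under the same placeholder names (vg0
would need enough PVs to hold the stripes plus parity):

## lvm2 raid5 LV (CentOS 6.3+); -i counts data stripes, excluding parity
lvcreate --type raid5 -i 3 -L 300G -n data vg0

## the older mdadm route: make the array first, then put the PV on it
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
pvcreate /dev/md0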
And then Btrfs (no LVM):

mkfs.btrfs -d single /dev/sd[bcde]
mount /dev/sdb /mnt/bigbtr
cp -a /usr /mnt/bigbtr

Unmount. Poweroff. Kill the 3rd of 4 drives. Poweron.

mount -o degraded,ro /dev/sdb /mnt/bigbtr  ## degraded,ro is required or mount fails
cp -a /mnt/bigbtr/usr/ /mnt/btrfs          ## copy to a different volume

No dmesg errors. There was a bunch of I/O errors, but only when it was
trying to copy data on the 3rd drive, and the copy continued anyway.

# du -sh /mnt/btrfs/usr
2.5G	usr

Exactly 1GB was on the missing drive, so I recovered everything that
wasn't on that drive.

One gotcha that applies to all three fs's that I'm not testing: in-use
drive failure. I'm simulating drive failure by first cleanly unmounting
and powering off, which is the super-ideal case. How the file system,
and anything underneath it (LVM and maybe RAID), handles a drive
failure while in use is a huge factor.

Chris Murphy
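For reference, one way to see per-device usage ahead of the test, and
therefore roughly what a single drive's loss would cost with -d single
(same mount point as above):

btrfs filesystem show /mnt/bigbtr  ## lists each device and bytes used on it
btrfs filesystem df /mnt/bigbtr    ## shows data/metadata allocation and profiles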