Hello!

I created a btrfs volume composed of two partitions:

# mkfs.btrfs -m dup -d single /dev/sdd1 /dev/sde1

Metadata is mirrored on each device; data chunks are scattered more or less
randomly, each on only one disk.

a) If one disk fails, is there any chance of data recovery?
b) If not, is there any advantage over a raid0 configuration?

Thanks and regards,

Florian
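The profiles that mkfs actually created can be checked once the filesystem is
mounted; a minimal sketch, assuming the hypothetical mount point /mnt/data:

# btrfs filesystem df /mnt/data

This prints one line per block-group type together with its profile (e.g.
"Data, single" and either "Metadata, DUP" or "Metadata, RAID1"), so it shows
at a glance which layout is really in use.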
On Jul 17, 2013, at 3:24 PM, Florian Lindner <mailinglists@xgm.de> wrote:

> a) If one disk fails, is there any chance of data recovery?

Slim to none, it seems, so far. Maybe with more specialized tools.

> b) If not, is there any advantage over a raid0 configuration?

raid0 allocates equal-sized chunks on every device, so in order to maximize
usable space the devices need to be the same size; otherwise the extra space
is wasted (never used). single allocates chunks based on availability, so
it's possible to use all of the space on all devices even if they're
significantly different in size.

Chris
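A quick worked example of that space difference, using made-up sizes: with a
1 TB and a 2 TB device, raid0 can only allocate chunks in equal pairs, so
roughly 2 TB is usable and about 1 TB on the larger device stays idle; with
-d single, chunks go wherever free space remains, so close to the full 3 TB
is usable (ignoring metadata overhead).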
On 17/07/13 14:24, Florian Lindner wrote:
> Metadata is mirrored on each device; data chunks are scattered more or
> less randomly, each on only one disk.
>
> a) If one disk fails, is there any chance of data recovery? b) If not,
> is there any advantage over a raid0 configuration?

I was using that exact configuration when one disk failed (2 x 2TB Seagate
drives). The data was backed up in multiple ways: a lot of it was in source
control systems and the remainder was generated information. Essentially the
risk was worth taking, since nothing would be lost.

One drive gave up mechanically - the controller still worked, and it was fun
running SMART tests and having huge amounts of red text show up in response.
The initial symptoms were that various programs crashed or didn't launch,
with no diagnostics. That is typical behaviour for Linux apps when they get
I/O errors on reads and writes. Eventually I figured out the problem, bought
a new 4TB drive to replace both originals, and started recovery.

Out of ~750GB of original data I could recover just over 2GB, which
represented files whose entire contents were on the unfailed drive. Having
the metadata duplicated was, however, immensely helpful: I could easily get
a list of all directories and filenames, and used that to guide what data I
recovered/regenerated/reinstalled/checked out.

Meanwhile, the performance improvement from having the data scattered across
both drives was noticeable. I would often see it roughly evenly balanced in
iostat.

Roger
On Wed, Jul 17, 2013 at 11:24:22PM +0200, Florian Lindner wrote:
> I created a btrfs volume composed of two partitions:
>
> # mkfs.btrfs -m dup -d single /dev/sdd1 /dev/sde1

DUP does not work on multiple devices; I assume you mean RAID1.

> Metadata is mirrored on each device; data chunks are scattered more or
> less randomly, each on only one disk.
>
> a) If one disk fails, is there any chance of data recovery?
> b) If not, is there any advantage over a raid0 configuration?

Regarding data layout: raid0 always stripes data across multiple devices in
small (64k) pieces, while with single this does not happen every time and
the data are spread in larger contiguous chunks, each on a single device.
Rewriting data makes the placement unpredictable, so it may end up
effectively random.

The missing data blocks return an IO error, and the valid data can be read.

david
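How the allocation has been spread over the devices can be inspected
directly; a sketch, reusing the device name from the original command:

# btrfs filesystem show /dev/sdd1

This lists every device in the filesystem with its size and how much space
has been allocated to chunks on it (the per-devid "used" figure), which
makes the availability-based placement of the single profile easy to observe.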
On Jul 18, 2013, at 11:33 AM, David Sterba <dsterba@suse.cz> wrote:

> The missing data blocks return an IO error, and the valid data can be read.

Sounds like if I have a degraded 'single' volume, I can simply cp or rsync
everything from that volume to another, and I'll end up with a successful
copy of the surviving data. True?

Chris Murphy
On 18/07/13 13:05, Chris Murphy wrote:
> Sounds like if I have a degraded 'single' volume, I can simply cp or
> rsync everything from that volume to another, and I'll end up with a
> successful copy of the surviving data. True?

Not quite. I did it with cp -a. Because all the metadata survived, cp would
create the target file, but then get an I/O error on opening/reading the
source file. It would print an error message, but not delete the empty
target file. Consequently I ended up with loads of zero-length files I had
to go in and delete afterwards.

I briefly looked for an rsync option to keep going on source I/O errors but
didn't find one.

Roger
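A minimal cleanup sketch for those leftovers, assuming the copy landed under
a hypothetical /mnt/rescue and that the tree held no files that were
legitimately empty to begin with:

# find /mnt/rescue -type f -empty -delete

This removes every zero-length regular file left behind by the failed
copies; if genuinely empty files might exist, drop -delete first and review
the printed list before deleting anything.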
On Thu, Jul 18, 2013 at 02:59:58PM -0700, Roger Binns wrote:
> On 18/07/13 13:05, Chris Murphy wrote:
> > Sounds like if I have a degraded 'single' volume, I can simply cp or
> > rsync everything from that volume to another, and I'll end up with a
> > successful copy of the surviving data. True?
>
> Not quite. I did it with cp -a. Because all the metadata survived, cp
> would create the target file, but then get an I/O error on opening/reading
> the source file. It would print an error message, but not delete the
> empty target file. Consequently I ended up with loads of zero-length files
> I had to go in and delete afterwards.

The odds of having an undamaged file from that process are much better for
single than for RAID-0 (and aren't affected by having tools which will cope
better with IO errors -- although you'll get more of each damaged file if
you do). As the file size goes up, the odds of it being damaged increase.

   Hugo.

> I briefly looked for an rsync option to keep going on source I/O errors
> but didn't find one.
>
> Roger

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- I am an opera lover from planet Zog. Take me to your lieder. ---
On Thu, Jul 18, 2013 at 02:05:31PM -0600, Chris Murphy wrote:
> On Jul 18, 2013, at 11:33 AM, David Sterba <dsterba@suse.cz> wrote:
> >
> > The missing data blocks return an IO error, and the valid data can be read.
>
> Sounds like if I have a degraded 'single' volume, I can simply cp or
> rsync everything from that volume to another, and I'll end up with a
> successful copy of the surviving data. True?

True.