Vincent
2013-May-13 14:15 UTC
Remove a physically failed device from a Btrfs "single-raid" using partitions
Hello,

I am on Ubuntu Server 13.04 with Linux 3.8.

I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard
drives has failed; I mean it's physically dead.

:~$ sudo btrfs filesystem show
Label: none  uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
        Total devices 5 FS bytes used 226.90GB
        devid    4 size 37.27GB used 31.01GB path /dev/sdd1
        devid    3 size 37.27GB used 31.01GB path /dev/sdc1
        devid    2 size 37.31GB used 31.00GB path /dev/sdb1
        devid    1 size 139.73GB used 132.02GB path /dev/sda3
        *** Some devices missing

Many of the tutorials I found never mention simply deleting a disk that
can no longer be mounted from a "single-raid" (where the data doesn't
matter; I used only the "-d single" option, which leaves metadata at its
mirrored default).

I've read this page http://www.howtoforge.com/a-beginners-guide-to-btrfs
up to the "8 Adding/Deleting Hard Drives To/From A btrfs File System"
section, but it wants me to mount the drive, and the drive is dead.

When my Btrfs filesystem is not mounted and I do:
:~$ sudo btrfs device delete missing
btrfs device delete: too few arguments
or
:~$ sudo btrfs device delete missing /media/single-raid/
nothing happens.

If I try to mount the failed device, and remove /dev/sde1 from the
mountpoint, my console stops responding.

I've also read the official documentation
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Removing_devices
and used degraded mode: mount -o degraded /dev/sda3 /media/single-raid/

The fstab line is:
/dev/sda3 /media/single-raid/ btrfs device=/dev/sda3,device=/dev/sdb1,device=/dev/sdc1,device=/dev/sdd1 0 2

Then :~$ sudo btrfs filesystem show gives:
Label: none  uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
        Total devices 5 FS bytes used 226.30GB
        devid    4 size 37.27GB used 31.01GB path /dev/sdd1
        devid    3 size 37.27GB used 31.01GB path /dev/sdc1
        devid    2 size 37.31GB used 31.00GB path /dev/sdb1
        devid    1 size 139.73GB used 132.02GB path /dev/sda3
        *** Some devices missing

I don't understand why I can't remove the failed device even in
degraded mode. Could you help me, please?
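For reference, the wiki recipe this message refers to boils down to the two
commands sketched below, reusing the mount point and a surviving member from
the mail above; whether the second step can actually succeed on a -d single
filesystem with a member missing is exactly the question this thread raises.

    # Mount the filesystem from any surviving member, tolerating the
    # missing device (mount point taken from the thread):
    mount -o degraded /dev/sdb1 /media/single-raid

    # Ask btrfs to drop the device that is no longer present; with
    # -d single this would have to relocate chunks that may no longer
    # exist anywhere, which is why it is not expected to simply work:
    btrfs device delete missing /media/single-raid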
Harald Glatt
2013-May-13 14:29 UTC
Re: Remove a physically failed device from a Btrfs "single-raid" using partitions
On Mon, May 13, 2013 at 4:15 PM, Vincent <vincent@influence-pc.fr> wrote:
> [...]
> I don't understand why I can't remove the failed device even in
> degraded mode. Could you help me, please?

If you have used -d single, it means that the data that was on the drive
that failed is now gone. I think btrfs refuses to remove devices if it
means data loss, but I could be wrong here.
Vincent
2013-May-13 14:33 UTC
Re: Remove a physically failed device from a Btrfs "single-raid" using partitions
On 13/05/2013 16:29, Harald Glatt wrote:
> On Mon, May 13, 2013 at 4:15 PM, Vincent <vincent@influence-pc.fr> wrote:
>> [...]
>> I don't understand why I can't remove the failed device even in
>> degraded mode. Could you help me, please?
>
> If you have used -d single, it means that the data that was on the drive
> that failed is now gone. I think btrfs refuses to remove devices if it
> means data loss, but I could be wrong here.

I have no problem with the data loss; I just want a kind of shared
scratch area. But the data on the other drives is still there, so why
can't I get it back and keep the filesystem alive?
Harald Glatt
2013-May-13 14:41 UTC
Re: Remove a physically failed device from a Btrfs "single-raid" using partitions
On Mon, May 13, 2013 at 4:33 PM, Vincent <vincent@influence-pc.fr> wrote:
> [...]
> I have no problem with the data loss; I just want a kind of shared
> scratch area. But the data on the other drives is still there, so why
> can't I get it back and keep the filesystem alive?

I think it's a safety mechanism; you'll have to wait until a dev
responds. Otherwise you can always rsync the data into a new filesystem
and replace the old one if you can't wait.
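A minimal sketch of the rsync route suggested here, assuming a spare disk is
available for a fresh filesystem; /dev/sdf1 and /mnt/new are made-up names
for the example, and files whose extents lived on the dead disk should come
back as read errors that rsync reports while continuing with the rest.

    # New, empty filesystem on a spare disk (device name is hypothetical):
    mkfs.btrfs /dev/sdf1
    mkdir -p /mnt/new
    mount /dev/sdf1 /mnt/new

    # Mount the damaged volume with the missing member tolerated, then
    # copy whatever is still readable; -aHAX preserves permissions,
    # hard links, ACLs and xattrs:
    mount -o degraded /dev/sdb1 /media/single-raid
    rsync -aHAX /media/single-raid/ /mnt/new/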
Vincent
2013-May-13 15:30 UTC
Re: Remove a physically failed device from a Btrfs "single-raid" using partitions
Shit, I've followed the official documentation
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Replacing_failed_devices

With, as it explains:
mkfs.btrfs -d single /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mount -o degraded /dev/sda3 /media/single-raid/
btrfs device delete missing /media/single-raid/

And now all the data on ALL the drives is gone. How can I retrieve the
data on my Btrfs-formatted partition now?!
Chris Murphy
2013-May-13 17:21 UTC
Re: Remove a physically failed device from a Btrfs "single-raid" using partitions
On May 13, 2013, at 9:30 AM, Vincent <vincent@influence-pc.fr> wrote:

> Shit, I've followed the official documentation
> https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Replacing_failed_devices
>
> With, as it explains:
> mkfs.btrfs -d single /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1

You did this originally, not just a couple hours ago, right? Because if
you did it again, of course you've deleted all of your data by creating
a new file system.

> mount -o degraded /dev/sda3 /media/single-raid/

This should work, and you should be able to rsync or copy the data off
the file system onto a new one, except for data that's on the failed
drive. I'm not sure what sort of errors you get when copying data that
the file system knows isn't available.

> btrfs device delete missing /media/single-raid/

I don't expect this to apply to either -d single or -d raid0. It doesn't
actually have any benefit or meaning. I'd say at best, my expectation is
you could copy the surviving data, in degraded mode, out of the btrfs
file system onto a new file system. I do not expect it's possible to
replace the dead drive and go happily along with just the missing data
being lost. You'll have to create a new file system at some point.

> And now all the data on ALL the drives is gone. How can I retrieve the
> data on my Btrfs-formatted partition now?!

If you really formatted it, I'd expect this. If that was an example of
the original mkfs, I'd expect degraded mode to allow access to the data
on the working drives. Maybe read/write is possible, but personally I'd
prefer a file system in such a degraded state to be read-only, and to
get the surviving data off. But I am not a Btrfs developer.

Chris Murphy
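The read-only degraded mount preferred in the reply above can be requested
explicitly; this is just a sketch combining two standard mount options with
the thread's mount point and a hypothetical destination, not something posted
in the thread.

    # Mount the surviving members read-only so nothing can make the
    # situation worse, then copy the remaining data somewhere safe:
    mount -o ro,degraded /dev/sdb1 /media/single-raid
    cp -a /media/single-raid/. /mnt/new/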
Vincent
2013-May-14 15:30 UTC
Re: Remove a physically failed device from a Btrfs "single-raid" using partitions
Why is it easier to extend a Btrfs filesystem than to reduce it, when the
"single-raid" is supposed to be made for exactly that?
Chris Murphy
2013-May-14 16:30 UTC
Re: Remove a physically failed device from a Btrfs "single-raid" using partitions
On May 14, 2013, at 9:30 AM, Vincent <vincent@influence-pc.fr> wrote:

> Why is it easier to extend a Btrfs filesystem than to reduce it, when the
> "single-raid" is supposed to be made for exactly that?

Are you referring to normal operation or to your degraded case? In
normal operation, yes, you can reduce it within the limits of the
physical devices.

Expecting anything other than getting the surviving data off a -d single
volume is, I think, unreasonable. If Btrfs even lets you extract the
surviving data off the -d single volume, as far as I'm aware that's
better than anything else presently offers.

Chris Murphy
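For the normal-operation case mentioned above, reducing a healthy multi-device
btrfs is a single command: deleting a live device first migrates its chunks
onto the remaining members, which is why it needs enough free space on them
and why it cannot do the same for a device that has already died. The device
and mount point below are reused from this thread purely as an illustration.

    # Remove a still-working device from a mounted, healthy filesystem;
    # its data is relocated to the other members before it is released:
    btrfs device delete /dev/sdd1 /media/single-raid

    # Confirm the device count and space usage afterwards:
    btrfs filesystem show
    btrfs filesystem df /media/single-raid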