All,

I am using BtrFS at home for testing on my /home filesystem. I started out with one disk (mkfs.btrfs /dev/sdb1) and then added another disk (btrfs device add /dev/sda2 /home; btrfs filesystem balance /home).

I then wanted to remove my second disk, but was unable to do so; I got:

    btrfs: unable to go below two devices on raid1

I was told by members of the BtrFS IRC channel that this was because my metadata was RAID1'd.

To resolve this situation, I added an 8GB flash drive (my metadata was 5.99GB) and attempted to remove the drive again. It ran for a (long) while and eventually returned me to the prompt. It did not remove the disk -- so I ran it again... and again... and again...

I am now at the point where "btrfs device delete <blah> /home" returns instantly but has no effect. The following is written to my kernel message buffer whenever I try to remove ANY device from /home:

    btrfs: unable to remove the only writeable device

More information can be found in the attachment.

I am still on my quest to remove a device from my BtrFS pool. Does anyone have any advice?

Thanks,
Roy Keene
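In command form, the history above is roughly the following (a sketch; the device names and mount point are the ones from the report, and "btrfs filesystem df" is simply the usual way to confirm whether metadata really is RAID1):

    # One-disk filesystem, then a second device added and rebalanced
    mkfs.btrfs /dev/sdb1
    mount /dev/sdb1 /home
    btrfs device add /dev/sda2 /home
    btrfs filesystem balance /home

    # Confirm the metadata profile; RAID1 metadata needs at least
    # two devices, hence "unable to go below two devices on raid1"
    btrfs filesystem df /home

    # The removal that then fails
    btrfs device delete /dev/sda2 /home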
Hello.

Sorry if it's already fixed, but with 2.6.35.6-48.fc14.x86_64, when I do

    btrfs device delete /dev/blabla /btrfs

the kernel moves everything except 1 gigabyte off the device, but then fails to remove it, logging "btrfs: unable to remove the only writeable device" to dmesg.

What's even more interesting: it does this with all 3 of my devices, and I clearly have enough free space to eject one drive. What am I doing wrong?

Thanks.

P.S. For reference, what seems to be the same bug, but from September (quoted below).

On Thu, Sep 9, 2010 at 2:04 AM, Roy Keene <btrfs@rkeene.org> wrote:
> I am to the point where "btrfs device delete <blah> /home" returns
> instantly, but has no effect. The following is written to my kernel message
> buffer whenever I try to remove ANY device from /home:
>     btrfs: unable to remove the only writeable device
> [...]
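A quick way to see where the leftover ~1 gigabyte lives is the per-device and per-profile summaries (a sketch; /btrfs is the mount point from the report, and passing a path to "show" assumes a reasonably recent btrfs-progs):

    # Per-device allocation; the device that failed to delete will
    # typically still show about 1GB allocated on it
    btrfs filesystem show /btrfs

    # Allocation broken down by data/metadata/system and RAID profile
    btrfs filesystem df /btrfs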
On Mon, Dec 13, 2010 at 2:23 PM, Paul Komkoff <i@stingr.net> wrote:
> Hello.

I'm curious... why is everyone ignoring me?

Anyway. While trying to beat btrfs into submission I managed to make it fill my dmesg with this:

    [3301790.155343] block group 6435937189888 has 1073741824 bytes, 733720576 used 0 pinned 0 reserved
    [3301790.155349] entry offset 6436670910464, bytes 340021248, bitmap no
    [3301790.155697] block group has cluster?: no
    [3301790.155700] 0 blocks of free space at or bigger than bytes is
    [3301790.155705] block group 6437010931712 has 1073741824 bytes, 366215168 used 0 pinned 0 reserved
    [3301790.155710] entry offset 6437377146880, bytes 707526656, bitmap no
    [3301790.156055] block group has cluster?: no
    [3301790.156058] 0 blocks of free space at or bigger than bytes is
    ...

... and then, as before, it fails to remove the device.
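For anyone who lands here later: on kernels with balance filters (merged well after this thread, around 3.3, so this is an assumption relative to the kernels discussed above), the RAID1 metadata can be converted down first, which removes the two-device requirement before attempting the delete. /mnt and /dev/sdX below are placeholders:

    # Convert metadata and system chunks from RAID1 to single; -f is
    # required when reducing metadata redundancy
    btrfs balance start -mconvert=single -sconvert=single -f /mnt

    # With metadata no longer RAID1, retry removing the device
    btrfs device delete /dev/sdX /mnt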