Hi,

Just installed 3.9.0-rc2 and the latest btrfs-progs. The filesystem is a 4-disk raid1 array.

First, I did the following: `btrfs bal start -dconvert=raid5,usage=1` to convert the mostly empty chunks. This resulted in a lot of allocated space (tens of gigs), with only a few hundred megs used.

I did `btrfs bal start -dusage=75` to clean things up.

Then I ran `btrfs bal start -dconvert=raid5,soft`. I noticed that the difference between total and used for raid5 kept growing. My guess is that it is taking 1 raid1 chunk (2x1 GB of disk space, 1 GB of data) and moving it to 1 raid5 chunk (4 GB of disk space, 3 GB of data), leaving all chunks 33% used.

This is what 3 calls of `btrfs file df /` look like a few minutes apart, with the balance still running:

Data, RAID1: total=807.00GB, used=805.70GB
Data, RAID5: total=543.00GB, used=192.81GB
System, RAID1: total=32.00MB, used=192.00KB
Metadata, RAID1: total=6.00GB, used=3.54GB
--
Data, RAID1: total=800.00GB, used=798.70GB
Data, RAID5: total=564.00GB, used=199.30GB
System, RAID1: total=32.00MB, used=192.00KB
Metadata, RAID1: total=6.00GB, used=3.53GB
--
Data, RAID1: total=795.00GB, used=793.70GB
Data, RAID5: total=579.00GB, used=204.81GB
System, RAID1: total=32.00MB, used=192.00KB
Metadata, RAID1: total=6.00GB, used=3.54GB

Remco
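As a recap of the sequence above, a minimal sketch of the commands involved; the mount point /mnt and the 60-second polling interval are assumptions, not taken from the report:

# Step 1: convert only the mostly-empty data chunks (at most 1% used) to raid5.
btrfs balance start -dconvert=raid5,usage=1 /mnt

# Step 2: rewrite data chunks that are at most 75% full, compacting the slack.
btrfs balance start -dusage=75 /mnt

# Step 3: convert the remaining data chunks; "soft" skips chunks already in raid5.
btrfs balance start -dconvert=raid5,soft /mnt

# Poll allocation (total) vs. usage (used) while the balance runs.
watch -n 60 'btrfs balance status /mnt; btrfs filesystem df /mnt'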
On Mon, Mar 11, 2013 at 09:15:44PM +0100, Remco Hosman wrote:
> First, I did the following: `btrfs bal start -dconvert=raid5,usage=1` to convert the mostly empty chunks.
> This resulted in a lot of allocated space (tens of gigs), with only a few hundred megs used.

Matches my expectation: converting to the new profile needs to allocate full 1G chunks, but the usage=1 filter allows them to be filled only partially. After this step, several ~empty raid1 chunks should disappear.

> I did `btrfs bal start -dusage=75` to clean things up.
>
> Then I ran `btrfs bal start -dconvert=raid5,soft`.
> I noticed that the difference between total and used for raid5 kept growing.

Do you remember if this was temporary, or if the difference was still unexpectedly big after the whole operation finished?

> My guess is that it is taking 1 raid1 chunk (2x1 GB of disk space, 1 GB of data)
> and moving it to 1 raid5 chunk (4 GB of disk space, 3 GB of data),
> leaving all chunks 33% used.

Why 3G of data in the raid5 case? I assume you mean the actually used data, which should be the same as in the raid1 case but spread over 3x 1GB chunks, leaving them 33% utilized. That makes sense, but it is not clear from your description.

> This is what 3 calls of `btrfs file df /` look like a few minutes apart, with the balance still running:
> Data, RAID1: total=807.00GB, used=805.70GB
> Data, RAID5: total=543.00GB, used=192.81GB
> --
> Data, RAID1: total=800.00GB, used=798.70GB
> Data, RAID5: total=564.00GB, used=199.30GB
> --
> Data, RAID1: total=795.00GB, used=793.70GB
> Data, RAID5: total=579.00GB, used=204.81GB

raid1 numbers going down, raid5 going up, all ok.

david
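To make the per-chunk accounting discussed here concrete, a back-of-the-envelope sketch; it assumes 1 GB chunks and a raid5 stripe spanning all 4 devices, with the device count being the only figure taken from the thread:

#!/bin/sh
# Per-chunk accounting for a 4-device array, assuming a raid5 data chunk
# allocates one 1 GB stripe on every device and one device's worth holds parity.
ndev=4

# raid1: 1 GB of data is mirrored, costing 2 GB of raw disk.
echo "raid1 chunk: raw=2 GB, usable=1 GB"

# raid5: raw = ndev GB, usable = (ndev - 1) GB, so relocating the 1 GB of
# data from a raid1 chunk leaves the new allocation roughly 1/(ndev-1) used.
raw=$ndev
usable=$((ndev - 1))
echo "raid5 chunk: raw=${raw} GB, usable=${usable} GB, ~$((100 / usable))% used after relocation"

Either reading of the numbers (one 4-device chunk that is a third full, or the same data spread over chunks at 33% utilization) produces the growing gap between total and used visible in the df output above.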
On 15-3-2013 13:47, David Sterba wrote:
> On Mon, Mar 11, 2013 at 09:15:44PM +0100, Remco Hosman wrote:
>> First, I did the following: `btrfs bal start -dconvert=raid5,usage=1` to convert the mostly empty chunks.
>> This resulted in a lot of allocated space (tens of gigs), with only a few hundred megs used.
> Matches my expectation: converting to the new profile needs to allocate
> full 1G chunks, but the usage=1 filter allows them to be filled only partially.
>
> After this step, several ~empty raid1 chunks should disappear.

It did not only happen when I added usage=1, but also without it.

>> I did `btrfs bal start -dusage=75` to clean things up.
>>
>> Then I ran `btrfs bal start -dconvert=raid5,soft`.
>> I noticed that the difference between total and used for raid5 kept growing.
> Do you remember if this was temporary, or if the difference was still
> unexpectedly big after the whole operation finished?

It did not finish; the filesystem did not have that much free space, so I cancelled it (even before it ran out of space) and ran `btrfs bal start -dusage=1` to clean up the unused space.

>> My guess is that it is taking 1 raid1 chunk (2x1 GB of disk space, 1 GB of data)
>> and moving it to 1 raid5 chunk (4 GB of disk space, 3 GB of data),
>> leaving all chunks 33% used.
> Why 3G of data in the raid5 case? I assume you mean the actually used
> data, which should be the same as in the raid1 case but spread over 3x
> 1GB chunks, leaving them 33% utilized. That makes sense, but it is not
> clear from your description.

I assumed that with raid5, btrfs allocates 1 GB on each disk and uses 1 for parity, giving 3 GB of data in 4 GB of disk space.

>> This is what 3 calls of `btrfs file df /` look like a few minutes apart, with the balance still running:
>> Data, RAID1: total=807.00GB, used=805.70GB
>> Data, RAID5: total=543.00GB, used=192.81GB
>> --
>> Data, RAID1: total=800.00GB, used=798.70GB
>> Data, RAID5: total=564.00GB, used=199.30GB
>> --
>> Data, RAID1: total=795.00GB, used=793.70GB
>> Data, RAID5: total=579.00GB, used=204.81GB
> raid1 numbers going down, raid5 going up, all ok.
>
> david
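For completeness, a minimal sketch of the abort-and-clean-up step described above; the mount point /mnt is an assumption:

# Stop the running conversion before the filesystem runs out of space.
btrfs balance cancel /mnt

# Rewrite data chunks that are at most 1% used; their contents are compacted
# elsewhere and the nearly-empty chunks are returned to unallocated space.
btrfs balance start -dusage=1 /mnt

# Confirm that the gap between total (allocated) and used has shrunk.
btrfs filesystem df /mnt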