Greetings,

until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.

As space was getting short I wanted to extend the filesystem by two
additional drives I had lying around, both 1.0 TiB in size.

Knowing little about the btrfs RAID implementation, I thought I had to
switch to RAID10 mode, which I was told was not currently possible
(though I later found out that it now is). Then I read a mailing list
post [1] basically saying that, in the special case of four disks,
btrfs-raid1 behaves exactly like RAID10.

So I added the two new disks to my existing filesystem:

  $ btrfs device add /dev/sde1 /dev/sdf1 /mnt/archive

and as the capacity reported by 'btrfs filesystem df' did not increase,
I started a balancing run:

  $ btrfs filesystem balance start /mnt/archive

While waiting for the balancing run to finish (it is taking much longer
than I expected; still running) I found out that as of kernel 3.3,
changing the RAID level (aka restriping) is now possible: [2].

I have two questions now:

1.) Is there really no difference between btrfs-raid1 and btrfs-raid10
    in my case (2 x 2 TiB, 2 x 1 TiB disks)? Same degree of fault
    tolerance?

2.) Summing up the capacities reported by 'btrfs filesystem df' I only
    get ~2.25 TiB for my filesystem. Is that a realistic net size for
    3 TiB gross?

  $ btrfs filesystem df /mnt/archive
  Data, RAID1: total=2.10TB, used=1.68TB
  Data: total=8.00MB, used=0.00
  System, RAID1: total=40.00MB, used=324.00KB
  System: total=4.00MB, used=0.00
  Metadata, RAID1: total=112.50GB, used=3.21GB
  Metadata: total=8.00MB, used=0.00

Thanks in advance for any advice!

Regards,

lynix

[1] http://www.spinics.net/lists/linux-btrfs/msg15867.html
[2] https://lkml.org/lkml/2012/1/17/381
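For reference, restriping on kernel 3.3+ goes through the new balance
filters; a conversion to RAID10 would look roughly like the following.
This is a sketch only, assuming a btrfs-progs recent enough to know the
'balance start'/'balance status' subcommands, and note that only one
balance can run on a filesystem at a time:

  # Convert both data and metadata chunks to RAID10 in one pass:
  $ btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/archive

  # Check progress of a running balance:
  $ btrfs balance status /mnt/archive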
On Sun, May 06, 2012 at 04:48:48PM +0200, Alexander Koch wrote:
> Greetings,
>
> until yesterday I was running a btrfs filesystem across two 2.0 TiB
> disks in RAID1 mode for both metadata and data without any problems.
>
> As space was getting short I wanted to extend the filesystem by two
> additional drives I had lying around, both 1.0 TiB in size.
>
> Knowing little about the btrfs RAID implementation, I thought I had to
> switch to RAID10 mode, which I was told was not currently possible
> (though I later found out that it now is). Then I read a mailing list
> post [1] basically saying that, in the special case of four disks,
> btrfs-raid1 behaves exactly like RAID10.
>
> So I added the two new disks to my existing filesystem:
>
>   $ btrfs device add /dev/sde1 /dev/sdf1 /mnt/archive
>
> and as the capacity reported by 'btrfs filesystem df' did not increase,

   It won't -- "btrfs fi df" reports what's been allocated out of the
raw pool. To check that the disks have been added, you need "btrfs fi
show" (no parameters).

> I started a balancing run:
>
>   $ btrfs filesystem balance start /mnt/archive
>
> While waiting for the balancing run to finish (it is taking much longer
> than I expected; still running) I found out that as of kernel 3.3,
> changing the RAID level (aka restriping) is now possible: [2].

   It is indeed.

> I have two questions now:
>
> 1.) Is there really no difference between btrfs-raid1 and btrfs-raid10
>     in my case (2 x 2 TiB, 2 x 1 TiB disks)? Same degree of fault
>     tolerance?

   There's the same degree of fault tolerance -- you're guaranteed to
be able to lose one disk from the array and still have all your data.

   The data will be laid out in a different way on the disks, though.
In your case, with four unevenly-sized disks, you will get the best
usage out of the filesystem with RAID-1. With only 4 disks, RAID-10
will run out of space when the smallest disk is full. (So, in your
configuration, you'd still have only 2TB of space usable, rather
defeating the point of having the new disks in the first place.)

> 2.) Summing up the capacities reported by 'btrfs filesystem df' I only
>     get ~2.25 TiB for my filesystem. Is that a realistic net size for
>     3 TiB gross?

   You're not comparing the right numbers here. "btrfs fi show" shows
the raw available unallocated space that the filesystem has to play
with. "btrfs fi df" shows only what it's allocated so far, and how
much of the allocation it has used -- in this case, because you've
added new disks, there's quite a bit of free space unallocated still,
so the numbers below won't add up to anything like 3TB.

>   $ btrfs filesystem df /mnt/archive
>   Data, RAID1: total=2.10TB, used=1.68TB
>   Data: total=8.00MB, used=0.00
>   System, RAID1: total=40.00MB, used=324.00KB
>   System: total=4.00MB, used=0.00
>   Metadata, RAID1: total=112.50GB, used=3.21GB
>   Metadata: total=8.00MB, used=0.00

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- My doctor tells me that I have a malformed public-duty gland, ---
            and a natural deficiency in moral fibre.
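Hugo's capacity rule of thumb can be sanity-checked with a quick shell
calculation -- a rough sketch that assumes exactly two copies per chunk
and ignores what btrfs reserves for metadata and system chunks:

  # RAID-1 usable space over mixed disk sizes: min(total/2, total - largest)
  $ sizes="2048 2048 1024 1024"   # GiB per device, as in this thread
  $ total=0; max=0
  $ for s in $sizes; do total=$((total + s)); [ "$s" -gt "$max" ] && max=$s; done
  $ half=$((total / 2)); rest=$((total - max))
  $ if [ "$half" -lt "$rest" ]; then echo "$half GiB usable"; else echo "$rest GiB usable"; fi
  3072 GiB usable

That matches Hugo's ~3TB figure for RAID-1; swapping in the RAID-10
constraint (capacity limited by the smallest disk) gives the 2TB he
mentions.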
Thanks for clarifying things, Hugo :)

> It won't -- "btrfs fi df" reports what's been allocated out of the
> raw pool. To check that the disks have been added, you need "btrfs fi
> show" (no parameters).

Okay, that gives me

  Label: 'archive'  uuid: 3818eedb-5379-4c40-9d3d-bd91f60d9094
          Total devices 4 FS bytes used 1.68TB
          devid    4 size 931.51GB used 664.03GB path /dev/dm-10
          devid    3 size 931.51GB used 664.03GB path /dev/dm-9
          devid    2 size 1.82TB used 1.56TB path /dev/dm-8
          devid    1 size 1.82TB used 1.56TB path /dev/dm-7

so I conclude all disks have been successfully assigned to the raw pool
for my 'archive' volume.

> You're not comparing the right numbers here. "btrfs fi show" shows
> the raw available unallocated space that the filesystem has to play
> with. "btrfs fi df" shows only what it's allocated so far, and how
> much of the allocation it has used -- in this case, because you've
> added new disks, there's quite a bit of free space unallocated still,
> so the numbers below won't add up to anything like 3TB.

So how is the available space in the raw pool finally allocated to the
usable area? Must I manually enlarge the filesystem by issuing a
'btrfs fi resize max /mountpoint' (like assigning space of a VG to a
logical volume in LVM), or is the space allocated automatically as the
filesystem fills with data?

Regards,

Alex
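On the resize question: 'btrfs fi resize' covers a different case than
device addition -- it is for when a backing block device itself changes
size. A hedged sketch of when it would apply, where the VG/LV names are
made up and '1:' refers to devid 1 from the fi show output above:

  # After enlarging the LVM volume under one of the btrfs devices,
  # tell btrfs to grow that device to its new maximum size:
  $ lvextend -L +500G /dev/vg0/archive1        # hypothetical VG/LV names
  $ btrfs filesystem resize 1:max /mnt/archive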
On Sun, May 6, 2012 at 9:23 AM, Hugo Mills <hugo@carfax.org.uk> wrote:
> On Sun, May 06, 2012 at 04:48:48PM +0200, Alexander Koch wrote:
>> So I added the two new disks to my existing filesystem:
>>
>>   $ btrfs device add /dev/sde1 /dev/sdf1 /mnt/archive
>>
>> and as the capacity reported by 'btrfs filesystem df' did not increase,
>
> It won't -- "btrfs fi df" reports what's been allocated out of the
> raw pool. To check that the disks have been added, you need "btrfs fi
> show" (no parameters).

Worth pointing out that plain old /bin/df will report the added space
(I believe), but without taking into account the RAID level for space
that isn't yet allocated to a block group.

>> I have two questions now:
>>
>> 1.) Is there really no difference between btrfs-raid1 and btrfs-raid10
>>     in my case (2 x 2 TiB, 2 x 1 TiB disks)? Same degree of fault
>>     tolerance?
>
> There's the same degree of fault tolerance -- you're guaranteed to
> be able to lose one disk from the array and still have all your data.
>
> The data will be laid out in a different way on the disks, though.
> In your case, with four unevenly-sized disks, you will get the best
> usage out of the filesystem with RAID-1. With only 4 disks, RAID-10
> will run out of space when the smallest disk is full. (So, in your
> configuration, you'd still have only 2TB of space usable, rather
> defeating the point of having the new disks in the first place.)

Well, that's the same 2TB he had before adding the new disks, so yes,
still short of the 3TB of capacity he'd have with RAID-1.
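To illustrate the /bin/df behaviour (these numbers are invented to
roughly match the device sizes in this thread, not taken from the
poster's system): df sums the raw device sizes, so with RAID-1 the
'Avail' column shows about twice what can actually be stored:

  $ df -h /mnt/archive
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/dm-7       5.5T  3.4T  2.1T  62% /mnt/archive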
On Sun, May 06, 2012 at 09:49:36PM +0200, Alexander Koch wrote:
> Thanks for clarifying things, Hugo :)
>
> > It won't -- "btrfs fi df" reports what's been allocated out of the
> > raw pool. To check that the disks have been added, you need "btrfs fi
> > show" (no parameters).
>
> Okay, that gives me
>
>   Label: 'archive'  uuid: 3818eedb-5379-4c40-9d3d-bd91f60d9094
>           Total devices 4 FS bytes used 1.68TB
>           devid    4 size 931.51GB used 664.03GB path /dev/dm-10
>           devid    3 size 931.51GB used 664.03GB path /dev/dm-9
>           devid    2 size 1.82TB used 1.56TB path /dev/dm-8
>           devid    1 size 1.82TB used 1.56TB path /dev/dm-7
>
> so I conclude all disks have been successfully assigned to the raw pool
> for my 'archive' volume.

   Yes, that all looks good.

> > You're not comparing the right numbers here. "btrfs fi show" shows
> > the raw available unallocated space that the filesystem has to play
> > with. "btrfs fi df" shows only what it's allocated so far, and how
> > much of the allocation it has used -- in this case, because you've
> > added new disks, there's quite a bit of free space unallocated still,
> > so the numbers below won't add up to anything like 3TB.
>
> So how is the available space in the raw pool finally allocated to the
> usable area? Must I manually enlarge the filesystem by issuing a
> 'btrfs fi resize max /mountpoint' (like assigning space of a VG to a
> logical volume in LVM), or is the space allocated automatically as the
> filesystem fills with data?

   The space is automatically allocated as it's needed.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
    --- Hey, Virtual Memory! Now I can have a *really big* ramdisk! ---
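A simple way to watch that automatic allocation happen, assuming the
watch(1) utility is available: the 'total' figures in fi df grow as new
chunks are carved out of the raw pool shown by fi show.

  # Re-run both views every minute while data is being written:
  $ watch -n 60 'btrfs fi show; echo; btrfs fi df /mnt/archive'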