0bo0
2010-Jan-24 05:31 UTC
RAID-10 arrays built with btrfs & md report 2x difference in available size?
I created a btrfs RAID-10 array across 4 drives,

mkfs.btrfs -L TEST -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
btrfs-show
Label: TEST  uuid: 2ac85206-2d88-47d7-a1e7-a93d80b199f8
	Total devices 4 FS bytes used 28.00KB
	devid 1 size 931.51GB used 2.03GB path /dev/sda
	devid 2 size 931.51GB used 2.01GB path /dev/sdb
	devid 4 size 931.51GB used 2.01GB path /dev/sdd
	devid 3 size 931.51GB used 2.01GB path /dev/sdc

@ mount,

mount /dev/sda /mnt
df -H | grep /dev/sda
/dev/sda        4.1T   29k  4.1T   1% /mnt

for RAID-10 across 4 drives, shouldn't the reported/available size be
1/2 x 4TB ~ 2TB?

e.g., using mdadm to build a RAID-10 array across the same drives,

mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sd[abcd]1
pvcreate /dev/md0
pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/md0        lvm2 --   1.82T 1.82T

is the difference in available array space real, an artifact, or a
misunderstanding on my part?

thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
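[A back-of-the-envelope check of the two numbers in the question above; an illustration only, not btrfs's or LVM's actual accounting. btrfs-show reports 931.51GB per drive in binary (GiB) units, `df -H` prints decimal (TB) units, and `pvs` prints binary (TiB) units, which is part of the apparent discrepancy.]

```python
# btrfs's df total counts the raw bytes of all four drives; the
# mdadm/LVM figure is the usable half after RAID-10 mirroring.
GIB = 2**30

raw_bytes = 4 * 931.51 * GIB   # all four drives, as btrfs counts them
usable_bytes = raw_bytes / 2   # RAID-10 keeps two copies of every block

print(f"raw:    {raw_bytes / 1e12:.2f} TB")      # ~4.0 TB, close to df -H's 4.1T
print(f"usable: {usable_bytes / 2**40:.2f} TiB") # ~1.82 TiB, matching pvs
```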
RK
2010-Jan-24 12:01 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
.. I have the same puzzlement?

0bo0 wrote:
> I created a btrfs RAID-10 array across 4 drives,
> [...]
> for RAID-10 across 4 drives, shouldn't the reported/available size be
> 1/2 x 4TB ~ 2TB?
>
> e.g., using mdadm to build a RAID-10 array across the same drives,
>
> mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sd[abcd]1
> pvcreate /dev/md0
> pvs
>   PV         VG   Fmt  Attr PSize PFree
>   /dev/md0        lvm2 --   1.82T 1.82T
>
> is the difference in available array space real, an artifact, or a
> misunderstanding on my part?
0bo0
2010-Jan-24 17:18 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
noticing from above

>> ... size 931.51GB used 2.03GB ...

'used' more than the 'size'? more confused ...
Thomas Kupper
2010-Jan-29 21:57 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
> noticing from above
>
>> ... size 931.51GB used 2.03GB ...
>
> 'used' more than the 'size'?
>
> more confused ...

For me, it looks as if 2.03GB is way smaller than 931.51GB (2 << 931),
no? Everything seems to be fine here.

And regarding your original mail: it seems that df is still lying about
the size of the btrfs fs, check
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg00758.html
0bo0
2010-Jan-29 22:13 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
> For me, it looks as if 2.03GB is way smaller than 931.51GB (2 << 931),
> no? Everything seems to be fine here.

gagh! i "saw" TB, not GB. 8-/

> And regarding your original mail: it seems that df is still lying about
> the size of the btrfs fs, check
> http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg00758.html

it is, and reading -> "df is lying. The total bytes in the FS include
all 4 drives. I need to fix up the math for the total available space.",
it looks like it's under control.

thx!
RK
2010-Jan-29 22:38 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
> it is, and reading -> "df is lying. The total bytes in the FS include
> all 4 drives. I need to fix up the math for the total available
> space.", it looks like it's under control. thx!

I think so too -- I have six 1TB drives on RAID-10 btrfs and it shows
that I have 5.5TB free space .. how can that be?

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sde1              66G  3.8G   59G   7% /
/dev/sda              5.5T   28K  5.5T   1% /mnt/btrfs
jim owens
2010-Jan-29 23:46 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
RK wrote:
> I think so too -- I have six 1TB drives on RAID-10 btrfs and it shows
> that I have 5.5TB free space .. how can that be?
>
> # df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sde1              66G  3.8G   59G   7% /
> /dev/sda              5.5T   28K  5.5T   1% /mnt/btrfs

As has been discussed multiple times on the list, btrfs reports
RAW storage, so 6 x 1TB is 6 TB. And the use rate will be double
for each block written (i.e. 2 blocks used) for raid10 (or raid1).

And yes, it is "not what you expect", but it is the only method
that can remain accurate under the mixed raid modes possible
on a per-file basis in btrfs.

jim
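[A sketch of the raw accounting jim describes; illustrative only, not btrfs's real bookkeeping. df reports the raw bytes of all devices, and each block written under raid1/raid10 consumes two raw blocks, so the raw free number drains twice as fast as the data grows.]

```python
TB = 10**12

def raw_free_after_write(raw_free, data_written, copies=2):
    """Raw free bytes remaining after writing data kept in `copies` replicas."""
    return raw_free - data_written * copies

# Six 1 TB drives: df shows roughly 6 TB raw free. Writing 1 TB of file
# data under raid10 consumes 2 TB of that raw space.
remaining = raw_free_after_write(6 * TB, 1 * TB)
print(remaining // TB)  # 4 raw TB left, i.e. room for ~2 TB more data
```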
0bo0
2010-Jan-29 23:53 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
On Fri, Jan 29, 2010 at 3:46 PM, jim owens <jowens@hp.com> wrote:
> but it is the only method
> that can remain accurate under the mixed raid modes possible
> on a per-file basis in btrfs.

can you clarify, then, the intention/goal behind cmason's

"df is lying. The total bytes in the FS include all 4 drives. I need to
fix up the math for the total available space."

Is the goal NOT to accurately represent the actual available space?
Seems rather odd that users are simply to know/accept that "available
space" in btrfs RAID-10 != "available space" in md RAID-10 ...
Goffredo Baroncelli
2010-Jan-30 13:24 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
On Saturday 30 January 2010, 0bo0 wrote:
> Is the goal NOT to accurately represent the actual available space?
> Seems rather odd that users are simply to know/accept that "available
> space" in btrfs RAID-10 != "available space" in md RAID-10 ...

As reported more than once on this ML, btrfs is able to store the data
in striping/raid1 mode on a per-file basis.

The space on the disk is grouped in chunks. The raid mode is set on a
per-chunk basis [1]. So a file stored in one chunk may be written two
times (on one or two different disks), and another file stored in
another chunk may be written with a different policy.

In fact btrfs stores the data in "raid0" mode and the metadata in raid1
mode, even with only one disk. Even though the words "raid1/0" are
incorrect with only one disk.

So the key points are:
- it is incorrect to say that the btrfs filesystem is configured in
  raidX mode
- it is correct to say that the file xyz is stored in raidX mode
- it is quite simple to evaluate the space available. It is more complex
  to evaluate, before the file is created, how much of the available
  space a file of a certain size will consume.
- unfortunately, today there are no tools that permit managing the raid
  mode of a file

BR
G.Baroncelli

-- 
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) <kreijack inwind it>
Key fingerprint = 4769 7E51 5293 D36C 814E C054 BF04 F161 3DC5 0512
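[A toy model of the per-chunk raid modes described above; the profile-to-copies table is a simplification for illustration, not btrfs internals. Because each chunk carries its own profile, the raw bytes a file costs depend on which chunks hold it, which is why a single "available" number cannot be exact in advance.]

```python
# Raw copies kept per data profile (simplified: raid10 stripes AND
# mirrors, but the mirroring alone determines the raw byte cost here).
COPIES = {"single": 1, "raid0": 1, "raid1": 2, "raid10": 2}

def raw_cost(file_bytes, profile):
    """Raw device bytes consumed by a file stored under one raid profile."""
    return file_bytes * COPIES[profile]

one_gib = 2**30
print(raw_cost(one_gib, "raid0"))   # 1 GiB of raw space
print(raw_cost(one_gib, "raid10"))  # 2 GiB of raw space for the same file
```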
Goffredo Baroncelli
2010-Jan-30 13:29 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
On Saturday 30 January 2010, Goffredo Baroncelli wrote:
> On Saturday 30 January 2010, 0bo0 wrote:
>
>> Is the goal NOT to accurately represent the actual available space?
>> Seems rather odd that users are simply to know/accept that "available
>> space" in btrfs RAID-10 != "available space" in md RAID-10 ...
>
> As reported more than once on this ML, btrfs is able to store the data
> in striping/raid1 mode on a per-file basis.
>
> The space on the disk is grouped in chunks. The raid mode is set on a
> per-chunk basis [1]. So a file stored in one chunk may be written two
> times (on one or two different disks), and another file stored in
> another chunk may be written with a different policy.

Sorry, I forgot the reference:

[1] http://btrfs.wiki.kernel.org/index.php/Multiple_Device_Support

-- 
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) <kreijack@inwind.it>
Key fingerprint = 4769 7E51 5293 D36C 814E C054 BF04 F161 3DC5 0512
jim owens
2010-Jan-30 15:36 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
0bo0 wrote:
> On Fri, Jan 29, 2010 at 3:46 PM, jim owens <jowens@hp.com> wrote:
>> but it is the only method
>> that can remain accurate under the mixed raid modes possible
>> on a per-file basis in btrfs.
>
> can you clarify, then, the intention/goal behind cmason's
>
> "df is lying. The total bytes in the FS include all 4 drives. I need to
> fix up the math for the total available space."

Well, I don't have the message where Chris said that, but I know he
did not mean that "df" will be changed to report like an md raid.

> Is the goal NOT to accurately represent the actual available space?

Yes, but in btrfs "accurate" is the RAW byte count, however...

> Seems rather odd that users are simply to know/accept that "available
> space" in btrfs RAID-10 != "available space" in md RAID-10 ...

Developers are aware that users want a method to get space values
that reflect the raid state(s) of their filesystem.

So Josef Bacik has sent patches to btrfs and btrfs-progs that
allow you to see raid-mode data and metadata adjusted values
with btrfs-ctrl -i instead of using "df".

These patches have not been merged yet so you will have to pull
them and apply yourself.

But there remains the fact that the command "df" is not accurate
and will never be accurate for many other filesystems. It is just
that the user perception of error is much larger with some btrfs
raid modes.

And at the end of the day, you can not say md value == fs value
is a requirement.

jim
0bo0
2010-Feb-08 03:52 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
On Sat, Jan 30, 2010 at 7:36 AM, jim owens <jowens@hp.com> wrote:
> So Josef Bacik has sent patches to btrfs and btrfs-progs that
> allow you to see raid-mode data and metadata adjusted values
> with btrfs-ctrl -i instead of using "df".
>
> These patches have not been merged yet so you will have to pull
> them and apply yourself.
0bo0
2010-Feb-08 03:54 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
On Sat, Jan 30, 2010 at 7:36 AM, jim owens <jowens@hp.com> wrote:
> So Josef Bacik has sent patches to btrfs and btrfs-progs that
> allow you to see raid-mode data and metadata adjusted values
> with btrfs-ctrl -i instead of using "df".
>
> These patches have not been merged yet so you will have to pull
> them and apply yourself.

Where exactly can these be pulled from? Is there a separate git tree?
I just built from the btrfs & btrfs-progs heads, and still do not see
these add'l features.

Thanks.
jim owens
2010-Feb-08 14:33 UTC
Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
0bo0 wrote:
> On Sat, Jan 30, 2010 at 7:36 AM, jim owens <jowens@hp.com> wrote:
>> So Josef Bacik has sent patches to btrfs and btrfs-progs that
>> allow you to see raid-mode data and metadata adjusted values
>> with btrfs-ctrl -i instead of using "df".
>>
>> These patches have not been merged yet so you will have to pull
>> them and apply yourself.
>
> Where exactly can these be pulled from? Is there a separate git tree?
> I just built from the btrfs & btrfs-progs heads, and still do not see
> these add'l features.

Chris does not merge patches into the tree until they are pushed
to Linus. Sometimes he creates "experimental" branches with code
for testing, but I don't think he has done that recently.

You can find proposed unmerged patches at:

http://patchwork.kernel.org/project/linux-btrfs/list/

jim