Sandon Van Ness
2010-May-30 21:37 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
I just wanted to make sure this is normal and expected. I fully expected that as the file-system filled up I would see more disk space being used than with other file-systems, due to its features, but what I didn't expect was for ~500-600 GB to be missing from the total volume size right at file-system creation.

Comparing two systems, one JFS and one ZFS, one raidz2 and one raid6, here are the differences I see:

ZFS:
root at opensolaris: 11:22 AM :/data# df -k /data
Filesystem            kbytes        used        avail        capacity  Mounted on
data                  17024716800   258872352   16765843815      2%    /data

JFS:
root at sabayonx86-64: 11:22 AM :~# df -k /data2
Filesystem            1K-blocks     Used        Available    Use%  Mounted on
/dev/sdd1             17577451416   2147912     17575303504    1%  /data2

zpool list shows the raw capacity, right?

root at opensolaris: 11:25 AM :/data# zpool list data
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data   18.1T   278G  17.9T     1%  1.00x  ONLINE  -

OK, I would expect it to be rounded to 18.2, but that seems about right for 20 trillion bytes (which is what 20 x 1 TB is):

root at sabayonx86-64: 11:23 AM :~# echo | awk '{print 20000000000000/1024/1024/1024/1024}'
18.1899

Now minus two drives for parity:

root at sabayonx86-64: 11:23 AM :~# echo | awk '{print 18000000000000/1024/1024/1024/1024}'
16.3709

Yet zfs list also reports the amount of storage as significantly smaller:

root at opensolaris: 11:23 AM :~# zfs list data
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   164K  15.9T  56.0K  /data

I would expect this to be 16.4T.

Taking the df -k values, JFS gives me a total volume size of:

root at sabayonx86-64: 11:31 AM :~# echo | awk '{print 17577451416/1024/1024/1024}'
16.3703

and ZFS gives:

root at sabayonx86-64: 11:31 AM :~# echo | awk '{print 17024716800/1024/1024/1024}'
15.8555

So basically with JFS I see no decrease in total volume size, but a huge difference on ZFS. Is this normal/expected? Can anything be disabled so I don't lose 500-600 GB of space?
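(For a rough sanity check of those numbers, here is a sketch of the same arithmetic, using the exact drive size of 1,000,204,886,016 bytes that iostat reports further down the thread and assuming 18 data drives after raidz2 parity:)

# Ideal post-parity capacity of 18 x 1,000,204,886,016-byte drives, in TiB:
echo | awk '{print 18 * 1000204886016 / 1024^4}'
# 16.3743

# Total volume size df -k reports for the ZFS pool (17024716800 KB), in TiB:
echo | awk '{print 17024716800 * 1024 / 1024^4}'
# 15.8555

# The gap, in GiB and as a percentage of the ideal capacity:
echo | awk '{d = 18 * 1000204886016 - 17024716800 * 1024; print d / 1024^3, 100 * d / (18 * 1000204886016)}'
# roughly 531 GiB, or about 3.2%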
Brandon High
2010-May-30 21:51 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
On Sun, May 30, 2010 at 2:37 PM, Sandon Van Ness <sandon at van-ness.com> wrote:
> ZFS:
> root at opensolaris: 11:22 AM :/data# df -k /data

'zfs list' is more accurate than df, since it will also show space used by snapshots, e.g.:

bhigh at basestar:~$ df -h /export/home/bhigh
Filesystem              size   used  avail capacity  Mounted on
tank/export/home/bhigh  5.3T   8.2G   2.8T     1%    /export/home/bhigh
bhigh at basestar:~$ zfs list tank/export/home/bhigh
NAME                     USED  AVAIL  REFER  MOUNTPOINT
tank/export/home/bhigh  51.0G  2.85T  8.16G  /export/home/bhigh

> zpool list shows the raw capacity right?

Yes. It shows the raw capacity, including space that will be used for parity. Its USED column includes space used by all active datasets and snapshots.

> So basically with JFS I see no decrease in total volume size but a huge
> difference on ZFS. Is this normal/expected? Can anything be disabled to
> not lose 500-600 GB of space?

Are you using any snapshots? They'll consume space.

What is the recordsize, and what kind of data are you storing? Small blocks or lots of small files (< 128k) will have more overhead for metadata.

-B

--
Brandon High : bhigh at freaks.com
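(To check both of those points on the pool in question, something along these lines should do; a sketch using the pool name 'data' from the original post:)

# Any snapshots holding space? This should come back empty on a brand-new pool.
zfs list -t snapshot -r data

# The dataset's recordsize (128K is the default).
zfs get recordsize data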
Sandon Van Ness
2010-May-30 21:54 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
On 05/30/2010 02:51 PM, Brandon High wrote:
> Are you using any snapshots? They'll consume space.
>
> What is the recordsize, and what kind of data are you storing? Small
> blocks or lots of small files (< 128k) will have more overhead for
> metadata.

Yeah, I know all about the issues with snapshots and the like, but this is a totally new/empty file-system. It's basically over 500 gigabytes smaller right from the get-go, before any data has ever been written to it. I would totally expect some numbers to be off on a used file-system, but not so much on a completely brand-new one.
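(For completeness, the following should confirm that nothing on the new pool is actually holding that space; a sketch, again using the pool name 'data':)

# List every dataset and snapshot in the pool; a fresh pool should show only 'data' itself.
zfs list -r -t all data

# Break down where any used space is going.
zfs get -r used,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation data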
Mattias Pantzare
2010-May-30 22:10 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
On Sun, May 30, 2010 at 23:37, Sandon Van Ness <sandon at van-ness.com> wrote:
> So basically with JFS I see no decrease in total volume size but a huge
> difference on ZFS. Is this normal/expected? Can anything be disabled to
> not lose 500-600 GB of space?

This may be the answer:
http://www.cuddletech.com/blog/pivot/entry.php?id=1013
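(Assuming that post is describing the roughly 1/64, i.e. about 1.6%, of capacity that ZFS holds back from the space it reports as available, the expected hold-back on this pool works out to roughly:)

# 1/64 of the ideal 18-drive post-parity capacity, in GiB:
echo | awk '{print 18 * 1000204886016 / 64 / 1024^3}'
# about 262 GiB, only around half of the ~531 GiB gap computed earlier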
Sandon Van Ness
2010-May-30 22:18 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
On 05/30/2010 03:10 PM, Mattias Pantzare wrote:
> This may be the answer:
> http://www.cuddletech.com/blog/pivot/entry.php?id=1013

That is definitely interesting; however, I am seeing more than a 1.6% discrepancy.

With a newer df (from GNU coreutils) I can use -B to specify a unit of 1 billion bytes, which is 1 GB on the hard-drive companies' scale.

On the raid/JFS:

root at sabayonx86-64: 03:14 PM :~# df -B 1000000000 /data2
Filesystem           1GB-blocks      Used Available Use% Mounted on
/dev/sdd1                 18000         3     17998   1% /data2

On the ZFS:

root at opensolaris: 03:16 PM :/data# df -B 1000000000 /data
Filesystem           1GB-blocks      Used Available Use% Mounted on
data                      17434         1     17434   1% /data

Interestingly enough, I am seeing almost exactly double that: 3.14% by my calculations. Maybe this was changed in newer versions to have more of a reserve? I am running b134.
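(The 3.14% figure follows directly from the two df -B totals above:)

echo | awk '{print 100 * (18000 - 17434) / 18000}'
# 3.14444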
Roch Bourbonnais
2010-May-31 16:03 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
Can you post zpool status?

Are your drives all the same size?

-r

On 30 May 2010, at 23:37, Sandon Van Ness wrote:
> So basically with JFS I see no decrease in total volume size but a huge
> difference on ZFS. Is this normal/expected? Can anything be disabled to
> not lose 500-600 GB of space?
Sandon Van Ness
2010-May-31 19:16 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
On 05/31/2010 09:03 AM, Roch Bourbonnais wrote:
> Can you post zpool status ?
>
> Are your drives all the same size ?

Here is zpool status for my 'data' pool:

root at opensolaris: 11:43 AM :~# zpool status data
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME                         STATE     READ WRITE CKSUM
        data                         ONLINE       0     0     0
          raidz2-0                   ONLINE       0     0     0
            c4t5000C500028BD5FCd0p0  ONLINE       0     0     0
            c4t5000C50009A4D727d0p0  ONLINE       0     0     0
            c4t5000C50009A46AF5d0p0  ONLINE       0     0     0
            c4t5000C50009A515B0d0p0  ONLINE       0     0     0
            c4t5000C500028A81BEd0p0  ONLINE       0     0     0
            c4t5000C500028B44A1d0p0  ONLINE       0     0     0
            c4t5000C500028B415Bd0p0  ONLINE       0     0     0
            c4t5000C500028B23D2d0p0  ONLINE       0     0     0
            c4t5000C5000CC3338Dd0p0  ONLINE       0     0     0
            c4t5000C500027F59C8d0p0  ONLINE       0     0     0
            c4t5000C50009DBF8D4d0p0  ONLINE       0     0     0
            c4t5000C500027F3C1Fd0p0  ONLINE       0     0     0
            c4t5000C5000DAF02F3d0p0  ONLINE       0     0     0
            c4t5000C5000DA7ED4Ed0p0  ONLINE       0     0     0
            c4t5000C5000DAEF990d0p0  ONLINE       0     0     0
            c4t5000C5000DAEEF8Ed0p0  ONLINE       0     0     0
            c4t5000C5000DAEB881d0p0  ONLINE       0     0     0
            c4t5000C5000A121581d0p0  ONLINE       0     0     0
            c4t5000C5000DAC848Fd0p0  ONLINE       0     0     0
            c4t5000C50002770EE6d0p0  ONLINE       0     0     0

Yes, all the disks are the same size (1000.2 GB):

root at opensolaris: 12:00 PM :~/parted-2.2# iostat -E
DEVICE  VENDOR/PRODUCT     REVISION  SIZE                             ERRORS (soft/hard/transport/media/not-ready/no-device/recoverable/illegal-req/PFA)
cmdk0   ST3400632A (4NF)   -         400.09GB  <400085876736 bytes>   0/0/0/0/0/0/0/0/-
sd1     ATA ST31000340AS   SD15      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd2     ATA ST31000340AS   SD14      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd3     ATA ST31000340AS   SD15      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd4     ATA ST31000340AS   SD1A      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd5     ATA ST31000340AS   SD04      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd6     ATA ST31000340AS   SD04      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd7     ATA ST31000340AS   SD15      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd8     ATA ST31000340AS   SD1A      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd9     ATA ST31000340AS   SD14      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd10    ATA ST31000340AS   SD04      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd11    ATA ST31000340AS   SD14      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd12    ATA ST31000340AS   SD14      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd13    ATA ST31000340AS   SD04      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd14    ATA ST31000340AS   SD04      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd15    ATA ST31000340AS   SD15      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd16    ATA ST31000340AS   SD15      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd17    ATA ST31000340AS   SD15      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd18    ATA ST31000340AS   SD04      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd19    ATA ST31000340AS   SD15      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
sd20    ATA ST31000340AS   SD14      1000.20GB <1000204886016 bytes>  0/0/0/0/0/0/0/0/0
Ian Collins
2010-Jun-01 05:15 UTC
[zfs-discuss] Disk space overhead (total volume size) by ZFS
On 06/ 1/10 07:16 AM, Sandon Van Ness wrote:
> Here is zpool status for my 'data' pool:
>
>         NAME                         STATE     READ WRITE CKSUM
>         data                         ONLINE       0     0     0
>           raidz2-0                   ONLINE       0     0     0
>             c4t5000C500028BD5FCd0p0  ONLINE       0     0     0
>             [... 19 more disks, all ONLINE ...]
>
> Yes, all the disks are the same size (1000.2 GB)

Is that the pool you are writing to in your other thread? If so, it's no surprise you are getting poor write performance with a raidz2 that wide!

--
Ian.