Holger Isenberg
2010-Mar-04 13:52 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What besides zfs send/receive can be done to free the fragmented space?

One ZFS filesystem was used for some months to store large disk images (each about 50 GByte), which are copied there with rsync. This filesystem reports 6.39 TByte used with zfs list but only 2 TByte with du.

The other ZFS filesystem was used for similarly sized disk images, this time copied via NFS as whole files. On this filesystem du and zfs list report exactly the same usage of 3.7 TByte.

bash-3.00# zfs list -r zpool1/vmwarersync
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zpool1/vmwarersync  6.39T   985G  6.39T  /export/archiv/VMs/rsync

bash-3.00# du -hs /export/archiv/VMs/rsync
 2.0T   /export/archiv/VMs/rsync

bash-3.00# zfs list -r zpool1/vmwarevcb
NAME               USED  AVAIL  REFER  MOUNTPOINT
zpool1/vmwarevcb  3.75T   985G  3.75T  /export/archiv/VMs/vcb

bash-3.00# du -hs /export/archiv/VMs/vcb
 3.7T   /export/archiv/VMs/vcb

bash-3.00# zpool upgrade
This system is currently running ZFS pool version 10.

bash-3.00# zpool status zpool1
  pool: zpool1
 state: ONLINE
 scrub: scrub completed after 14h2m with 0 errors on Thu Mar  4 10:22:47 2010
config:

bash-3.00# zpool list zpool1
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zpool1  20.8T  19.3T  1.53T  92%  ONLINE  -
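Before suspecting fragmentation, the usual first step is to rule out snapshots, clones and reservations, since those count toward a dataset's USED but are invisible to du. A minimal check along those lines (not part of the original post; dataset names taken from the listing above, and note that "-t all" may not be accepted by older zfs versions):

  # list the dataset together with any snapshots hanging off it
  zfs list -r -t all zpool1/vmwarersync

  # properties that can make USED exceed what du reports
  zfs get reservation,refreservation,copies,quota zpool1/vmwarersync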
Giovanni Tirloni
2010-Mar-04 14:06 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs list. Fragmentation?
On Thu, Mar 4, 2010 at 10:52 AM, Holger Isenberg <isenberg at e-spirit.com> wrote:

> Do we have enormous fragmentation here on our X4500 with Solaris 10,
> ZFS version 10? What besides zfs send/receive can be done to free the
> fragmented space?
>
> One ZFS filesystem was used for some months to store large disk images
> (each about 50 GByte), which are copied there with rsync. This filesystem
> reports 6.39 TByte used with zfs list but only 2 TByte with du.
>
> The other ZFS filesystem was used for similarly sized disk images, this
> time copied via NFS as whole files. On this filesystem du and zfs list
> report exactly the same usage of 3.7 TByte.

Please check the ZFS FAQ:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq

There is a question regarding the difference between du, df and zfs list.

--
Giovanni Tirloni
sysdroid.com
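The short version of that FAQ entry, as I understand it: du sums the blocks allocated to the files it can see, while the USED column of zfs list also covers snapshots, reservations and dataset metadata, so the tools answer slightly different questions. A quick side-by-side comparison (illustrative only, using the mountpoint from the first post):

  du -sh /export/archiv/VMs/rsync    # blocks allocated to visible files
  df -h  /export/archiv/VMs/rsync    # usage as the mounted filesystem reports it
  zfs list zpool1/vmwarersync        # USED includes snapshots and reservations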
Holger Isenberg
2010-Mar-04 14:10 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs l
I have already looked into that, but there are no snapshots or small files on that filesystem. It is used only as a target for rsync to store a few very large files which are written or updated once a week.

Also note the huge difference between the filesystem written to by cp over NFS and the one written with rsync.
Erik Trimble
2010-Mar-04 14:14 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs list. Fragmentation?
I'm betting you have snapshots of the "fragmented" filesystem you don't know about. Fragmentation won't reduce the amount of usable space in the pool.

Also, unless you used the '--inplace' option for rsync, rsync won't cause much fragmentation, as it copies the entire file during the rsync.

Do this:

  zfs list -r -t all zpool1/vmwarersync

and see what output you get.

-Erik

Holger Isenberg wrote:
> Do we have enormous fragmentation here on our X4500 with Solaris 10,
> ZFS version 10? What besides zfs send/receive can be done to free the
> fragmented space?
>
> One ZFS filesystem was used for some months to store large disk images
> (each about 50 GByte), which are copied there with rsync. This filesystem
> reports 6.39 TByte used with zfs list but only 2 TByte with du.
>
> The other ZFS filesystem was used for similarly sized disk images, this
> time copied via NFS as whole files. On this filesystem du and zfs list
> report exactly the same usage of 3.7 TByte.
[...]

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
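For context on the rsync point (an illustrative sketch, not from the thread, and the source path is made up): in its default mode rsync builds a complete temporary copy next to the destination file and renames it into place, so ZFS writes the new file in one pass; with --inplace rsync overwrites only the changed regions of the existing file, and under copy-on-write each rewritten record is allocated somewhere new, which is what tends to scatter blocks:

  # default: whole-file copy into a temp file, then an atomic rename
  rsync -av esxhost:/vmfs/images/ /export/archiv/VMs/rsync/

  # --inplace: changed regions are rewritten inside the existing file
  rsync -av --inplace esxhost:/vmfs/images/ /export/archiv/VMs/rsync/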
Erik Trimble
2010-Mar-04 14:20 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs l
Holger Isenberg wrote:
> I have already looked into that, but there are no snapshots or small
> files on that filesystem. It is used only as a target for rsync to store
> a few very large files which are written or updated once a week.
>
> Also note the huge difference between the filesystem written to by cp
> over NFS and the one written with rsync.

Can you do this (and post the output):

  zfs list -r -t all zpool1

and

  zfs get all zpool1/vmwarersync
  zfs get all zpool1/vmwarevcb

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Holger Isenberg
2010-Mar-04 14:35 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs l
There are no snapshots on those filesystems; that's what I'm wondering about. I'm using snapshots on another Solaris system on different hardware not connected to this one, and the 3 snapshots on this system are only rarely created and not within the two huge filesystems mentioned above.

And sparse files wouldn't claim usage space shown by "zfs list", would they?

All output as requested. As "all" wasn't accepted by this version of the zfs tool, I used each value separately. Some customer and product names are replaced with "...".

bash-3.00# zfs list -r -t filesystem zpool1
NAME                         USED  AVAIL  REFER  MOUNTPOINT
zpool1                      15.9T   985G  29.8K  /zpool1
zpool1/archiv                769G   985G   769G  /export/archiv
zpool1/cds                  74.5G   985G  74.5G  /export/cds
zpool1/docs                  419M   985G   419M  /export/docs
zpool1/fi...                 133G   985G  30.2G  /export/home/fs4test/fi...
zpool1/fs4export            5.35G   985G  5.35G  none
zpool1/hdfs                  359G   141G   359G  /export/hdfs
zpool1/home                  690G   985G   572G  /export/home
zpool1/home/fs41hd1          118G   985G  64.4G  /export/home/fs41hd/sp.../fi...
zpool1/linuxbackup_electra  18.0G  82.0G  18.0G  /export/linuxbackup/electra
zpool1/macbackup             843G   985G   843G  /export/macbackup
zpool1/n...                 2.13T   985G   489G  /export/home/n...
zpool1/n.../archiv           521G   985G   521G  /export/home/n.../archiv
zpool1/n.../backup           586G   985G   586G  /export/home/n.../backup
zpool1/pan_fsexport          278G   422G   278G  /export/pan_fsexport
zpool1/tmp                  39.9G   985G  39.9G  /export/tmp
zpool1/vmarchiv              510G   985G   510G  /export/vmarchiv
zpool1/vmwarersync          6.39T   985G  6.39T  /export/archiv/VMs/rsync
zpool1/vmwarevcb            3.75T   985G  3.75T  /export/archiv/VMs/vcb

bash-3.00# zfs list -r -t snapshot zpool1
NAME                        USED  AVAIL  REFER  MOUNTPOINT
zpool1/fi...@jmeter         103G      -   133G  -
zpool1/home/fs41hd1@start  53.4G      -  86.0G  -
zpool1/n...@backup          585G      -   586G  -
zpool1/n...@backup42       45.8M      -   489G  -

bash-3.00# zfs list -r -t volume zpool1
NAME          USED  AVAIL  REFER  MOUNTPOINT
zpool1/swap  7.05G  1010G  7.05G  -

bash-3.00# zfs get all zpool1/vmwarersync
NAME                PROPERTY         VALUE                     SOURCE
zpool1/vmwarersync  type             filesystem                -
zpool1/vmwarersync  creation         Thu Feb  5 10:51 2009     -
zpool1/vmwarersync  used             6.39T                     -
zpool1/vmwarersync  available        985G                      -
zpool1/vmwarersync  referenced       6.39T                     -
zpool1/vmwarersync  compressratio    1.00x                     -
zpool1/vmwarersync  mounted          yes                       -
zpool1/vmwarersync  quota            none                      default
zpool1/vmwarersync  reservation      none                      default
zpool1/vmwarersync  recordsize       128K                      default
zpool1/vmwarersync  mountpoint       /export/archiv/VMs/rsync  local
zpool1/vmwarersync  sharenfs         root=esx?                 local
zpool1/vmwarersync  checksum         on                        default
zpool1/vmwarersync  compression      off                       default
zpool1/vmwarersync  atime            on                        default
zpool1/vmwarersync  devices          on                        default
zpool1/vmwarersync  exec             on                        default
zpool1/vmwarersync  setuid           on                        default
zpool1/vmwarersync  readonly         off                       default
zpool1/vmwarersync  zoned            off                       default
zpool1/vmwarersync  snapdir          hidden                    default
zpool1/vmwarersync  aclmode          groupmask                 default
zpool1/vmwarersync  aclinherit       restricted                default
zpool1/vmwarersync  canmount         on                        default
zpool1/vmwarersync  shareiscsi       off                       default
zpool1/vmwarersync  xattr            on                        default
zpool1/vmwarersync  copies           1                         default
zpool1/vmwarersync  version          1                         -
zpool1/vmwarersync  utf8only         off                       -
zpool1/vmwarersync  normalization    none                      -
zpool1/vmwarersync  casesensitivity  sensitive                 -
zpool1/vmwarersync  vscan            off                       default
zpool1/vmwarersync  nbmand           off                       default
zpool1/vmwarersync  sharesmb         off                       default
zpool1/vmwarersync  refquota         none                      default
zpool1/vmwarersync  refreservation   none                      default

bash-3.00# zfs get all zpool1/vmwarevcb
NAME              PROPERTY         VALUE                   SOURCE
zpool1/vmwarevcb  type             filesystem              -
zpool1/vmwarevcb  creation         Tue Feb  3 11:10 2009   -
zpool1/vmwarevcb  used             3.75T                   -
zpool1/vmwarevcb  available        985G                    -
zpool1/vmwarevcb  referenced       3.75T                   -
zpool1/vmwarevcb  compressratio    1.00x                   -
zpool1/vmwarevcb  mounted          yes                     -
zpool1/vmwarevcb  quota            none                    default
zpool1/vmwarevcb  reservation      none                    default
zpool1/vmwarevcb  recordsize       128K                    default
zpool1/vmwarevcb  mountpoint       /export/archiv/VMs/vcb  local
zpool1/vmwarevcb  sharenfs         root=esx?               local
zpool1/vmwarevcb  checksum         on                      default
zpool1/vmwarevcb  compression      off                     default
zpool1/vmwarevcb  atime            on                      default
zpool1/vmwarevcb  devices          on                      default
zpool1/vmwarevcb  exec             on                      default
zpool1/vmwarevcb  setuid           on                      default
zpool1/vmwarevcb  readonly         off                     default
zpool1/vmwarevcb  zoned            off                     default
zpool1/vmwarevcb  snapdir          hidden                  default
zpool1/vmwarevcb  aclmode          groupmask               default
zpool1/vmwarevcb  aclinherit       restricted              default
zpool1/vmwarevcb  canmount         on                      default
zpool1/vmwarevcb  shareiscsi       off                     default
zpool1/vmwarevcb  xattr            on                      default
zpool1/vmwarevcb  copies           1                       default
zpool1/vmwarevcb  version          1                       -
zpool1/vmwarevcb  utf8only         off                     -
zpool1/vmwarevcb  normalization    none                    -
zpool1/vmwarevcb  casesensitivity  sensitive               -
zpool1/vmwarevcb  vscan            off                     default
zpool1/vmwarevcb  nbmand           off                     default
zpool1/vmwarevcb  sharesmb         off                     default
zpool1/vmwarevcb  refquota         none                    default
zpool1/vmwarevcb  refreservation   none                    default
Erik Trimble
2010-Mar-04 14:56 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs l
That was very comprehensive, Holger. Thanks.

Unfortunately, I don't see anything that would explain the discrepancy.

When you do the rsync to this machine, are you simply rsync'ing a fresh image file (that is, creating a new file that doesn't exist, not updating an existing image)?

-Erik

Holger Isenberg wrote:
> There are no snapshots on those filesystems; that's what I'm wondering
> about. I'm using snapshots on another Solaris system on different
> hardware not connected to this one, and the 3 snapshots on this system
> are only rarely created and not within the two huge filesystems
> mentioned above.
>
> And sparse files wouldn't claim usage space shown by "zfs list", would
> they?
>
> All output as requested. As "all" wasn't accepted by this version of the
> zfs tool, I used each value separately. Some customer and product names
> are replaced with "...".
[...]

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Holger Isenberg
2010-Mar-04 15:12 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs l
Thanks for the fast response! Rsync is used to update modified old files, and some of the large files are not modified at all. Completely new files are only created every few weeks.

One example of a typical leaf directory:

bash-3.00# ls -gh /export/archiv/VMs/rsync/esx2/486e1c33-e7780ff3-fea1-00e08146792c/HD-DB2
total 148448514
-rw-r--r--   1 archiv        14G Feb 27 13:53 HD-DB2-000001-delta.vmdk
-rw-r--r--   1 archiv        247 Feb 20 14:44 HD-DB2-000001.vmdk
-rw-r--r--   1 archiv        30G Jan 29  2009 HD-DB2-flat.vmdk
-rw-r--r--   1 archiv       8.5K Feb 27 13:53 HD-DB2.nvram
-rw-r--r--   1 archiv        19K Jan 29  2009 HD-DB2-Snapshot14.vmsn
-rw-r--r--   1 archiv        399 Jan 29  2009 HD-DB2.vmdk
-rw-r--r--   1 archiv        895 Nov 28 20:10 HD-DB2.vmsd
-rwxr-xr-x   1 archiv       2.2K Feb 20 14:42 HD-DB2.vmx
-rw-r--r--   1 archiv        261 Nov 30 12:41 HD-DB2.vmxf
-rw-r--r--   1 archiv        50G Feb 27 13:53 +Iw-HD-DB2-flat.vmdk
-rw-r--r--   1 archiv        404 Feb 20 14:44 +Iw-HD-DB2.vmdk
-rw-r--r--   1 archiv        31K Feb 20 13:41 vmware-0.log
-rw-r--r--   1 archiv        31K Feb 13 14:06 vmware-1.log
-rw-r--r--   1 archiv        31K Feb  6 13:06 vmware-2.log
-rw-r--r--   1 archiv        31K Jan 30 13:42 vmware-3.log
-rw-r--r--   1 archiv        31K Jan 23 13:06 vmware-4.log
-rw-r--r--   1 archiv        31K Oct 17 16:17 vmware-56.log
-rw-r--r--   1 archiv        31K Oct 24 13:45 vmware-57.log
-rw-r--r--   1 archiv        32K Nov  5 19:13 vmware-58.log
-rw-r--r--   1 archiv       137K Nov  9 17:34 vmware-59.log
-rw-r--r--   1 archiv        31K Jan 16 14:04 vmware-5.log
-rw-r--r--   1 archiv        31K Nov 14 12:56 vmware-60.log
-rw-r--r--   1 archiv       164K Dec  5 13:13 vmware-61.log
-rw-r--r--   1 archiv        31K Feb 27 13:53 vmware.log

Total summary of files and directories on the two filesystems:

bash-3.00# find /export/archiv/VMs/rsync | wc -l
    1796
bash-3.00# find /export/archiv/VMs/rsync -type d | wc -l
     102
bash-3.00# find /export/archiv/VMs/vcb | wc -l
    3058
bash-3.00# find /export/archiv/VMs/vcb -type d | wc -l
      88
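One quick way to see whether sparse (holey) files are in play (an illustrative check, not part of the original exchange) is to compare a file's length with the blocks actually allocated to it; a sparse image shows far fewer allocated blocks than its size would suggest:

  # -s prefixes each entry with its allocated size in blocks
  ls -ls /export/archiv/VMs/rsync/esx2/486e1c33-e7780ff3-fea1-00e08146792c/HD-DB2/HD-DB2-flat.vmdk

  # du on a single file reports allocated space, not apparent length
  du -h /export/archiv/VMs/rsync/esx2/486e1c33-e7780ff3-fea1-00e08146792c/HD-DB2/HD-DB2-flat.vmdk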
Isenberg, Holger
2011-Feb-22 11:34 UTC
[zfs-discuss] Huge difference in reporting disk usage via du and zfs l
Oracle support solved the issue for me. It's the following bug, fixed by upgrading to Solaris 10 U9:

http://bugs.opensolaris.org/view_bug.do?bug_id=6792701
"Removing large holey file does not free space"

No further intervention on the filesystem was required, just the upgrade to U9.

--
Holger Isenberg
e-Spirit AG

> -----Original Message-----
> From: Erik.Trimble at Sun.COM [mailto:Erik.Trimble at Sun.COM]
> Sent: Thursday, March 04, 2010 3:56 PM
> To: Isenberg, Holger
> Cc: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Huge difference in reporting disk
> usage via du and zfs l
>
> That was very comprehensive, Holger. Thanks.
>
> Unfortunately, I don't see anything that would explain the discrepancy.
>
> When you do the rsync to this machine, are you simply rsync'ing a fresh
> image file (that is, creating a new file that doesn't exist, not
> updating an existing image)?
>
> -Erik
[...]
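For readers unfamiliar with the term: a "holey" (sparse) file is one in which regions were never written, for example because a write seeked past the end of the file, so its apparent length is larger than the space allocated to it. A minimal sketch of how such a file behaves, using an assumed scratch path (not from the thread):

  # create a file with ~1 GB apparent size but almost no allocated blocks
  dd if=/dev/zero of=/export/tmp/holey bs=1 count=1 seek=1073741823

  ls -l /export/tmp/holey    # length is about 1 GB
  du -h /export/tmp/holey    # only a few kilobytes are allocated

  # bug 6792701 meant space referenced by a large holey file was not
  # always freed when the file was removed, until fixed in Solaris 10 U9
  rm /export/tmp/holey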