* JR Dalrymple (jr at jrssite.com) wrote:
> I'm pretty new to ZFS and OpenSolaris as a whole. I am an experienced
> storage administrator, but my storage equipment has typically been
> NetApp or EMC branded. I administer NetApp FAS2000 and FAS3000 series
> boxes hosting a VMware-only virtual infrastructure, so I'm reasonably
> well versed in storage provisioning for a virtual environment.
>
> My problem is unexpected disk usage on deduplicated datasets holding
> little more than VMDKs. I experimented with deduplication on ZFS,
> compared it to deduplication on NetApp, and found basically identical
> returns on a mix of backup data and user data. I was pretty excited to
> put some VMDKs of my own onto a system of my own, but I have been
> disappointed with the actual results :(
>
> Upon building VMs on this storage I found the data to consume an "as
> expected" OS-only amount of disk space. As time went on, the VMDKs
> filled out to consume their entire allocated disk space. I was hoping
> I could recover the lost physical disk space by using sdelete on the
> guests to zero out unused space on the disks, but according to du and
> df on the storage host that didn't happen. After zeroing the unused
> disk space I was really hoping that the VMDKs would only consume the
> amount of disk actually filled by the guest, as they did when the VMs
> were fresh. The VMDKs are properly aligned, so I don't think the
> problem lies there.
>
> I'm not sure what information to offer that might be helpful except
> the following (nfs0 is the dataset I'm working with primarily):
>
> jrdalrym at yac-stor1:~$ uname -a
> SunOS yac-stor1 5.11 snv_134 i86pc i386 i86pc Solaris
> jrdalrym at yac-stor1:~$ zpool list
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> rpool   540G   228G   312G    42%  1.26x  ONLINE  -
> jrdalrym at yac-stor1:~$ zfs list
> NAME                         USED  AVAIL  REFER  MOUNTPOINT
> rpool                        329G   238G  88.5K  /rpool
> rpool/ROOT                  7.97G   238G    19K  legacy
> rpool/ROOT/opensolaris      8.41M   238G  2.85G  /
> rpool/ROOT/opensolaris-1    43.5M   238G  3.88G  /
> rpool/ROOT/opensolaris-2    7.92G   238G  5.52G  /
> rpool/dump                  2.00G   238G  2.00G  -
> rpool/export                1.03G   238G    23K  /export
> rpool/export/home           1.03G   238G    23K  /export/home
> rpool/export/home/jrdalrym  1.03G   238G  1.03G  /export/home/jrdalrym
> rpool/iscsi                  103G   238G    21K  /rpool/iscsi
> rpool/iscsi/iscsi0           103G   301G  40.5G  -
> rpool/nfs0                   153G  87.3G   153G  /rpool/nfs0
> rpool/nfs1                  49.6G   238G  40.5G  /rpool/nfs1
> rpool/nfs2                  9.99G  50.0G  9.94G  /rpool/nfs2
> rpool/swap                  2.00G   240G   100M  -
> jrdalrym at yac-stor1:~$ zfs get all rpool/nfs0
> NAME        PROPERTY               VALUE                  SOURCE
> rpool/nfs0  type                   filesystem             -
> rpool/nfs0  creation               Wed Aug 25 20:28 2010  -
> rpool/nfs0  used                   153G                   -
> rpool/nfs0  available              87.3G                  -
> rpool/nfs0  referenced             153G                   -
> rpool/nfs0  compressratio          1.00x                  -
> rpool/nfs0  mounted                yes                    -
> rpool/nfs0  quota                  240G                   local
> rpool/nfs0  reservation            none                   default
> rpool/nfs0  recordsize             128K                   default
> rpool/nfs0  mountpoint             /rpool/nfs0            default
> rpool/nfs0  sharenfs               root=@192.168.10.0/24  local
> rpool/nfs0  checksum               on                     default
> rpool/nfs0  compression            off                    default
> rpool/nfs0  atime                  on                     default
> rpool/nfs0  devices                on                     default
> rpool/nfs0  exec                   on                     default
> rpool/nfs0  setuid                 on                     default
> rpool/nfs0  readonly               off                    default
> rpool/nfs0  zoned                  off                    default
> rpool/nfs0  snapdir                hidden                 default
> rpool/nfs0  aclmode                groupmask              default
> rpool/nfs0  aclinherit             restricted             default
> rpool/nfs0  canmount               on                     default
> rpool/nfs0  shareiscsi             off                    default
> rpool/nfs0  xattr                  on                     default
> rpool/nfs0  copies                 1                      default
> rpool/nfs0  version                4                      -
> rpool/nfs0  utf8only               off                    -
> rpool/nfs0  normalization          none                   -
> rpool/nfs0  casesensitivity        sensitive              -
> rpool/nfs0  vscan                  off                    default
> rpool/nfs0  nbmand                 off                    default
> rpool/nfs0  sharesmb               off                    default
> rpool/nfs0  refquota               none                   default
> rpool/nfs0  refreservation         none                   default
> rpool/nfs0  primarycache           all                    default
> rpool/nfs0  secondarycache         all                    default
> rpool/nfs0  usedbysnapshots        0                      -
> rpool/nfs0  usedbydataset          153G                   -
> rpool/nfs0  usedbychildren         0                      -
> rpool/nfs0  usedbyrefreservation   0                      -
> rpool/nfs0  logbias                latency                default
> rpool/nfs0  dedup                  on                     local
> rpool/nfs0  mlslabel               none                   default
> rpool/nfs0  com.sun:auto-snapshot  true                   local
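
Before anything else, it's worth seeing where that 153G is actually being
charged. Something along these lines should break the usage down (these
are the standard zfs/zdb commands; I haven't run them against your pool,
so treat it as a sketch):

  # space accounting by category: dataset, snapshots, children, refreservation
  zfs list -o space rpool/nfs0

  # dedup table summary and histogram for the whole pool
  zdb -DD rpool
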
As you can see from the `zfs get` output, you have auto-snapshots turned
on for this filesystem (com.sun:auto-snapshot=true). ZFS doesn't go back
and remove data from snapshots (that would defeat their purpose), so any
VMs that you have 'cleaned up' will still not be 'cleaned up' in any
snapshots they exist in (naturally). That's where I'd look first.
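
To confirm that, something like the following should show whether
snapshots are pinning the old blocks (the snapshot name in the last
command is just a placeholder; substitute whatever names actually show
up on your system):

  # list every snapshot under the dataset with the space it holds
  zfs list -t snapshot -r rpool/nfs0 -o name,used,referenced

  # once you're sure a snapshot is no longer needed, destroying it
  # releases the blocks it was holding
  zfs destroy rpool/nfs0@<snapname>
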
Cheers,
--
Glenn