Hi,

We have a server running b134. The server runs Xen and uses a zvol as the storage. The Xen image is running Nevada 134.

I took a snapshot last night to move the Xen image to another server:

NAME                            USED  AVAIL  REFER  MOUNTPOINT
vpool/host/snv_130             32.8G  11.3G  37.7G  -
vpool/host/snv_130@2010-03-31  3.27G      -  13.8G  -
vpool/host/snv_130@2010-08-03   436M      -  37.7G  -

It's also worth noting that vpool/host/snv_130 is a clone of at least two other snapshots.

I then did a zfs send of vpool/host/snv_130@2010-08-03 and got a 39GB file. A zfs send of vpool/host/snv_130@2010-03-31 gave a file of 15GB.

I don't understand why the file is 39GB, since df -h inside the Xen image (backed by vpool/host/snv_130) shows:

Filesystem          size  used  avail  capacity  Mounted on
rpool/ROOT/snv_130   39G   12G    22G       35%  /

It would be nice if the zfs send file were roughly the same size as the space used inside the Xen machine.

Karl
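For reference, the streams above would have been produced with something along these lines (the output file names are only placeholders; the snapshot names are from the listing):

    zfs send vpool/host/snv_130@2010-03-31 > /backup/snv_130-2010-03-31.zfs   # ~15GB file
    zfs send vpool/host/snv_130@2010-08-03 > /backup/snv_130-2010-08-03.zfs   # ~39GB file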
On 04 August, 2010 - Karl Rossing sent me these 5,4K bytes:

> I then did a zfs send of vpool/host/snv_130@2010-08-03 and got a 39GB file.
> A zfs send of vpool/host/snv_130@2010-03-31 gave a file of 15GB.

This is probably data + metadata or similar.

> I don't understand why the file is 39GB, since df -h inside the Xen
> image (backed by vpool/host/snv_130) shows:
>
> Filesystem          size  used  avail  capacity  Mounted on
> rpool/ROOT/snv_130   39G   12G    22G       35%  /
>
> It would be nice if the zfs send file were roughly the same size as
> the space used inside the Xen machine.

The filesystem on the inside might have touched all the blocks, but not informed the outer ZFS (because it can't) that some blocks are freed. One way of making it smaller is to enable compression on the outer zvol, disable compression on the inner filesystem, fill the inner filesystem with nulls (dd if=/dev/zero of=file bs=1024k) and remove that file, then remove compression (if you want). This is just a temporary thing: as the filesystem on the inside keeps being used (with copy-on-write), the outer one will grow back again.

/Tomas
--
Tomas Ögren, stric@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
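Spelled out as commands, using the dataset names from this thread (the zero-fill path inside the guest is only a placeholder), that procedure would look roughly like this:

    # On the Xen host: enable compression on the zvol backing the guest,
    # so freed/zeroed blocks stop taking space on the outside.
    zfs set compression=on vpool/host/snv_130

    # Inside the guest: make sure compression is off on the inner
    # filesystem, then fill the free space with zeros and delete the file.
    # /var/tmp/zerofile is just an example path.
    zfs set compression=off rpool/ROOT/snv_130
    dd if=/dev/zero of=/var/tmp/zerofile bs=1024k
    rm /var/tmp/zerofile

    # Back on the host, optionally drop compression again afterwards.
    zfs inherit compression vpool/host/snv_130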
Tomas,

Enabling compression and filling the inner filesystem with nulls fixed the problem.

I think I might leave compression on. I still need to do more testing on that.

Thanks!

On 08/05/10 03:15, Tomas Ögren wrote:
> One way of making it smaller is to enable compression on the outer
> zvol, disable compression on the inner filesystem, fill the inner
> filesystem with nulls (dd if=/dev/zero of=file bs=1024k) and remove
> that file, then remove compression (if you want).
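For the follow-up testing, something like the following shows what compression is buying and how big a fresh stream now is (the new snapshot name is only an example; the old snapshots still predate the zero-fill, so they won't shrink):

    # Overall compression ratio of the zvol backing the guest.
    zfs get compressratio vpool/host/snv_130

    # Take a new snapshot after the zero-fill and measure the stream
    # size without writing it to disk.
    zfs snapshot vpool/host/snv_130@2010-08-05
    zfs send vpool/host/snv_130@2010-08-05 | wc -c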