Hi,

Can a ZFS snapshot be taken of a zvol 100GB in size? I have no problem snapshotting zvols of 1GB or 10GB.

Thanks,
Paul
Are you asking whether a 100GB zvol can be snapshotted, or whether the snapshot can grow to over 100GB? I have an x4500 with nightly snapshots being taken of a 7 terabyte filesystem (each nightly snapshot is about 20GB). I don't believe there is a functional limit to the size of the snapshot that can be created from a filesystem.

Dave
Paul wrote:
> Can a ZFS snapshot be taken of a zvol 100GB in size?

Yes, and if it doesn't work then it is a bug. Do you actually have a problem, or are you just assuming there could be one? If so, what is the basis for the (incorrect) assumption?

> I have no problem snapshotting zvols of 1GB or 10GB.

Good.

--
Darren J Moffat
I apologize for the lack of info in my previous post.

# zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
gwvm_zpool  3.35T  3.16T   190G  94%  ONLINE  -
rpool        135G  27.5G   107G  20%  ONLINE  -
...

# zfs list
...
gwvm_zpool/gwpo19stby                      100G   504M    18K  /gwvm_zpool/gwpo19stby
gwvm_zpool/gwpo19stby@Nov110852snapshot       0      -    18K  -
gwvm_zpool/gwpo19stby/po19stby-vdisk1      100G  34.6G  65.9G  -
...

Issue:

# zfs snapshot -r gwvm_zpool/gwpo19stby@Nov1308752008snapshot
cannot create snapshot 'gwvm_zpool/gwpo19stby/po19stby-vdisk1@Nov1308752008snapshot': out of space
no snapshots were created

# zfs snapshot gwvm_zpool/gwpo19stby/po19stby-vdisk1@Nov1308522008snapshot
cannot create snapshot 'gwvm_zpool/gwpo19stby/po19stby-vdisk1@Nov1308522008snapshot': out of space

The gwvm_zpool pool still has 190GB available, and a snapshot does not consume storage space at creation. When I snapshot zvols of other sizes (1GB, 10GB), I do not get the "out of space" message.
Are you sure that you don't have any refreservations?

--matt
Just to try this out, I created a 9g zpool and a 5g volume in that zpool, then used dd to write to every block of the volume. Taking a snapshot of the volume at that point attempts to reserve an additional 5g, which fails.

With 1g volumes we can see it in action:

bash-3.00# zpool create tank c0d1s0
bash-3.00# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  89.5K  9.84G     1K  /tank
bash-3.00# zfs create -V 1g tank/vol
bash-3.00# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
tank      1.00G  8.84G    18K  /tank
tank/vol     1G  9.84G    16K  -
bash-3.00# dd if=/dev/zero of=/dev/zvol/dsk/tank/vol bs=128k
write: No such device or address
8193+0 records in
8193+0 records out
bash-3.00# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
tank      1.00G  8.84G    18K  /tank
tank/vol     1G  8.87G   993M  -
bash-3.00# zfs snapshot tank/vol@snap
bash-3.00# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
tank           2.00G  7.84G    18K  /tank
tank/vol       2.00G  8.84G  1.00G  -
tank/vol@snap      0      -  1.00G  -
bash-3.00#

So that's probably what Paul is running into.

Regards,
markm
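As an aside (a sketch, not something tried in the thread, reusing the hypothetical tank/vol above): if the space guarantee is not needed, clearing the refreservation lets the snapshot succeed without reserving a matching amount of free space. Creating the volume sparse with zfs create -s -V 1g tank/vol would have the same effect from the start.

bash-3.00# zfs set refreservation=none tank/vol   # drop the zvol's space guarantee
bash-3.00# zfs snapshot tank/vol@snap2            # no extra ~1G reservation is attempted

The trade-off is that without the refreservation, writes to the zvol can fail with ENOSPC once the pool fills; the reservation exists precisely to rule that out.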
On Nov 13, 2008, at 12:37 PM, Matthew Ahrens wrote:
> Are you sure that you don't have any refreservations?

Oh, right: on pools with version >= SPA_VERSION_REFRESERVATION we add a refreservation for zvols instead of a regular reservation. So a 100G zvol will have a 100G refreservation set at creation time.

-Chris
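For example, on such a pool the two properties on Mark's hypothetical tank/vol would look roughly like this (exact output varies by release):

bash-3.00# zfs get reservation,refreservation tank/vol
NAME      PROPERTY        VALUE  SOURCE
tank/vol  reservation     none   default
tank/vol  refreservation  1G     local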
On Nov 13, 2008, at 1:45 PM, Chris Kirby wrote:
> Oh, right: on pools with version >= SPA_VERSION_REFRESERVATION
> we add a refreservation for zvols instead of a regular reservation.
>
> So a 100G zvol will have a 100G refreservation set at creation time.

Just to clarify this a bit: the reason we do this is so that snapshots don't steal space from the zvol. One consequence is that in order to take a snapshot of a zvol (or any dataset with a refreservation), there must be enough free space in the pool to accommodate the possibility that every block not already part of a snapshot (the "referenced" or REFER bytes, bounded by the size of the refreservation) might become dirty.

-Chris
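Applying that to Paul's numbers gives a back-of-the-envelope check (a hypothetical session; the properties are standard, the values are taken from Paul's earlier listing):

# zfs get -H -o value referenced,available gwvm_zpool/gwpo19stby/po19stby-vdisk1
65.9G
34.6G

The snapshot has to set aside roughly the 65.9G of referenced bytes, since all of them could become dirty while the 100G refreservation is honored, but only about 34.6G is available to the dataset, hence "out of space".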
First, I would like to thank everyone for the responses. Second, here is the output for clarification:

# zfs list
...
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
gwvm_zpool/gwpo19stby                   100G  2.49G    18K  /gwvm_zpool/gwpo19stby
gwvm_zpool/gwpo19stby/po19stby-vdisk1   100G  36.6G  65.9G  -

# zfs list -o refreservation gwvm_zpool/gwpo19stby/po19stby-vdisk1
REFRESERV
100G

Question: Although the REFRESERV is 100GB, gwvm_zpool still shows 190GB available. Am I missing the space here?

# zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
gwvm_zpool  3.35T  3.16T   190G  94%  ONLINE  -
rpool        135G  27.5G   107G  20%  ONLINE  -
...

I ran this test before proposing to buy a storage server, or to build one from a Solaris 10/JBOD system from Sun. At under $1000/TB for tier-2 storage serving iSCSI, NFS, and so on, with good I/O performance, plus all the other options, it looks good.
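Two things in that output are worth checking (a sketch of where to look, not an answer from the thread; the pool layout and the quota are assumptions): on a raidz pool, zpool list reports raw capacity including parity while zfs list reports usable space, and the parent dataset's 2.49G AVAIL is what a quota on gwvm_zpool/gwpo19stby would produce.

# zfs list -o name,used,avail gwvm_zpool
# zfs get quota gwvm_zpool/gwpo19stby

If a quota is set there, the snapshot's ~65.9G reservation has to fit under it, regardless of how much free space the pool as a whole reports.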