Jim Sloey
2010-Oct-07 17:24 UTC
[zfs-discuss] Help - Deleting files from a large pool results in less free space!
I have a 20 TB pool on a mount point that is made up of 42 disks from an EMC SAN. We were running out of space and down to 40 GB left (loading 8 GB/day), and we have not received additional disks for our SAN. Using df -h results in:

Filesystem   size   used   avail  capacity  Mounted on
pool1         20T    20T    55G     100%    /pool1
pool2        9.1T   8.0T   497G      95%    /pool2

The idea was to temporarily move a group of big directories to another ZFS pool that had space available and link from the old location to the new:

cp -r /pool1/000 /pool2/
mv /pool1/000 /pool1/000d
ln -s /pool2/000 /pool1/000
rm -rf /pool1/000

Using df -h after the relocation results in:

Filesystem   size   used   avail  capacity  Mounted on
pool1         20T    19T    15G     100%    /pool1
pool2        9.1T   8.3T   221G      98%    /pool2

Using zpool list says:

NAME    SIZE    USED    AVAIL  CAP
pool1   19.9T   19.6T   333G   98%
pool2   9.25T   8.89T   369G   96%

Using zfs get all pool1 produces:

NAME   PROPERTY            VALUE                  SOURCE
pool1  type                filesystem             -
pool1  creation            Tue Dec 18 11:37 2007  -
pool1  used                19.6T                  -
pool1  available           15.3G                  -
pool1  referenced          19.5T                  -
pool1  compressratio       1.00x                  -
pool1  mounted             yes                    -
pool1  quota               none                   default
pool1  reservation         none                   default
pool1  recordsize          128K                   default
pool1  mountpoint          /pool1                 default
pool1  sharenfs            on                     local
pool1  checksum            on                     default
pool1  compression         off                    default
pool1  atime               on                     default
pool1  devices             on                     default
pool1  exec                on                     default
pool1  setuid              on                     default
pool1  readonly            off                    default
pool1  zoned               off                    default
pool1  snapdir             hidden                 default
pool1  aclmode             groupmask              default
pool1  aclinherit          secure                 default
pool1  canmount            on                     default
pool1  shareiscsi          off                    default
pool1  xattr               on                     default
pool1  replication:locked  true                   local

Has anyone experienced this or know where to look for a solution to recovering space?
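A quick first check in a situation like this is to see how much of pool1's space is pinned by snapshots rather than by the live file system. A minimal sketch, assuming a ZFS release recent enough to support the extended space accounting (pool name taken from the post):

    zfs list -o space pool1          # splits USED into USEDSNAP, USEDDS, USEDCHILD, ...
    zfs list -t snapshot -r pool1    # per-snapshot accounting
    zpool list pool1                 # pool-level view, including redundancy overhead

If USEDSNAP is large, deleting files from the live file system will not return space until the snapshots referencing those blocks are destroyed.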
Remco Lengers
2010-Oct-07 17:32 UTC
[zfs-discuss] Help - Deleting files from a large pool results in less free space!
Any snapshots?

    zfs list -t snapshot

..Remco

On 10/7/10 7:24 PM, Jim Sloey wrote:
> I have a 20 TB pool on a mount point that is made up of 42 disks from an EMC SAN. We were running out of space and down to 40 GB left (loading 8 GB/day), and we have not received additional disks for our SAN. [...]
> Has anyone experienced this or know where to look for a solution to recovering space?
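A slightly more targeted variant of that check, as a sketch (pool name from the post; assumes a zfs list that supports recursive snapshot listing and sorting), restricts the output to pool1 and sorts by space held:

    zfs list -t snapshot -r pool1 -o name,used,referenced -s used

The snapshots holding the most unique data appear at the bottom of the list.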
taemun
2010-Oct-07 17:35 UTC
[zfs-discuss] Help - Deleting files from a large pool results in less free space!
Forgive me, but isn't this incorrect:
---
mv /pool1/000 /pool1/000d
---
rm -rf /pool1/000

Shouldn't that last line be rm -rf /pool1/000d ?

On 8 October 2010 04:32, Remco Lengers <remco@lengers.com> wrote:
> Any snapshots?
>
>     zfs list -t snapshot
>
> ..Remco
>
> On 10/7/10 7:24 PM, Jim Sloey wrote:
> > The idea was to temporarily move a group of big directories to another ZFS pool that had space available and link from the old location to the new:
> > cp -r /pool1/000 /pool2/
> > mv /pool1/000 /pool1/000d
> > ln -s /pool2/000 /pool1/000
> > rm -rf /pool1/000
> > [...]
Jim Sloey
2010-Oct-07 17:55 UTC
[zfs-discuss] Help - Deleting files from a large pool results in less free space!
Yes, you're correct. There was a typo when I copied to the forum.
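For the record, the intended sequence with the corrected last step and plain ASCII dashes would have looked like this (paths as given in the original post):

    cp -r /pool1/000 /pool2/      # copy the directory tree to the pool with free space
    mv /pool1/000 /pool1/000d     # set the original aside under a new name
    ln -s /pool2/000 /pool1/000   # point the old path at the new location
    rm -rf /pool1/000d            # remove the renamed original, not the symlink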
Jim Sloey
2010-Oct-07 17:59 UTC
[zfs-discuss] Help - Deleting files from a large pool results in less free space!
Yes. We run a snapshot in cron to a disaster recovery site.

NAME                      USED   AVAIL  REFER  MOUNTPOINT
pool1@20100930-22:20:00   13.2M  -      19.5T  -
pool1@20101001-01:20:00   4.35M  -      19.5T  -
pool1@20101001-04:20:00   0      -      19.5T  -
pool1@20101001-07:20:00   0      -      19.5T  -
pool1@20101001-10:20:00   1.87M  -      19.5T  -
pool1@20101001-13:20:00   2.93M  -      19.5T  -
pool1@20101001-16:20:00   4.68M  -      19.5T  -
pool1@20101001-19:20:00   5.47M  -      19.5T  -
pool1@20101001-22:20:00   3.33M  -      19.5T  -
pool1@20101002-01:20:00   4.98M  -      19.5T  -
pool1@20101002-04:20:00   298K   -      19.5T  -
pool1@20101002-07:20:00   138K   -      19.5T  -
pool1@20101002-10:20:00   1.14M  -      19.5T  -
pool1@20101002-13:20:00   228K   -      19.5T  -
pool1@20101002-16:20:00   0      -      19.5T  -
pool1@20101002-19:20:00   0      -      19.5T  -
pool1@20101002-22:20:01   110K   -      19.5T  -
pool1@20101003-01:20:00   1.39M  -      19.5T  -
pool1@20101003-04:20:00   3.67M  -      19.5T  -
pool1@20101003-07:20:00   540K   -      19.5T  -
pool1@20101003-10:20:00   551K   -      19.5T  -
pool1@20101003-13:20:00   640K   -      19.5T  -
pool1@20101003-16:20:00   1.72M  -      19.5T  -
pool1@20101003-19:20:00   542K   -      19.5T  -
pool1@20101003-22:20:00   0      -      19.5T  -
pool1@20101004-01:20:00   0      -      19.5T  -
pool1@20101004-04:20:01   102K   -      19.5T  -
pool1@20101004-07:20:00   501K   -      19.5T  -
pool1@20101004-10:20:00   2.54M  -      19.5T  -
pool1@20101004-13:20:00   5.24M  -      19.5T  -
pool1@20101004-16:20:00   4.78M  -      19.5T  -
pool1@20101004-19:20:00   3.86M  -      19.5T  -
pool1@20101004-22:20:00   4.37M  -      19.5T  -
pool1@20101005-01:20:00   7.18M  -      19.5T  -
pool1@20101005-04:20:00   0      -      19.5T  -
pool1@20101005-07:20:00   0      -      19.5T  -
pool1@20101005-10:20:00   2.89M  -      19.5T  -
pool1@20101005-13:20:00   8.42M  -      19.5T  -
pool1@20101005-16:20:00   12.0M  -      19.5T  -
pool1@20101005-19:20:00   4.75M  -      19.5T  -
pool1@20101005-22:20:00   2.49M  -      19.5T  -
pool1@20101006-01:20:00   3.06M  -      19.5T  -
pool1@20101006-04:20:00   244K   -      19.5T  -
pool1@20101006-07:20:00   182K   -      19.5T  -
pool1@20101006-10:20:00   3.16M  -      19.5T  -
pool1@20101006-13:20:00   177M   -      19.5T  -
pool1@20101006-16:20:00   396M   -      19.5T  -
pool1@20101006-22:20:00   282M   -      19.5T  -
pool1@20101007-10:20:00   187M   -      19.5T  -
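Worth noting: a snapshot's USED column counts only the blocks unique to that snapshot, so data referenced by many of these snapshots (such as the directories just deleted from the live file system) does not show up in any single snapshot's USED. To see the total space held by all snapshots together, something like the following should work (a sketch; the usedbysnapshots property requires a reasonably recent ZFS version and may be absent on an old, never-upgraded file system):

    # Space that would be returned if every snapshot of pool1 were destroyed
    zfs get usedbysnapshots pool1

    # The same figure appears in the USEDSNAP column of
    zfs list -o space pool1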
Jim Sloey
2010-Oct-07 18:40 UTC
[zfs-discuss] Help - Deleting files from a large pool results in less free space!
One of us found the following:

The presence of snapshots can cause some unexpected behavior when you attempt to free space. Typically, given appropriate permissions, you can remove a file from a full file system, and this action results in more space becoming available in the file system. However, if the file to be removed exists in a snapshot of the file system, then no space is gained from the file deletion. The blocks used by the file continue to be referenced from the snapshot. As a result, the file deletion can consume more disk space, because a new version of the directory needs to be created to reflect the new state of the namespace. This behavior means that you can get an unexpected ENOSPC or EDQUOT when attempting to remove a file.

Since we are using snapshots to a remote system, what will be the impact of destroying the snapshots? Since the files we moved are some of the oldest, will we have to start replication to the remote site over again from the beginning?
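That documented behavior is easy to reproduce on a scratch pool. A self-contained demonstration sketch (all names here are hypothetical, and it needs a few hundred MB of scratch space in /var/tmp):

    mkfile 256m /var/tmp/demo.img            # file-backed vdev for a throwaway pool
    zpool create demopool /var/tmp/demo.img
    mkfile 64m /demopool/bigfile             # data for a snapshot to hold on to
    zfs snapshot demopool@keep
    rm /demopool/bigfile                     # the file is gone from the namespace...
    zfs list -o name,used,available,referenced demopool
    zfs list -t snapshot -r demopool         # ...but USED barely drops: the blocks
                                             # are still referenced by demopool@keep
    zfs destroy demopool@keep                # only now is the space actually freed
    zfs list -o name,used,available demopool
    zpool destroy demopool && rm /var/tmp/demo.img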
Richard Elling
2010-Oct-10 05:07 UTC
[zfs-discuss] Help - Deleting files from a large pool results in less free space!
On Oct 7, 2010, at 11:40 AM, Jim Sloey wrote:
> One of us found the following:
>
> The presence of snapshots can cause some unexpected behavior when you attempt to free space. Typically, given appropriate permissions, you can remove a file from a full file system, and this action results in more space becoming available in the file system. However, if the file to be removed exists in a snapshot of the file system, then no space is gained from the file deletion. The blocks used by the file continue to be referenced from the snapshot.

Yes, as designed.

> As a result, the file deletion can consume more disk space, because a new version of the directory needs to be created to reflect the new state of the namespace. This behavior means that you can get an unexpected ENOSPC or EDQUOT when attempting to remove a file.

Yes, as designed.

> Since we are using snapshots to a remote system, what will be the impact of destroying the snapshots? Since the files we moved are some of the oldest, will we have to start replication to the remote site over again from the beginning?

In most cases where we implement this, the remote (backup) system will have more snapshots than the production system. All you really need is a single, common snapshot between the two to restart an incremental send/receive.
 -- richard

--
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
ZFS and performance consulting
http://www.RichardElling.com
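Concretely, restarting the incremental stream from a common snapshot might look like this. A sketch only: the DR hostname, the destination dataset, and the resync snapshot name are hypothetical, and the incremental base is assumed to be the most recent snapshot that still exists on both sides.

    # Find snapshots that still exist on both sides
    zfs list -t snapshot -r pool1 -o name
    ssh dr-host zfs list -t snapshot -r backup/pool1 -o name

    # Take a fresh snapshot and send only the changes since the common one
    zfs snapshot pool1@resync
    zfs send -i pool1@20101007-10:20:00 pool1@resync | \
        ssh dr-host zfs receive -F backup/pool1

Destroying the older snapshots on the production side frees the space held by the deleted directories without forcing a full resend, as long as the snapshot used as the incremental base is kept until the next send completes.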