I am in the process of upgrading from FreeBSD-8.1 with ZFSv14 to FreeBSD-8.2 with ZFSv15 and, following a crash, have run into a problem with ZFS claiming a snapshot or clone exists that I can't find.

I was transferring a set of snapshots from my primary desktop to a backup host (both ZFSv14) using:

  zfs send -I zroot/home@20110210bu -R zroot/home@20110317bu | \
    ssh backup_host zfs recv -vd zroot

and whilst that was in progress, I did a 'df -k' on backup_host. At this point, both the df and the zfs recv wedged unkillably. "zfs list" showed that the last snapshot on the destination system was zroot/home@20110309, so I did a rollback to it (which reported no error) and ran:

  zfs send -I zroot/home@20110309 -R zroot/home@20110317bu | \
    ssh backup_host zfs recv -vd zroot

which reported:

  receiving incremental stream of zroot/home@20110310 into zroot/home@20110310
  cannot restore to zroot/home@20110310: destination already exists
  warning: cannot send 'zroot/home@20110310': Broken pipe

I cannot find anything by that name (or any snapshots later than zroot/home@20110309, or any clones) and cannot destroy zroot/home@20110309:

  # zfs rollback zroot/home@20110309
  # zfs destroy zroot/home@20110309
  cannot destroy 'zroot/home@20110309': dataset already exists
  # zfs destroy -r zroot/home@20110309
  cannot destroy 'zroot/home@20110309': snapshot is cloned
  no snapshots destroyed
  # zfs destroy -R zroot/home@20110309
  cannot destroy 'zroot/home@20110309': snapshot is cloned
  no snapshots destroyed
  # zfs destroy -frR zroot/home@20110309
  cannot destroy 'zroot/home@20110309': snapshot is cloned
  no snapshots destroyed
  # zfs list -t all | grep home@20110310
  # zfs get all | grep origin
  # zfs get all | grep home@20110310
  #

I have tried rebooting, upgrading the pool from v14 to v15, and export/import, without success. Does anyone have any other suggestions?
"zpool history -i" looks like:

  2011-03-17.08:02:57 zfs rollback zroot/home@20110210bu
  2011-03-17.08:02:59 zfs recv -vd zroot
  2011-03-17.08:02:59 [internal replay_inc_sync txg:872817696] dataset = 973
  2011-03-17.08:02:59 [internal reservation set txg:872817697] 0 dataset = 469
  ...
  2011-03-17.08:09:41 [internal snapshot txg:872817974] dataset = 1203
  2011-03-17.08:09:42 [internal replay_inc_sync txg:872817975] dataset = 1208
  2011-03-17.08:09:42 [internal reservation set txg:872817976] 0 dataset = 469
  2011-03-17.08:09:42 [internal property set txg:872817977] compression=10 dataset = 469
  2011-03-17.08:09:42 [internal property set txg:872817977] mountpoint=/home dataset = 469
  2011-03-17.08:09:50 [internal destroy_begin_sync txg:872817980] dataset = 1208
  2011-03-17.08:09:51 [internal destroy txg:872817983] dataset = 1208
  2011-03-17.08:09:51 [internal reservation set txg:872817983] 0 dataset = 0
  2011-03-17.08:09:51 [internal snapshot txg:872817984] dataset = 1212
  2011-03-17.08:09:52 [internal replay_inc_sync txg:872817985] dataset = 1217
  2011-03-17.08:09:52 [internal reservation set txg:872817986] 0 dataset = 469
  2011-03-17.08:09:52 [internal property set txg:872817987] compression=10 dataset = 469
  2011-03-17.08:09:52 [internal property set txg:872817987] mountpoint=/home dataset = 469
  <system wedged here>
  2011-03-17.08:35:01 [internal rollback txg:872818038] dataset = 469
  2011-03-17.08:35:01 zfs rollback zroot/home@20110309
  2011-03-17.08:35:14 zfs recv -vd zroot
  2011-03-17.08:36:37 [internal pool scrub txg:872818059] func=1 mintxg=0 maxtxg=872818059
  2011-03-17.08:36:41 zpool scrub zroot
  2011-03-17.09:17:27 [internal pool scrub done txg:872818513] complete=1
  2011-03-17.09:19:44 [internal rollback txg:872818542] dataset = 469
  2011-03-17.09:19:45 zfs rollback zroot/home@20110309
  2011-03-17.10:51:38 [internal rollback txg:872819603] dataset = 469
  2011-03-17.10:51:39 zfs rollback zroot/home@20110309
  2011-03-17.10:54:11 zpool upgrade zroot
  2011-03-17.10:59:12 [internal rollback txg:872819688] dataset = 469
  2011-03-17.10:59:12 zfs rollback zroot/home@20110309
  2011-03-17.11:16:38 [internal rollback txg:872819895] dataset = 469
  2011-03-17.11:16:39 zfs rollback zroot/home@20110309
  2011-03-17.11:16:54 zpool export zroot
  2011-03-17.11:17:31 zpool import zroot
  2011-03-17.11:30:13 [internal rollback txg:872819992] dataset = 469
  2011-03-17.11:30:13 zfs rollback zroot/home@20110309
  2011-03-17.12:01:02 zfs recv -vd zroot
  2011-03-17.12:03:57 [internal rollback txg:872820399] dataset = 469
  2011-03-17.12:03:57 zfs rollback zroot/home@20110309

-- 
Peter Jeremy
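Before retrying the incremental send above, it can help to diff the snapshot lists on the two hosts to see exactly what the backup side is missing. A minimal sketch, using sample names from this thread standing in for real `zfs list -H -t snapshot -o name -r zroot/home` output (on a live system you would capture those lists from each host, e.g. via ssh):

```shell
# Sample snapshot lists (assumed names; in practice these come from
# `zfs list -H -t snapshot -o name -r zroot/home` on each host).
printf '%s\n' 'zroot/home@20110210bu' 'zroot/home@20110309' \
              'zroot/home@20110310'   'zroot/home@20110317bu' \
  | sort > local_snaps.txt
printf '%s\n' 'zroot/home@20110210bu' 'zroot/home@20110309' \
  | sort > remote_snaps.txt

# Lines only in the local list = snapshots the backup host still needs.
comm -23 local_snaps.txt remote_snaps.txt
```

This prints zroot/home@20110310 and zroot/home@20110317bu, i.e. the two snapshots the interrupted recv never completed.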
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-
> bounces@opensolaris.org] On Behalf Of Peter Jeremy
>
> I am in the process of upgrading from FreeBSD-8.1 with ZFSv14 to
> FreeBSD-8.2 with ZFSv15 and, following a crash, have run into a
> problem with ZFS claiming a snapshot or clone exists that I can't
> find.

This was a bug that was fixed somewhere between zpool v15 and v22. If there is an incremental send in progress and the system crashes for some reason, then you have this "invisible" clone lying around somewhere... It prevents any further incrementals from succeeding... It hogs disk space... It's difficult to find and destroy unless you know what you're looking for.

To find it, run zdb -d and search for something with a %. Something like:

  zdb -d tank | grep %

And then you can zfs destroy the thing.

P.S. Every time I did this, the zfs destroy would complete with some sort of error message, but then if you searched for the thing again, you would see that it actually completed successfully.

P.P.S. If your primary goal is to use ZFS, you would probably be better off switching to nexenta or openindiana or solaris 11 express, because they all support ZFS much better than freebsd. If instead, your primary goal is to do something free-bsd-ish, and it's just coincidence that an old version of ZFS happens to be the best filesystem available in freebsd, so be it. Freebsd is good in its own ways... Even an old version of ZFS is better than EXT3 or UFS. ;-)
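The `%` that this grep looks for comes from the temporary clone `zfs recv` creates while applying an incremental stream; when a recv is interrupted, that hidden clone can be left behind. A sketch of the search step, using made-up dataset names as a stand-in for real `zdb -d zroot` output (on a live system you would pipe `zdb -d zroot` itself into grep):

```shell
# Hypothetical sample of `zdb -d zroot` output; real IDs and names differ.
cat > zdb_output.txt <<'EOF'
Dataset mos [META], ID 0, cr_txg 4
Dataset zroot/home@20110309 [ZPL], ID 469, cr_txg 872817974
Dataset zroot/home/%recv [ZPL], ID 1217, cr_txg 872817985
Dataset zroot/home [ZPL], ID 21, cr_txg 50
EOF

# The leftover receive clone is the entry containing '%':
grep '%' zdb_output.txt

# Once identified, it would be removed with (note the quoting, since
# '%' names need protecting from the shell in some contexts):
#   zfs destroy 'zroot/home/%recv'
```

The `%recv` name here is an assumption for illustration; whatever name your own `zdb -d | grep %` turns up is the one to destroy.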
On Wed, Mar 16, 2011 at 7:23 PM, Edward Ned Harvey
<opensolarisisdeadlongliveopensolaris@nedharvey.com> wrote:
> P.S. If your primary goal is to use ZFS, you would probably be better
> switching to nexenta or openindiana or solaris 11 express, because they all
> support ZFS much better than freebsd. If instead, your primary goal is to
> do something free-bsd-ish, and it's just coincidence that an old version of
> ZFS happens to be the best filesystem available in freebsd, so be it.
> Freebsd is good in its own ways... Even an old version of ZFS is better
> than EXT3 or UFS. ;-)

FreeBSD 9-CURRENT supports ZFSv28.

And there are patches available for testing ZFSv28 on FreeBSD 8-STABLE.

Let's keep the OS pot shots to a minimum, eh?

-- 
Freddie Cash
fjwcash@gmail.com
> From: Freddie Cash [mailto:fjwcash@gmail.com]
>
> On Wed, Mar 16, 2011 at 7:23 PM, Edward Ned Harvey
> <opensolarisisdeadlongliveopensolaris@nedharvey.com> wrote:
> > P.S. If your primary goal is to use ZFS, you would probably be better
> > switching to nexenta or openindiana or solaris 11 express, because they all
> > support ZFS much better than freebsd. If instead, your primary goal is to
> > do something free-bsd-ish, and it's just coincidence that an old version of
> > ZFS happens to be the best filesystem available in freebsd, so be it.
> > Freebsd is good in its own ways... Even an old version of ZFS is better
> > than EXT3 or UFS. ;-)
>
> FreeBSD 9-CURRENT supports ZFSv28.
>
> And there are patches available for testing ZFSv28 on FreeBSD 8-STABLE.
>
> Let's keep the OS pot shots to a minimum, eh?

There was no OS pot-shot. There was, however, an EXT3 pot-shot, which I stand by.
On 2011-Mar-17 10:23:01 +0800, Edward Ned Harvey
<opensolarisisdeadlongliveopensolaris@nedharvey.com> wrote:
> To find it, run zdb -d and search for something with a %.
> Something like: zdb -d tank | grep %
>
> And then you can zfs destroy the thing.

Thanks, that worked.

> P.S. Every time I did this, the zfs destroy would complete with some
> sort of error message, but then if you searched for the thing again,
> you would see that it actually completed successfully.

Likewise, I had 'zfs destroy' whinge, but the offending clone was gone.

> P.P.S. If your primary goal is to use ZFS, you would probably be better
> off switching to nexenta or openindiana or solaris 11 express, because
> they all support ZFS much better than freebsd.

I'm primarily interested in running FreeBSD and will be upgrading to ZFSv28 once it's been shaken out a bit longer.

-- 
Peter Jeremy