Edward Ned Harvey
2010-Jun-02 14:42 UTC
[zfs-discuss] cannot destroy ... dataset already exists
This is the problem:

[root@nasbackup backup-scripts]# zfs destroy storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30
cannot destroy 'storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30': dataset already exists

This is apparently a common problem. It's happened to me twice already, and now a third time. Each time it happens, it's on the "backup" server, so fortunately I have total freedom to do whatever I want, including destroying the pool.

The previous two times, I googled around, basically only found "destroy the pool" as a solution, and I destroyed the pool.

This time, I would like to dedicate a little time and resources to finding the cause of the problem, so hopefully it can be fixed for future users, including myself. This time I also found "apply updates and repeat your attempt to destroy the snapshot" ... so I applied updates and repeated, but no improvement. The OS was Solaris 10u6, but it is now fully updated. The problem persists.

I've also tried exporting and importing the pool.

Somebody on the Internet suspected the problem is somehow the aftermath of killing a "zfs send" or receive. This is distinctly possible, as I'm sure that's happened on my systems. But there is currently no send or receive being killed; any such occurrence is long since past, and even beyond reboots and such.

I do not use clones. There are no clones of this snapshot anywhere, and there never have been.

I do have other snapshots, which were incrementally received based on this one. But that shouldn't matter, right?

I have not yet called support, although we do have a support contract.

Any suggestions?

FYI:

[root@nasbackup backup-scripts]# zfs list
NAME                                                               USED  AVAIL  REFER  MOUNTPOINT
rpool                                                             19.3G   126G    34K  /rpool
rpool/ROOT                                                        16.3G   126G    21K  legacy
rpool/ROOT/nasbackup_slash                                        16.3G   126G  16.3G  /
rpool/dump                                                        1.00G   126G  1.00G  -
rpool/swap                                                        2.00G   127G  1.08G  -
storagepool                                                       1.28T  4.06T  34.4K  /storage
storagepool/nas-lyricpool                                         1.27T  4.06T  1.13T  /storage/nas-lyricpool
storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30           94.1G      -  1.07T  -
storagepool/nas-lyricpool@daily-2010-06-01-00-00-00                   0      -  1.13T  -
storagepool/nas-rpool-ROOT-nas_slash                              8.65G  4.06T  8.65G  /storage/nas-rpool-ROOT-nas_slash
storagepool/nas-rpool-ROOT-nas_slash@daily-2010-06-01-00-00-00        0      -  8.65G  -
zfs-external1                                                     1.13T   670G    24K  /zfs-external1
zfs-external1/nas-lyricpool                                       1.12T   670G  1.12T  /zfs-external1/nas-lyricpool
zfs-external1/nas-lyricpool@daily-2010-06-01-00-00-00                 0      -  1.12T  -
zfs-external1/nas-rpool-ROOT-nas_slash                            8.60G   670G  8.60G  /zfs-external1/nas-rpool-ROOT-nas_slash
zfs-external1/nas-rpool-ROOT-nas_slash@daily-2010-06-01-00-00-00      0      -  8.60G  -

And

[root@nasbackup ~]# zfs get origin
NAME                                                              PROPERTY  VALUE  SOURCE
rpool                                                             origin    -      -
rpool/ROOT                                                        origin    -      -
rpool/ROOT/nasbackup_slash                                        origin    -      -
rpool/dump                                                        origin    -      -
rpool/swap                                                        origin    -      -
storagepool                                                       origin    -      -
storagepool/nas-lyricpool                                         origin    -      -
storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30           origin    -      -
storagepool/nas-lyricpool@daily-2010-06-01-00-00-00               origin    -      -
storagepool/nas-lyricpool@daily-2010-06-02-00-00-00               origin    -      -
storagepool/nas-rpool-ROOT-nas_slash                              origin    -      -
storagepool/nas-rpool-ROOT-nas_slash@daily-2010-06-01-00-00-00    origin    -      -
storagepool/nas-rpool-ROOT-nas_slash@daily-2010-06-02-00-00-00    origin    -      -
zfs-external1                                                     origin    -      -
zfs-external1/nas-lyricpool                                       origin    -      -
zfs-external1/nas-lyricpool@daily-2010-06-01-00-00-00             origin    -      -
zfs-external1/nas-rpool-ROOT-nas_slash                            origin    -      -
zfs-external1/nas-rpool-ROOT-nas_slash@daily-2010-06-01-00-00-00  origin    -      -
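For context, the backup job in question is an incremental zfs send/receive from the primary NAS to this backup box. The thread does not show the actual script, so the following is only a sketch: the source dataset name (lyricpool), the placeholder snapshot names, and the use of ssh are assumptions, not taken from the poster's setup.

    # On the primary NAS: snapshot, then send the delta since the previous
    # backup snapshot to the backup server.
    zfs snapshot lyricpool@nasbackup-NEW
    zfs send -i lyricpool@nasbackup-OLD lyricpool@nasbackup-NEW \
        | ssh nasbackup zfs receive -F storagepool/nas-lyricpool

    # If the receive is killed partway through, the receiving pool can be
    # left holding a temporary clone (its name contains '%').  Such a clone
    # can later block 'zfs destroy' of the source snapshot with the
    # misleading "dataset already exists" error reported above.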
Is the pool mounted? I ran into this problem frequently, until I set the mountpoint to legacy (a sketch of this setup follows the quoted message below). It may be that I had to destroy the filesystem afterwards, but since I stopped mounting the backup target, everything runs smoothly. Nevertheless, I agree it would be nice to find the root cause for this.

-- Arne

Edward Ned Harvey wrote:
> This is the problem:
>
> [root@nasbackup backup-scripts]# zfs destroy
> storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30
>
> cannot destroy
> 'storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30': dataset
> already exists
>
> [...]
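A minimal sketch of the workaround Arne describes, assuming the backup target is the received dataset from the listing above. Whether "zfs receive -u" is available depends on the ZFS release in use, so treat that variant as an assumption:

    # Keep the backup target out of ZFS's automatic mounting, so nothing
    # holds it mounted between receives.
    zfs set mountpoint=legacy storagepool/nas-lyricpool

    # Or, on releases that support -u, receive without mounting at all:
    ... | zfs receive -u -F storagepool/nas-lyricpool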
Cindy Swearingen
2010-Jun-02 15:17 UTC
[zfs-discuss] cannot destroy ... dataset already exists
Hi Ned,

If you do incremental receives, this might be CR 6860996:
%temporary clones are not automatically destroyed on error

A temporary clone is created for an incremental receive and, in some cases, is not removed automatically.

Victor might be able to describe this better, but consider the following steps as further diagnosis or a workaround (a worked example follows the quoted message below):

1. Determine clone names:

   # zdb -d <poolname> | grep %

2. Destroy the identified clones:

   # zfs destroy <clone-with-%-in-the-name>

   It will complain that 'dataset does not exist', but you can check again (see step 1).

3. Destroy the snapshot(s) that could not be destroyed previously.

Thanks,

Cindy

On 06/02/10 08:42, Edward Ned Harvey wrote:
> This is the problem:
>
> [root@nasbackup backup-scripts]# zfs destroy
> storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30
>
> cannot destroy
> 'storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30': dataset
> already exists
>
> [...]
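For the archive, putting Cindy's three steps together on the system above might look roughly like the following. The clone name storagepool/nas-lyricpool/%recv and the zdb output line are illustrative, not captured from the actual machine; use whatever name zdb really reports.

    # zdb -d storagepool | grep %
    Dataset storagepool/nas-lyricpool/%recv [ZPL], ID ..., cr_txg ..., ...

    # zfs destroy storagepool/nas-lyricpool/%recv
    cannot destroy 'storagepool/nas-lyricpool/%recv': dataset does not exist

    # zdb -d storagepool | grep %
    (no output -- the leftover clone is gone)

    # zfs destroy storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30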
Edward Ned Harvey
2010-Jun-03 12:51 UTC
[zfs-discuss] cannot destroy ... dataset already exists
> From: Cindy Swearingen [mailto:cindy.swearingen at oracle.com]
>
> A temporary clone is created for an incremental receive and
> in some cases, is not removed automatically.
>
> 1. Determine clone names:
> # zdb -d <poolname> | grep %
>
> 2. Destroy identified clones:
> # zfs destroy <clone-with-%-in-the-name>
>
> 3. Destroy snapshot(s) that could not be destroyed previously

Cindy, you are amazing! :-D That was absolutely it. Thank you.

Now I'm off to update google... ;-)
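For later readers, the cleanup can be scripted. This is only a rough sketch: it assumes zdb prints the leftover clones on lines of the form "Dataset <name> [ZPL], ID ...", which should be verified against the raw output on your release before destroying anything. The pool and snapshot names come from this thread; the parsing is the uncertain part.

    #!/bin/sh
    # Sketch: destroy leftover temporary receive clones, then retry the
    # snapshot destroy that previously failed with "dataset already exists".

    POOL=storagepool
    SNAP=storagepool/nas-lyricpool@nasbackup-2010-05-14-15-56-30

    # Show anything with '%' in its name (leftover temporary clones).
    zdb -d "$POOL" | grep %

    # Destroy each such clone.  Per CR 6860996 the destroy may still print
    # "dataset does not exist" even though it worked; re-check with zdb.
    for clone in `zdb -d "$POOL" | grep % | awk '{print $2}'`; do
        zfs destroy "$clone"
    done

    # Finally, retry the snapshot destroy.
    zfs destroy "$SNAP"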