We're getting the notorious "cannot destroy ... dataset already exists". I've seen a number of reports of this, but none of the reports seem to get any response. Fortunately this is a backup system, so I can recreate the pool, but it's going to take me several days to get all the data back. Is there any known workaround?
Incidentally, this is on Solaris 10, but I've seen identical reports from OpenSolaris.
On Mar 31, 2010, at 7:51 AM, Charles Hedrick wrote:

> We're getting the notorious "cannot destroy ... dataset already exists". I've seen a number of reports of this, but none of the reports seem to get any response. Fortunately this is a backup system, so I can recreate the pool, but it's going to take me several days to get all the data back. Is there any known workaround?

Charles,

Can you 'zpool export' and 'zpool import' the pool, and then try destroying the snapshot again?

-Chris
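A minimal sketch of the suggested sequence, using the pool and snapshot names that appear later in this thread; make sure nothing else (cluster resources, backup jobs) is using the pool before exporting:

  # Export and re-import the pool to clear any stale receive or hold state
  zpool export OIRT_BAK
  zpool import OIRT_BAK

  # Then retry the destroy
  zfs destroy -r OIRT_BAK/backup_bad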
On 31-3-2010 14:52, Charles Hedrick wrote:

> Incidentally, this is on Solaris 10, but I've seen identical reports from OpenSolaris.

Probably you need to delete any existing view over the LUN you want to destroy. Example:

  # stmfadm list-lu
  LU Name: 600144F0B673400000004BB31F060001

  # stmfadm list-view -l 600144F0B673400000004BB323FF0003
  View Entry: 0
      Host group   : TEST
      Target group : All
      LUN          : 1

  # stmfadm remove-view -l 600144F0B673400000004BB323FF0003

After this, I think you can zfs destroy the ZFS volume.

Bruno
On 04/ 1/10 01:51 AM, Charles Hedrick wrote:

> We're getting the notorious "cannot destroy ... dataset already exists". I've seen a number of reports of this, but none of the reports seem to get any response. Fortunately this is a backup system, so I can recreate the pool, but it's going to take me several days to get all the data back. Is there any known workaround?

Exactly what commands are you running and what errors do you see?

-- Ian.
# zfs destroy -r OIRT_BAK/backup_bad
cannot destroy 'OIRT_BAK/backup_bad@annex-2010-03-23-07:04:04-bad': dataset already exists

No, there are no clones.
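Two things worth double-checking before giving up on the snapshot are clones and user holds. A minimal sketch, with the caveat that 'zfs holds' only exists on newer ZFS versions and may not be available on older Solaris 10 updates:

  # Look for clones: any filesystem or volume whose 'origin' points at the
  # stuck snapshot is a clone of it
  zfs list -r -t filesystem,volume -o name,origin OIRT_BAK

  # If your ZFS version supports user holds, check for holds on the snapshot
  zfs holds OIRT_BAK/backup_bad@annex-2010-03-23-07:04:04-bad

An interrupted 'zfs receive' can also leave partial state behind; the export/import suggested above is one way to clear some of it.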
So we tried recreating the pool and sending the data again.

1) Compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties.
2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.
3) This is Solaris Cluster. We tried forcing a failover. The pool mounted on the other server without dismounting on the first. zpool list showed it mounted on both machines. zpool iostat showed I/O actually occurring on both systems.

Altogether this does not give me a good feeling about ZFS. I'm hoping the problem is just with receive and Cluster, and that it works properly on a single system, because I'm running a critical database on ZFS on another system.
On 04/ 1/10 02:01 PM, Charles Hedrick wrote:

> So we tried recreating the pool and sending the data again.
>
> 1) Compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties.
> 2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.

How long did you wait and how much data had been sent? Killing a receive can take a (long!) while if it has to free all the data already written.

-- Ian.
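If you want to see whether the aborted receive is still making progress freeing blocks, watching the pool can help; a minimal sketch using the pool name from earlier in the thread:

  # Watch allocated space shrink as the partial receive is torn down
  zpool list OIRT_BAK

  # Watch ongoing I/O on the pool at 5-second intervals
  zpool iostat OIRT_BAK 5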
Ah, I hadn't thought about that. That may be what was happening. Thanks.
So that eliminates one of my concerns. However, the other one is still an issue. Presumably Solaris Cluster shouldn't import a pool that's still active on the other system. We'll be looking more carefully into that.
On 04/ 1/10 02:01 PM, Charles Hedrick wrote:

> So we tried recreating the pool and sending the data again.
>
> 1) Compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties.

Was compression explicitly set on the root filesystem of your set? I don't think compression will be on if the root of a sent filesystem tree inherits the property from its parent. I normally set compression on the pool, then explicitly turn it off on any filesystems where it isn't appropriate.

-- Ian.
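A quick way to see whether compression is set locally or merely inherited, and to force it on the destination; a minimal sketch using the pool name from earlier in the thread (the OIRT_BAK/backup dataset name is an example, not from the thread):

  # Show compression and where each dataset gets it from; the SOURCE column
  # shows local, inherited, or default (newer releases also show 'received')
  zfs get -r compression OIRT_BAK

  # If the source pool relies on inheritance, set it explicitly at the top
  # of the tree you send, or on the destination after the receive:
  zfs set compression=on OIRT_BAK/backup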
On Mar 31, 2010, at 7:57 PM, Charles Hedrick wrote:

> So that eliminates one of my concerns. However, the other one is still an issue. Presumably Solaris Cluster shouldn't import a pool that's still active on the other system. We'll be looking more carefully into that.

Older releases of Solaris Cluster used SCSI reservations to help prevent such things. However, that is now tunable :-( Did you tune it?

-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010  http://nexenta-vegas.eventbrite.com
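To check how fencing is configured, something like the following may work on Solaris Cluster 3.2 or later; the property and command names here are an assumption from memory, so verify against the cluster(1CL) and cldevice(1CL) man pages:

  # Show global cluster properties; look for the fencing policy
  # (assumed to be the global_fencing property on SC 3.2+)
  cluster show -t global

  # Show per-device settings, including any per-device fencing override
  cldevice show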
On Wed, 31 Mar 2010, Charles Hedrick wrote:

> 3) This is Solaris Cluster. We tried forcing a failover. The pool
> mounted on the other server without dismounting on the first. zpool
> list showed it mounted on both machines. zpool iostat showed I/O
> actually occurring on both systems.

This is a good way to permanently toast your whole pool. It is so scary that I would hesitate to build such a system without something mechanical (some sort of a switch) enforcing that only one system has access to the pool at once. Even then, it should be necessary for the pool to be explicitly imported on the standby system.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
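Outside of cluster software, zpool import itself already guards against this to a degree: it records the host that last used the pool and refuses to import a pool that looks active elsewhere unless forced. A minimal sketch of a manual failover that respects that check, using the pool name from the thread:

  # On the node giving up the pool:
  zpool export OIRT_BAK

  # On the standby node; without -f this will refuse if the pool still
  # appears to be in use by another system:
  zpool import OIRT_BAK

  # 'zpool import -f' overrides that check and is exactly how a pool ends
  # up imported on two nodes at once, so it should be a last resort.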
On 01/04/2010 15:24, Richard Elling wrote:

> On Mar 31, 2010, at 7:57 PM, Charles Hedrick wrote:
>
>> So that eliminates one of my concerns. However, the other one is still an issue. Presumably Solaris Cluster shouldn't import a pool that's still active on the other system. We'll be looking more carefully into that.
>
> Older releases of Solaris Cluster used SCSI reservations to help
> prevent such things. However, that is now tunable :-( Did you tune it?

SCSI reservations are used only if a node has left the cluster. So, for example, in a two-node cluster where both nodes are members, both of them have full access to the shared storage, and you can force a zpool import on both nodes at the same time. When you think about it, you actually need such behavior for RAC to work on raw devices or real cluster volumes or filesystems, etc.

--
Robert Milkowski
http://milek.blogspot.com
On 01/04/2010 02:01, Charles Hedrick wrote:

> So we tried recreating the pool and sending the data again.
>
> 1) Compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties.
> 2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.
> 3) This is Solaris Cluster. We tried forcing a failover. The pool mounted on the other server without dismounting on the first. zpool list showed it mounted on both machines. zpool iostat showed I/O actually occurring on both systems.
>
> Altogether this does not give me a good feeling about ZFS. I'm hoping the problem is just with receive and Cluster, and that it works properly on a single system, because I'm running a critical database on ZFS on another system.

1. You shouldn't allow a pool to be imported on more than one node at a time; if you do, you will probably lose the entire pool.

2. If you have a pool under cluster control and you want to import it manually, make sure you do it in this order (a sketch of the sequence follows below):

   - disable the HAStoragePlus resource which manages the pool
   - suspend the resource group so the cluster won't start the storage resource in any event
   - manually import the pool and do whatever you need to do with it; however, to be on the safe side, import it with the -R / option so that if your node reboots for some reason the pool won't be automatically imported
   - after you are done, make sure you export the pool, resume the resource group, and enable the storage resource

The other approach is to keep the pool under cluster management but suspend the resource group so there won't be any unexpected failovers (but it really depends on the circumstances and what you are trying to do).

--
Robert Milkowski
http://milek.blogspot.com
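A minimal sketch of that sequence using the Solaris Cluster 3.2 command set; the resource and resource-group names are made-up examples, and the exact commands may differ on your release:

  # 1. Stop the cluster from managing the pool (names are examples only)
  clresource disable oirt-hasp-rs       # the HAStoragePlus resource
  clresourcegroup suspend oirt-rg       # no failovers/restarts while suspended

  # 2. Import the pool by hand; -R / means the import is not recorded for
  #    automatic re-import, so a reboot won't bring the pool back up
  zpool import -R / OIRT_BAK

  # ... do the maintenance work ...

  # 3. Hand the pool back to the cluster
  zpool export OIRT_BAK
  clresourcegroup resume oirt-rg
  clresource enable oirt-hasp-rs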