John Balestrini
2010-May-15 18:52 UTC
[zfs-discuss] Unable to Destroy One Particular Snapshot
Howdy All,

I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh); existing shells will occasionally provide some interaction, but at an obscenely slow rate, if at all. Also, since this snapshot is unusually large, I've allowed the server to chew on it overnight without success.

It's set up as a 3-disk raidz configuration. Here's an abbreviated listing (the pool1/Staff@20100421.0800 snapshot is the one that is misbehaving):

basie@~$ zfs list -t filesystem,snapshot -r pool1/Staff
NAME                                                  USED  AVAIL  REFER  MOUNTPOINT
pool1/Staff                                           741G   760G  44.6K  /export/Staff
pool1/Staff@20100421.0800                             366G      -   366G  -
pool1/Staff@zfs-auto-snap:daily-2010-05-02-21:57         0      -  44.6K  -
pool1/Staff@zfs-auto-snap:monthly-2010-05-02-21:58       0      -  44.6K  -
pool1/Staff@zfs-auto-snap:daily-2010-05-03-00:00         0      -  44.6K  -
.
.  (Lines Removed)
.
pool1/Staff@Archive_1.20100514.2218                      0      -  44.6K  -
pool1/Staff@zfs-auto-snap:daily-2010-05-15-00:00         0      -  44.6K  -

Has anyone seen this type of problem? Any ideas?

Thanks,

-- John
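For anyone hitting the same thing, a minimal sketch of checks worth running before (or alongside) retrying such a destroy, assuming the pool and dataset names from the listing above; exact output columns vary by build:

zpool status -v pool1                                          # make sure the pool itself is healthy first
zfs list -t snapshot -o name,used,referenced -r pool1/Staff    # how big is the stubborn snapshot relative to its neighbours?
zfs destroy pool1/Staff@20100421.0800                          # retry the destroy...
zpool iostat pool1 30                                          # ...and watch I/O from a second shell to see if it is making progress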
Roy Sigurd Karlsbakk
2010-May-15 19:34 UTC
[zfs-discuss] Unable to Destroy One Particular Snapshot
> Has anyone seen this type of problem? Any ideas?

Yeah, I've seen the same. Tried to remove a dataset, and it hung on one snapshot. This was a test server, so I ended up recreating the pool instead of trying to report a bug about it (hours of time saved, since the debugging features for ZFS all involve serial consoles, which did not work for me, and debugging takes a lot of time).

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
On 05/16/10 06:52 AM, John Balestrini wrote:
> Howdy All,
>
> I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh); existing shells will occasionally provide some interaction, but at an obscenely slow rate, if at all. Also, since this snapshot is unusually large, I've allowed the server to chew on it overnight without success.
>
> It's set up as a 3-disk raidz configuration. Here's an abbreviated listing (the pool1/Staff@20100421.0800 snapshot is the one that is misbehaving):
>
Is dedup enabled on that pool?

--
Ian.
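Checking whether dedup is involved is quick; a sketch using the names from the original post (dedup is a per-dataset property, dedupratio a pool-wide one):

zfs get dedup pool1/Staff        # on or off for this filesystem, and whether the value is inherited
zpool get dedupratio pool1       # pool-wide ratio of referenced data to actually allocated blocks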
John Balestrini
2010-May-16 00:40 UTC
[zfs-discuss] Unable to Destroy One Particular Snapshot
Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was imagining that the large ratio was tied to that particular snapshot.

basie@/root# zpool list pool1
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -

So, is it possible to turn dedup off? More importantly, what happens when I try?

Thanks

-- John

On May 15, 2010, at 3:40 PM, Ian Collins wrote:

> On 05/16/10 06:52 AM, John Balestrini wrote:
>> Howdy All,
>>
>> I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh); existing shells will occasionally provide some interaction, but at an obscenely slow rate, if at all. Also, since this snapshot is unusually large, I've allowed the server to chew on it overnight without success.
>>
>> It's set up as a 3-disk raidz configuration. Here's an abbreviated listing (the pool1/Staff@20100421.0800 snapshot is the one that is misbehaving):
>>
> Is dedup enabled on that pool?
>
> --
> Ian.
On 05/16/10 12:40 PM, John Balestrini wrote:
> Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was imagining that the large ratio was tied to that particular snapshot.
>
> basie@/root# zpool list pool1
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -
>
> So, is it possible to turn dedup off? More importantly, what happens when I try?
>
How is your pool configured? If you don't have plenty of RAM or a cache device, it may take an awfully long time to delete a large snapshot.

Run "zpool iostat <your pool> 30" and see what you get. If you see a lot of read activity, expect a long wait!

--
Ian.
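The reason the read activity matters: freeing a deduped block means looking its checksum up in the dedup table (DDT), and if the DDT does not fit in the ARC those lookups become random reads against the pool. A rough sketch of what to watch, assuming an OpenSolaris-era system with mdb available:

zpool iostat pool1 30            # sustained heavy reads during the destroy suggest DDT lookups missing the cache
echo "::arc" | mdb -k            # current and target ARC size
echo "::memstat" | mdb -k        # overall split of kernel, ZFS cache, and free memory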
Roy Sigurd Karlsbakk
2010-May-16 09:44 UTC
[zfs-discuss] Unable to Destroy One Particular Snapshot
----- "John Balestrini" <john at balestrini.net> skrev:> Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was > imagining that the large ratio was tied to that particular snapshot. > > basie@/root# zpool list pool1 > NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > pool1 2.72T 1.55T 1.17T 57% 1.50x ONLINE - > > So, it is possible to turn dedup off? More importantly, what happens > when I try?# zfs set dedup=off pool1 This will not dedup your data, though, unless you do something like copying all the files again, since dedup is done on write. Some seems to be fixed in 135, and it was said here on the list that all known bugs should be fixed before the next release (see my thread ''dedup status'') roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er et element?rt imperativ for alle pedagoger ? unng? eksessiv anvendelse av idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og relevante synonymer p? norsk.
Roy Sigurd Karlsbakk
2010-May-16 09:46 UTC
[zfs-discuss] Unable to Destroy One Particular Snapshot
----- "Roy Sigurd Karlsbakk" <roy at karlsbakk.net> skrev:> ----- "John Balestrini" <john at balestrini.net> skrev: > > > Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was > > imagining that the large ratio was tied to that particular snapshot. > > > > basie@/root# zpool list pool1 > > NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > > pool1 2.72T 1.55T 1.17T 57% 1.50x ONLINE - > > > > So, it is possible to turn dedup off? More importantly, what happens > > when I try? > > # zfs set dedup=off pool1 > > This will not dedup your data, though, unless you do something like > copying all the files again, since dedup is done on write. Some seems > to be fixed in 135, and it was said here on the list that all known > bugs should be fixed before the next release (see my thread ''dedup > status'')I meant, this will not de-dedup the deduped data... roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er et element?rt imperativ for alle pedagoger ? unng? eksessiv anvendelse av idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og relevante synonymer p? norsk.
John J Balestrini
2010-May-16 14:37 UTC
[zfs-discuss] Unable to Destroy One Particular Snapshot
Finally, it's been destroyed! Last night I turned dedup off and sent the destroy command and simply let it run overnight. Also, I let "zpool iostat 30" run as well. It showed no activity for the first 6-1/2 hours and then a flurry of activity for 13 minutes. That snapshot is finally gone and the system seems to be behaving now.

Thanks for all your help!

John

On May 16, 2010, at 2:46 AM, Roy Sigurd Karlsbakk wrote:

> ----- "Roy Sigurd Karlsbakk" <roy at karlsbakk.net> wrote:
>
>> ----- "John Balestrini" <john at balestrini.net> wrote:
>>
>>> Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was
>>> imagining that the large ratio was tied to that particular snapshot.
>>>
>>> basie@/root# zpool list pool1
>>> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
>>> pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -
>>>
>>> So, is it possible to turn dedup off? More importantly, what happens
>>> when I try?
>>
>> # zfs set dedup=off pool1
>>
>> This will not dedup your data, though, unless you do something like
>> copying all the files again, since dedup is done on write. Some of this
>> seems to be fixed in build 135, and it was said here on the list that
>> all known bugs should be fixed before the next release (see my thread
>> 'dedup status').
>
> I meant, this will not de-dedup the deduped data...
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 97542685
> roy at karlsbakk.net
> http://blogg.karlsbakk.net/
> --
> In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
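For completeness, a quick way to confirm the cleanup took effect once a destroy like this finally finishes, using the names from this thread:

zfs list -t snapshot -r pool1/Staff    # the 20100421.0800 snapshot should no longer be listed
zpool list pool1                       # ALLOC and the DEDUP ratio drop as the freed blocks are released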