2011 Jun 26
2
recovering from "zfs destroy -r"
Hi,
Is there a simple way of rolling back to a specific TXG of a volume to recover from such a situation?
Many thanks,
Przem
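There is no supported per-dataset TXG rollback, but on builds that support read-only and rewind imports, a pool-level rewind can sometimes recover recently destroyed data if the freed blocks have not been overwritten. A rough sketch, assuming a placeholder pool name "tank"; the -X and -T options are undocumented and build-dependent:

```shell
command -v zpool >/dev/null 2>&1 || exit 0   # skip on systems without ZFS
# Stop all writes first: every new transaction group makes rewind less likely.
zpool export tank
# Rewind import: -F discards the last few TXGs if that yields a consistent
# pool; importing read-only avoids committing the rewind prematurely.
zpool import -o readonly=on -F tank
# Some builds also accept an explicit transaction group:
# zpool import -o readonly=on -FX -T <txg> tank   # <txg> found via zdb
```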
2010 May 05
0
zfs destroy -f and dataset is busy?
We have a pair of opensolaris systems running snv_124. Our main zpool
'z' is running ZFS pool version 18.
Problem:
#zfs destroy -f z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
cannot destroy 'z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00':
dataset is busy
I have tried the above on numerous datasets; none of them can be destroyed, even with the -f option.
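Pool version 18 is the release that introduced user holds, which are a common cause of "dataset is busy" on snapshots; a clone still referencing the snapshot is the other. A diagnostic sketch, with the snapshot name written with @ and <tag> standing for whatever hold name turns up:

```shell
command -v zfs >/dev/null 2>&1 || exit 0     # skip on systems without ZFS
SNAP='z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00'
zfs holds "$SNAP"                  # any user holds pinning the snapshot?
# zfs release <tag> "$SNAP"        # release each tag listed above
zfs list -H -t all -o name,origin | grep "$SNAP"  # clones using it as origin?
```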
2010 Mar 18
2
lazy zfs destroy
OK I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I don't care that this zfs destroy finishes quickly. I actually don't care, as long as it finishes before I run out of disk space.
So a
2008 Jul 17
4
RFE: -t flag for 'zfs destroy'
I would like to request an additional flag for the command line zfs
tools. Specifically, I'd like to have a -t flag for "zfs destroy", as
shown below. Suppose I have a pool "home" with child filesystem
"will", and a snapshot "home/will@yesterday". Then I run the
following commands:
# zfs destroy -t volume home/will@yesterday
zfs: not
2012 Sep 28
2
Failure to zfs destroy - after interrupting zfs receive
Formerly, if you interrupted a zfs receive, it would leave a clone with a % in its name, and you could find it via "zdb -d" and then you could destroy the clone, and then you could destroy the filesystem you had interrupted receiving.
That was considered a bug, and it was fixed, I think by Sun. If the lingering clone was discovered lying around, zfs would automatically destroy it.
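On older builds where the clone does linger, the manual cleanup described above looks roughly like this (pool and dataset names are placeholders; %-named clones are hidden from an ordinary "zfs list"):

```shell
command -v zdb >/dev/null 2>&1 || exit 0   # skip on systems without ZFS
zdb -d tank | grep '%'          # reveals hidden clones, e.g. tank/fs/%recv
zfs destroy 'tank/fs/%recv'     # remove the leftover receive clone first
zfs destroy tank/fs             # then the half-received filesystem itself
```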
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all,
I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
2006 Aug 22
1
Interesting zfs destroy failure
Saw this while writing a script today -- while debugging the script, I was ctrl-C-ing it a lot rather
than waiting for the zfs create / zfs set commands to complete. After doing so, my cleanup script
failed to zfs destroy the new filesystem:
root at kronos:/ # zfs destroy -f raid/www/user-testuser
cannot unshare 'raid/www/user-testuser': /raid/www/user-testuser: not shared
root
2009 Dec 27
7
How to destroy your system in funny way with ZFS
Hi all,
I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I end with something funny.
I installed default snv_129, installed guest additions -> reboot, set
2007 Apr 01
8
zfs destroy <snapshot> takes hours
Hello,
I am having a problem destroying zfs snapshots. The machine has been almost unresponsive for more than 4 hours since I started the command, and I can't run anything else during that time -
I get (bash): fork: Resource temporarily unavailable - errors.
The machine is still responding somewhat, but very, very slow.
It is: P4, 2.4 GHz with 512 MB RAM, 8 x 750 GB disks as raidZ,
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote:
> Brent,
>
> I had known about that bug for a couple of weeks, but it was filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue.
>
> The other issue I noticed is that, as opposed to the
2008 May 20
7
[Bug 1986] New: 'zfs destroy' hangs on encrypted dataset
http://defect.opensolaris.org/bz/show_bug.cgi?id=1986
Summary: 'zfs destroy' hangs on encrypted dataset
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
2010 Jan 12
0
ZFS auto-snapshot in zone
Hello,
I've got auto-snapshots enabled in the global zone for the home directories of all users. Users log in to their individual zones and their home directories are mounted from the global zone. All works fine, except that new auto-snapshots have no properties and therefore can't be accessed in the zones.
example from zone:
[~/.zfs/snapshot]:$ ls -Alh
ls: cannot access
2006 Jun 12
3
zfs destroy - destroying a snapshot
Hello zfs-discuss,
I'm writing a script to take snapshots automatically and destroy old
ones. I think it would be great to add another option to zfs destroy
so that only snapshots can be destroyed. Something like:
zfs destroy -s SNAPSHOT
so if something other than a snapshot is provided as an argument,
zfs destroy wouldn't actually destroy it.
That way it would
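Until such a flag exists, the same safety can be scripted: query the dataset's type first and refuse anything that is not a snapshot. A minimal sketch (the wrapper name is made up):

```shell
# destroy_snapshot_only: destroy the argument only if it is a snapshot.
# "zfs get -H -o value type" prints filesystem, volume, or snapshot.
destroy_snapshot_only() {
    target=$1
    ds_type=$(zfs get -H -o value type "$target") || return 1
    if [ "$ds_type" = "snapshot" ]; then
        zfs destroy "$target"
    else
        echo "refusing to destroy $target: type is $ds_type" >&2
        return 1
    fi
}
```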
2009 Nov 17
1
upgrading to the latest zfs version
Hi guys, after reading the mailings yesterday I noticed someone was after upgrading to zfs v21 (deduplication). I'm after the same: I installed osol-dev-127 earlier, which comes with v19, and then followed the instructions on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date. However, the system reports that no updates are available and stays at zfs v19. Any ideas?
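Deduplication (pool version 21) only arrived around build 128, so the pool can be upgraded only once the image is actually running a dev build that ships it. The usual sequence, sketched from memory (publisher URL as given in the post; upgrading a pool is irreversible):

```shell
command -v pkg >/dev/null 2>&1 || exit 0   # skip outside OpenSolaris
pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
pkg image-update              # builds a new boot environment; reboot into it
# After rebooting into the new build:
zpool upgrade -v              # versions this build supports; 21 = dedup
zpool upgrade -a              # upgrade all pools (cannot be undone)
```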
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists- workaround found
Hi,
I was suffering for weeks from the following problem:
a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool.
'zfs destroy -r pool/dataset'
hung the machine within seconds
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2005 Nov 27
1
how to get 'zfs mount -a' in some order?
I have several filesystems created from the pool; and some
mountpoints are inside the others, e.g.:
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
nfsv4pool               15.5M  33.5G  98.5K  /nfsv4pool
nfsv4pool/KRB5_FS       1.87M  33.5G  1.87M  /export/test/KRB5_FS
nfsv4pool/NOSPC_FS      3.01M      0  3.01M  /export/test/NoSPC_FS
nfsv4pool/NOTSHARE_FS    105K  33.5G   105K
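One workaround sketch for ordering problems like this: mount the filesystems yourself, sorted by mountpoint path, so a parent directory is always mounted before anything nested inside it (assumes mountpoints without embedded spaces):

```shell
command -v zfs >/dev/null 2>&1 || exit 0   # skip on systems without ZFS
# Lexically sorting mountpoints puts /export/test before /export/test/KRB5_FS.
zfs list -H -t filesystem -o mountpoint,name | sort |
while read -r mp name; do
    case "$mp" in
        legacy|none|-) ;;                    # nothing for zfs mount to do
        *) zfs mount "$name" 2>/dev/null ;;  # ignore "already mounted" errors
    esac
done
```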
2009 Jun 05
4
Recover ZFS destroyed dataset?
I was asked by a coworker about recovering destroyed datasets on ZFS - and
whether it is possible at all? As a related question, if a filesystem dataset was
recursively destroyed along with all its snapshots, is there some means to at
least find some pointers whether it existed at all?
I remember "zpool import -D" can be used to import whole destroyed pools.
But crawling around the
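"zpool import -D" is indeed pool-level only; for a single recursively destroyed dataset there is no supported undo. A sketch of what can still be checked, with "tank" as a placeholder pool name (zdb may show traces of a destroyed dataset until its blocks are reused, but this is best-effort forensics, not recovery):

```shell
command -v zpool >/dev/null 2>&1 || exit 0   # skip on systems without ZFS
zpool import -D            # lists destroyed pools that are still importable
zpool import -D -f tank    # re-import one of them
zdb -e -d tank             # -e inspects an exported/destroyed pool's datasets
```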
2010 Jul 23
2
ZFS volume turned into a socket - any way to restore data?
I have recently upgraded from NexentaStor 2 to NexentaStor 3 and somehow one of my volumes got corrupted. It's showing up as a socket. Has anyone seen this before? Is there a way to get my data back? It seems like it's still there, but not recognized as a folder. I ran zpool scrub, but it came back clean.
Attached is the output of #zdb data/rt
2.0K sr-xr-xr-x 17 root root 17 Jul
2010 Apr 27
2
ZFS version information changes (heads up)
Hi everyone,
Please review the information below regarding access to ZFS version
information.
Let me know if you have questions.
Thanks,
Cindy
CR 6898657:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6898657
ZFS commands zpool upgrade -v and zfs upgrade -v refer to URLs that
are no longer redirected to the correct location after April 30, 2010.
Description
The