similar to: zfs destroy snapshot: dataset already exists?

Displaying 20 results from an estimated 3000 matches similar to: "zfs destroy snapshot: dataset already exists?"

2010 Jan 24
4
zfs streams
Can I send a zfs send stream (ZFS pool version 22; ZFS filesystem version 4) to a zfs receive stream on Solaris 10 (ZFS pool version 15; ZFS filesystem version 4)? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.03 b131 + All that's really worth doing is what we do for others (Lewis Carroll)
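In general the receiver needs to support the stream's ZFS filesystem version (4 on both sides here) rather than the sender's pool version, so this should work; a minimal way to test it, assuming hypothetical host and dataset names ('tank/data' on the sender, 'backup' on the Solaris 10 box 'sol10host'):

    # Snapshot the dataset on the sending side (pool v22).
    zfs snapshot tank/data@migrate
    # Dry-run receive (-n -v) on the Solaris 10 side (pool v15) to
    # verify the stream is accepted before committing to it.
    zfs send tank/data@migrate | ssh sol10host zfs receive -n -v backup/data
    # If the dry run succeeds, run again without -n to receive for real.
    zfs send tank/data@migrate | ssh sol10host zfs receive backup/data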
2009 Dec 27
7
How to destroy your system in funny way with ZFS
Hi all, I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of the VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I end up with something funny. I installed default snv_129, installed guest additions -> reboot, set
2003 Aug 05
0
Security-officer PGP Key?
On 2003.08.05 12:18:04 -0700, Dave Tweten wrote: > I just received a PGP signed message, supposedly from > security-officer@freebsd.org, for which I did not have the matching public > key. Reflexively, I fetched it, and then began looking into it with an > eye toward signing it so PGP would no longer call it "untrusted." > > To my shock, I found I had two public
2010 May 05
0
zfs destroy -f and dataset is busy?
We have a pair of OpenSolaris systems running snv_124. Our main zpool 'z' is running ZFS pool version 18. Problem: # zfs destroy -f z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00 cannot destroy 'z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00': dataset is busy I have tried: Unable to destroy numerous datasets even with the -f option.
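"dataset is busy" on a snapshot usually means a user hold or a dependent clone, neither of which -f overrides; user holds arrived with exactly pool version 18. A hedged first diagnostic, using the names from the post (the 'keep' tag below is hypothetical):

    # List user holds on the stubborn snapshot; any hold blocks destroy.
    zfs holds z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
    # Release a hold by the tag name that 'zfs holds' printed, e.g. 'keep'.
    zfs release keep z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
    # Also check for clones whose origin is a snapshot under z.
    zfs list -r -o name,origin z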
2010 Sep 30
3
Cannot destroy snapshots: dataset does not exist
Hello, I have a ZFS filesystem (zpool version 26 on Nexenta CP 3.01) which I'd like to roll back, but it's having an existential crisis. Here's what I see: root@bambi:/# zfs rollback bambi/faline/userdocs@AutoD-2010-09-28 cannot rollback to 'bambi/faline/userdocs@AutoD-2010-09-28': more recent snapshots exist use '-r' to
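The truncated error message names the way out: '-r' tells rollback to destroy every snapshot more recent than the target. Since that is destructive, a sketch that lists the casualties first:

    # See which snapshots would be destroyed by a -r rollback.
    zfs list -t snapshot -r bambi/faline/userdocs
    # Roll back, removing all snapshots newer than the target (-r).
    zfs rollback -r bambi/faline/userdocs@AutoD-2010-09-28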
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists- workaround found
Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely empty apart from the snapshot that still locked 2.8 TB on the pool. 'zfs destroy -r pool/dataset' hung the machine within seconds
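One mitigation suggested in threads of this era (an assumption here, not necessarily the exact workaround this poster found) is to avoid a single recursive destroy that has to free terabytes atomically, splitting it into steps instead; the snapshot name below is hypothetical:

    # Destroy the space-heavy snapshot on its own first; this is the
    # step that frees the 2.8 TB and is the most likely to stall.
    zfs destroy pool/dataset@monthly-2009-12
    # Once the snapshot is gone, the nearly empty dataset goes quickly.
    zfs destroy pool/dataset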
2008 May 20
7
[Bug 1986] New: 'zfs destroy' hangs on encrypted dataset
http://defect.opensolaris.org/bz/show_bug.cgi?id=1986 Summary: 'zfs destroy' hangs on encrypted dataset Classification: Development Product: zfs-crypto Version: unspecified Platform: Other OS/Version: Solaris Status: NEW Severity: major Priority: P2 Component: other
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, Created a zpool with 64k recordsize and enabled dedupe on it. zpool create -O recordsize=64k TestPool device1 zfs set dedup=on TestPool I copied files onto this pool over nfs from a windows client. Here is the output of zpool list Prompt:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT TestPool 696G 19.1G 677G 2% 1.13x ONLINE - When I ran a
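zpool list reports only the pool-wide dedup ratio, computed from the dedup table; the DDT statistics show where a figure like 1.13x comes from in more detail. A sketch against the pool from the post:

    # Print dedup table (DDT) statistics: entry counts, reference
    # counts, and a histogram of dedup'd block sizes.
    zdb -DD TestPool
    # Pool-wide size, allocation, and dedup ratio in one view.
    zpool list -o name,size,allocated,free,dedupratio TestPool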
2020 Jun 17
1
Deduplication and block size
Nothing too interesting here, I was just playing around with the idea of a deduplication allocator for nbdkit (“allocator=dedup”, see https://rwmj.wordpress.com/2020/06/15/compressed-ram-disks/). Before implementing such a thing I wanted to know if there's much duplicated structure in a disk image. It seems to depend very critically on the block size, but also there are no significant
2010 Jun 18
1
Question : Sun Storage 7000 dedup ratio per share
Dear All: On a Sun Storage 7000 system, can we see the per-share ratio after enabling the dedup function? We would like to see the dedup ratio for each share. The Web GUI only shows the dedup ratio for the entire storage pool. Thanks a lot, -- Rex
2011 Jun 29
0
SandForce SSD internal dedup
This article raises the concern that SSD controllers (in particular SandForce) do internal dedup, and in particular that this could defeat ditto-block style replication of critical metadata as done by filesystems including ZFS. http://storagemojo.com/2011/06/27/de-dup-too-much-of-good-thing/ Along with discussion of risk evaluation, it also suggests that filesystems could vary each copy in some
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings gentlemen, I'm currently testing a new setup for a ZFS-based storage system with dedup enabled. The system is set up on OI 148, which seems quite stable w/ dedup enabled (compared to the OpenSolaris snv_136 build I used before). One issue I ran into, however, is quite baffling: with iozone set to 32 threads, ZFS's ARC seems to consume all available memory, making
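The usual mitigation on Solaris-derived systems is to cap the ARC so the dedup table's cache appetite cannot claim the whole of RAM. A sketch, assuming OI honors the standard tunable and with an arbitrary 4 GB cap as an example:

    # /etc/system -- limit the ZFS ARC to 4 GB (value in bytes).
    # Requires a reboot to take effect.
    set zfs:zfs_arc_max = 0x100000000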
2009 Nov 16
2
ZFS Deduplication Replication
Hello; Dedup on ZFS is an absolutely wonderful feature! Is there a way to conduct dedup replication across boxes from one dedup ZFS data set to another? Warmest Regards Steven Sim
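The closest match in this era was the deduplicated send stream, 'zfs send -D', which transmits each duplicated block in the stream only once; on-disk dedup at the destination is still governed by the receiving dataset's own property. A hedged sketch with hypothetical host and dataset names:

    # Make sure the destination stores data deduplicated.
    ssh otherbox zfs set dedup=on backup
    # Send a dedup'd stream (-D): duplicate blocks cross the wire once.
    zfs send -D tank/data@snap | ssh otherbox zfs receive backup/data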
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All, I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
2011 Apr 28
4
Finding where dedup'd files are
Is there an easy way to find out which datasets have dedup'd data in them? Even better would be to discover which files in a particular dataset are dedup'd. I ran # zdb -DDDD which gave output like: index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1> [L0 deduplicated block] sha256 uncompressed LE contiguous unique unencrypted 1-copy size=20000L/20000P
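zdb tracks the DDT per pool, and mapping a DVA from that output back to a file is not directly supported; the coarser per-pool views below are what zdb does offer. A sketch, assuming a pool named 'tank' (hypothetical):

    # Histogram summary of the on-disk dedup table.
    zdb -DD tank
    # Simulate dedup (-S) to estimate how much data *would* dedup,
    # even on a pool without dedup enabled; useful for what-if checks.
    zdb -S tank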
2009 Aug 18
1
How to Dedup a Spatial Points Data Set
I'm new to spatial analysis and am exploring numerous packages, mostly enjoying sp, gstat, and spBayes. Is there a function that allows the user to dedup a data set with multiple values at the same coordinates and replace those duplicated values with the mean at those coordinates? I've written some cumbersome code that works, but would prefer an efficient R function if it exists.
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, trying to see what kind of benefits enabling dedup will give me. The standard practice for reprocessing data that's already stored, to add compression and now dedup, seems to be a send/receive pipe similar to: zfs send -R <old fs>@snap | zfs recv -d <new fs> However, according to the man page,
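The rewrite works because received blocks are written afresh, so they pick up whatever compression and dedup settings are in effect at receive time; note, though, that -R also replicates the source's properties, which is presumably the man-page caveat the truncated post was heading toward. A hedged sketch with hypothetical pool names:

    # Properties set on the destination are inherited by received
    # datasets -- unless -R carries conflicting ones from the source.
    zfs set compression=on newpool
    zfs set dedup=on newpool
    # Recursive snapshot, then replicate the tree under newpool (-d).
    zfs snapshot -r oldpool/fs@migrate
    zfs send -R oldpool/fs@migrate | zfs recv -d newpool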
2006 Aug 14
1
rake test:units -> table already exists
Hi! When I run test:units, Rails complains about the tables already being present. They had to be present so I could run individual unit tests. So why doesn't Rails just drop the tables if they exist? Any ideas? Jeroen debug output below: >rake test:units --trace (in ...) ** Invoke test:units (first_time) ** Invoke db:test:prepare (first_time) ** Invoke environment (first_time)
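In Rails of that vintage, db:test:prepare rebuilds the test schema but does not always drop leftovers first; purging the test database before preparing it is the usual fix. A hedged sketch:

    # Empty the test database entirely, then rebuild it from the schema.
    rake db:test:purge
    rake db:test:prepare
    # Unit tests now start from a freshly created schema.
    rake test:units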
2006 Jul 03
0
Check if row already exists?
I'm working on a system that has a series of "bids" associated with "posts". Currently, when a bid is placed I have it creating a new row each time in the "bids" table. The bids table has id, post_id, and user_id. The logic is to check whether a bid already exists with a given post_id and user_id. If it does, then just update the "amount"
2004 Oct 14
1
shorewall-2.1.11 / iptables -N net_frwd iptables: Chain already exists
[Message content of type multipart/mixed was skipped by the archive; a PGP signature attachment (application/pgp-signature, 189 bytes) was scrubbed: http://lists.shorewall.net/pipermail/shorewall-devel/attachments/20041014/45aef157/attachment-0001.bin]
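The "Chain already exists" error from 'iptables -N' is what you get when a setup script runs twice; a common guard creates the chain only when it is missing (chain name taken from the subject line):

    # 'iptables -L <chain>' exits non-zero if the chain is absent,
    # so create net_frwd only when it does not already exist.
    iptables -L net_frwd -n >/dev/null 2>&1 || iptables -N net_frwd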