similar to: what happens to the deduptable (DDT) when you set dedup=off ???

Displaying 20 results from an estimated 2000 matches similar to: "what happens to the deduptable (DDT) when you set dedup=off ???"

2010 Sep 25
4
dedup testing?
Hi all. Has anyone done any testing with dedup on OI? On OpenSolaris there is a nifty "feature" that allows the system to hang for hours or days if attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before
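A minimal sketch of that kind of destroy test, assuming a scratch disk c0t1d0 and hypothetical pool/dataset names:

    zpool create testpool c0t1d0
    zfs create -o dedup=on testpool/victim
    # fill the dataset with easily deduped data, then time the teardown
    dd if=/dev/urandom of=/testpool/victim/blob bs=1M count=1024
    cp /testpool/victim/blob /testpool/victim/blob2
    time zfs destroy testpool/victim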
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings:

    zpool create data c8t1d0
    zfs create data/shared
    zfs set dedup=on data/shared

The thing I was wondering about is that it seems like ZFS only dedups at the file level and not the block level. When I make multiple copies of a file on the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
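ZFS dedup actually works per block (sized by the dataset's recordsize), so similar-but-not-identical files only dedup where whole blocks match byte for byte. A quick way to see what the pool is matching, as a sketch against the pool above:

    # simulate the dedup table over existing data and print the projected ratio
    zdb -S data
    # the live, pool-wide ratio
    zpool get dedupratio data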
2011 Apr 28
4
Finding where dedup'd files are
Is there an easy way to find out which datasets have dedup'd data in them? Even better would be to discover which files in a particular dataset are dedup'd. I ran

    # zdb -DDDD

which gave output like:

    index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1> [L0 deduplicated block] sha256 uncompressed LE contiguous unique unencrypted 1-copy size=20000L/20000P
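The lower zdb -D levels give a more digestible view before dumping every entry; a sketch, assuming a pool named tank:

    zdb -D tank      # one-line statistics per DDT table
    zdb -DD tank     # adds a refcount histogram
    zdb -DDDD tank   # dumps individual entries, as quoted above

There is no direct per-file dedup report; mapping an entry's DVA back to a file means walking datasets and block pointers with zdb by hand.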
2012 Jul 30
10
encfs on top of zfs
Dear ZFS-Users, I want to switch to ZFS, but still want to encrypt my data. Native encryption for ZFS was added in ZFS pool version 30 (http://en.wikipedia.org/wiki/ZFS#Release_history), but I'm using ZFS on FreeBSD with version 28. My question is: how would encfs (FUSE encryption) affect ZFS-specific features like data integrity and deduplication? Regards
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, Created a zpool with a 64k recordsize and enabled dedup on it:

    zpool create -O recordsize=64k TestPool device1
    zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list:

    Prompt:~# zpool list
    NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    TestPool   696G  19.1G   677G   2%  1.13x  ONLINE  -

When I ran a
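One common source of confusion here: the DEDUP column in zpool list is a ratio over blocks actually written, tracked pool-wide rather than per dataset, so it will not match file-level savings estimates. To see how many blocks really carry multiple references, a sketch:

    zpool get dedupratio TestPool
    zdb -DD TestPool   # histogram of DDT entries by reference count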
2011 Feb 12
1
existing performance data for on-disk dedup?
Hello. I am looking to see if performance data exists for on-disk dedup. I am currently in the process of setting up some tests based on input from Roch, but before I get started, thought I'd ask here. Thanks for the help, Janice
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists- workaround found
Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool. 'zfs destroy -r pool/dataset' hung the machine within seconds
2010 Mar 02
2
dedup source code
Hello ZFS experts: I would like to study the ZFS de-duplication feature. Can someone please let me know which directory/files I should be looking at? Thanks in advance.
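For orientation, these are the usual starting points in the OpenSolaris/illumos source tree (worth verifying against the exact build being studied):

    usr/src/uts/common/fs/zfs/ddt.c        # core dedup-table (DDT) logic
    usr/src/uts/common/fs/zfs/ddt_zap.c    # ZAP backend that stores DDT entries
    usr/src/uts/common/fs/zfs/sys/ddt.h    # DDT structures
    usr/src/uts/common/fs/zfs/zio.c        # zio_ddt_* pipeline stages (read/write/free)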
2010 Jun 18
1
Question : Sun Storage 7000 dedup ratio per share
Dear All: Under the Sun Storage 7000 system, can we see the per-share ratio after enabling the dedup function? We would like to see each share's dedup ratio. The Web GUI only shows the dedup ratio for the entire storage pool. Thanks a lot, -- Rex
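As far as the underlying ZFS goes, the dedup ratio is only tracked pool-wide (the DDT is shared across all datasets), so a true per-share ratio is not exposed anywhere. What can be read per share from the CLI, as a sketch:

    zpool get dedupratio pool        # pool-wide, same number as the GUI
    zfs get used,referenced pool/share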
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all, I am not sure my original mail got through to the list (I haven't received it back), so I attach it below. Anyhow, now I have a saved kernel crash dump of the system panicking when it tries to - I believe - deferred-release the corrupted deduped blocks which are no longer referenced by the userdata/blockpointer tree. As I previously wrote in my thread on unfixable
2013 Jun 26
6
[PROGS PATCH] Import btrfs-extent-same
Originally from https://github.com/markfasheh/duperemove/blob/master/btrfs-extent-same.c

Signed-off-by: Gabriel de Perthuis <g2p.code+btrfs@gmail.com>
---
 .gitignore          |   1 +
 Makefile            |   2 +-
 btrfs-extent-same.c | 145 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 147 insertions(+), 1 deletion(-)
 create mode 100644 btrfs-extent-same.c
diff
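For anyone unfamiliar with the tool: it is a thin wrapper around the BTRFS_IOC_FILE_EXTENT_SAME ioctl, which dedupes two file ranges only after the kernel has verified they are byte-identical. A hypothetical invocation, assuming the argument order of the duperemove version (length, then file/offset pairs):

    # share the first 1 MiB of fileB with fileA if the bytes match
    ./btrfs-extent-same 1048576 fileA 0 fileB 0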
2010 Feb 01
0
quick overhead sizing for DDT and L2ARC
Two related questions:
- given an existing pool with dedup'd data, how can I find the current size of the DDT? I presume some zdb work to find and dump the relevant object, but what specifically?
- what's the expansion ratio for the memory overhead of L2ARC entries? If I know my DDT can fit on a ssd of size X, that's good - but how much RAM do I need
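For the first question, zdb reports the DDT object's size directly; a sketch, assuming a pool named tank (the per-entry in-core figure of roughly 320 bytes, and a couple hundred bytes of ARC header per L2ARC buffer, are numbers commonly quoted on this list, not guarantees):

    zdb -DD tank
    # look for lines like:
    #   DDT-sha256-zap-unique: ... entries, size ... on disk, ... in core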
2013 Apr 01
5
[RFC] Online dedup for Btrfs
Hello, I was bored this weekend so I hacked up online dedup for Btrfs. It's working quite well, so I think it can be more widely tested. There are two ways to use it: 1) Compatible mode - this is a bit slower but will handle being used by older kernels. We use the csum tree to find duplicate blocks. Since it is relatively easy to have crc32c collisions, this also involves reading the
2009 Jun 16
2
dedup in dovecot?
Hi all. Deduplicating data is not really a new thing, but it is quite efficient in mail systems, where an email with an n MB attachment may be sent to multiple recipients. This might call for deduplicating data. Is there a way to do this, or is it far off? If I understand the system correctly, usually an MTA is calling dovecot on every single message, meaning the message itself won't
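Dovecot later grew exactly this feature as single-instance attachment storage in the 2.x series; a minimal configuration sketch, with hypothetical paths and thresholds:

    # dovecot.conf (Dovecot 2.x) -- store large attachments once, keyed by content hash
    mail_attachment_dir = /var/vmail/attachments
    mail_attachment_min_size = 128k
    mail_attachment_fs = sis posix
    mail_attachment_hash = %{sha1}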
2009 Aug 18
1
How to Dedup a Spatial Points Data Set
I'm new to spatial analysis and am exploring numerous packages, mostly enjoying sp, gstat, and spBayes. Is there a function that allows the user to dedup a data set with multiple values at the same coordinates and replace those duplicated values with the mean at those coordinates? I've written some cumbersome code that works, but would prefer an efficient R function if it exists.
2010 Aug 18
10
Networker & Dedup @ ZFS
Hi, We are considering using ZFS-based storage as a staging disk for Networker. We're aiming at providing enough storage to keep 3 months' worth of backups on disk before they are moved to tape. To provide storage for 3 months of backups, we want to utilize the dedup functionality in ZFS. I've searched around for these topics and found no success stories,
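Before committing a backup workload to dedup, it may be worth measuring what it would actually save: zdb can simulate the dedup table over data already in a pool, without enabling the feature. A sketch against a hypothetical staging pool:

    # walks the pool, builds an in-memory DDT, prints the projected dedup ratio
    zdb -S stagingpool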
2010 Dec 08
5
very slow boot: stuck at mounting zfs filesystems
Hello list, I'm having trouble with a server holding a lot of data. After a few months of uptime, it is currently rebooting from a lockup (reason unknown so far), but it is taking hours to boot up again. The boot process is stuck at the stage where it says:

    mounting zfs filesystems (1/5)

The machine responds to pings and keystrokes. I can see disk activity; the disk leds blink one after
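One way to narrow down which filesystem is responsible, as a sketch (the -N flag, which imports without mounting, may not exist on older builds):

    zpool import -N tank     # import the pool but do not mount anything
    zfs mount tank/fs1       # then mount filesystems one at a time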
2011 Jan 18
4
Zpool Import Hanging
Hi All, I believe this has been asked before, but I wasn't able to find too much information about the subject. Long story short, I was moving data around on a storage zpool of mine and a zfs destroy <filesystem> hung (or so I thought). This pool had dedup turned on at times while imported as well; it's running on a Nexenta Core 3.0.1 box (snv_134f). The first time the machine was
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote:
> Brent,
>
> I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue.
>
> The other issue I noticed is that, as opposed to the
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release. Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134? These dedup bugs are my main frustration - if a staff member does a rm * in a directory with dedup, you can take down the whole storage server - all with