Displaying 20 results from an estimated 2000 matches similar to: "SandForce SSD internal dedup"
2011 Jul 09
3
btrfs vs data deduplication
Hello,
I've stumbled upon this article:
http://storagemojo.com/2011/06/27/de-dup-too-much-of-good-thing/
Reportedly the SandForce SF1200 SSD controller performs block-level
data de-duplication internally. This effectively removes the additional
protection given by writing multiple metadata copies. This technique
may already be used, or could be used in the future, by manufacturers
of other drives too.
I
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the pro version) as an L2ARC to the single mirrored pair. I'm running B134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
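For reference, an L2ARC device is normally attached with zpool add; a minimal sketch, where the pool name "tank" and the device name are placeholders:
# zpool add tank cache c3t0d0
# zpool status tank
The SSD then appears under a separate "cache" section in the status output.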
2010 Jun 18
1
Question : Sun Storage 7000 dedup ratio per share
Dear All :
On a Sun Storage 7000 system, can we see the per-share ratio after enabling the dedup function? We would like to look deeper and see each share's dedup ratio.
The Web GUI only shows the dedup ratio for the entire storage pool.
Thanks a lot,
-- Rex
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS-based storage system with
dedup enabled. The system is setup on OI 148, which seems quite stable
w/ dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
One issue I ran into, however, is quite baffling:
With iozone set to 32 threads, ZFS's ARC seems to consume all available
memory, making
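A common mitigation at the time was to cap the ARC so that enough memory remains for the DDT and applications; a minimal sketch, assuming an illustrative 4 GB limit set in /etc/system followed by a reboot:
set zfs:zfs_arc_max = 4294967296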
2011 Apr 28
4
Finding where dedup'd files are
Is there an easy way to find out which datasets have dedup'd data in
them? Even better would be to discover which files in a particular
dataset are dedup'd.
I ran
# zdb -DDDD
which gave output like:
index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1>
[L0 deduplicated block] sha256 uncompressed LE contiguous unique
unencrypted 1-copy size=20000L/20000P
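Note that the DDT is kept per pool rather than per dataset, so zdb can only summarize it pool-wide; a minimal sketch, assuming a pool named "tank":
# zdb -DD tank
This prints DDT statistics and a refcount histogram, which is usually enough to tell whether dedup is paying off, without dumping every entry the way -DDDD does.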
2009 Aug 18
1
How to Dedup a Spatial Points Data Set
I'm new to spatial analysis and am exploring numerous packages, mostly
enjoying sp, gstat, and spBayes.
Is there a function that allows the user to dedup a data set with multiple
values at the same coordinates and replace those duplicated values with the
mean at those coordinates? I've written some cumbersome code that works,
but would prefer an efficient R function if it exists.
2009 Dec 30
3
What happens to the dedup table (DDT) when you set dedup=off?
I tried the deduplication feature, but the write performance of my fileserver
dropped from 50 MB/s via CIFS to 4 MB/s.
What happens to the deduped blocks when you set dedup=off?
Are they written back to disk?
Is the dedup table deleted, or is it still there?
thanks
2009 Jun 16
2
dedup in dovecot?
Hi all
Deduplicating data is not really a new thing, but it is quite efficient in
mail systems, where an email with an n-MB attachment may be sent to
multiple recipients. This seems to call for deduplicating data. Is there
a way to do this, or is it far off? If I understand the system
correctly, an MTA usually calls dovecot for every single message,
meaning the message itself won't
2010 Sep 25
4
dedup testing?
Hi all
Has anyone done any testing of dedup with OI? On OpenSolaris there is a nifty "feature" that allows the system to hang for hours or days when attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering...
I'll get a 10TB test box released for testing OI in a few weeks, but before
2011 Jan 28
8
ZFS Dedup question
I created a ZFS pool with dedup using the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about is that ZFS seems to dedup only at the file level and not at the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
--
This
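For what it's worth, ZFS dedup operates on blocks (records), not whole files, so two similar files only dedup where they contain identical, identically aligned records; a minimal sketch of checking the pool-wide ratio, using the pool name from the commands above:
# zpool get dedupratio data
# zpool list data
The DEDUP column in zpool list reports the same ratio as the dedupratio property.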
2011 Feb 12
1
existing performance data for on-disk dedup?
Hello. I am looking to see if performance data exists for on-disk
dedup. I am currently in the process of setting up some tests based on
input from Roch, but before I get started, thought I'd ask here.
Thanks for the help,
Janice
2013 Nov 26
0
Dedup on read-only snapshots
According to https://github.com/g2p/bedup/tree/wip/dedup-syscall
"The clone call is considered a write operation and won''t work on
read-only snapshots."
Is this fixed on newer kernels?
2012 May 25
1
Dedup FS on 5.8
Hey folks,
I have a 14TB disk array that I want to use for rsnapshot backups, and
am considering putting a dedup FS on it. I know I've got at least a TB
of duplication, and it is not easy to remove manually.
Google turns up LessFS and SDFS as the prime candidates.
thanks,
-Alan
--
"Don't eat anything you've ever seen advertised on TV"
        - Michael Pollan, author
2013 Apr 01
5
[RFC] Online dedup for Btrfs
Hello,
I was bored this weekend so I hacked up online dedup for Btrfs. It's working
quite well, so I think it can be more widely tested. There are two ways to use
it:
1) Compatible mode - this is a bit slower but will handle being used by older
kernels. We use the csum tree to find duplicate blocks. Since it is relatively
easy to have crc32c collisions, this also involves reading the
2012 May 17
6
SSD format/mount parameters questions
For using SSDs:
Are there any format/mount parameters that should be set for using btrfs
on SSDs (other than the "ssd" mount option)?
General questions:
How long is the 'delay' for the delayed alloc?
Are file allocations aligned to 4kiB boundaries, or larger?
What byte value is used to pad unused space?
(Aside: For some, the erased state reads all 0x00, and for
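For comparison, a typical fstab entry combining the ssd option with a couple of other commonly suggested mount options (the device and mount point are placeholders, and whether these options help depends on the drive):
/dev/sdb1   /data   btrfs   ssd,noatime,compress=lzo   0 0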
2011 Jun 23
0
Using compression on SSD
Hi!
I scanned for relevant topics from the last two years, but except for putting
a swap file on compress=lzo this March, I didn't find anything.
Does compression make sense on SSD? Or more specifically:
1) In what chunk sizes does BTRFS compress? How much data is affected when
a byte is changed in a 2 GB file or so? Can compression cause more writes
to the SSD in extreme circumstances?
2) It
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
TestPool 696G 19.1G 677G 2% 1.13x ONLINE -
When I ran a
2006 Dec 18
3
ZFS on Mac - new sighting
There's been another sighting of ZFS on Mac. The latest developer
release of Leopard (Mac OS 10.5) has a dialogue box calling out the
"Zettabyte File System (ZFS)" as an option. The first place I saw this
published is a French website called Mac4Ever -
http://mac4ever.com/news/27485/zettabyte_sur_leopard/
I put up a Babelfish translation at my site, http://storagemojo.com/?
2006 Dec 19
2
Dedupping Has_many through, :unique=>true
Hi, the Agile book says that putting :unique => true will
dedup the rows with ActiveRecord.
But it's not working out for me. Do I need edge rails for this?
I simply want to dedup any join model associations, for instance:
category_id | inventory_id
384 1 first entry
384 2 this would be ok.
384 1 this would
2010 Aug 18
10
Networker & Dedup @ ZFS
Hi,
We are considering using ZFS-based storage as a staging disk for Networker. We're aiming at
providing enough storage to be able to keep 3 months' worth of backups on disk before they are moved
to tape.
To provide storage for 3 months of backups, we want to utilize the dedup functionality in ZFS.
I've searched around for these topics and found no success stories,
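One way to estimate in advance whether backup data will dedup well is zdb's dedup simulation; a minimal sketch, assuming the staging pool is called "backup":
# zdb -S backup
This walks the pool, builds an in-memory DDT, and prints a histogram together with an estimated dedup ratio, without changing anything on disk.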