Displaying 20 results from an estimated 5000 matches similar to: "Deduplication - deleting the original"
2009 Nov 02
24
dedupe is in
Deduplication was committed last night by Mr. Bonwick:
> Log message:
> PSARC 2009/571 ZFS Deduplication Properties
> 6677093 zfs should have dedup capability
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html
Via c0t0d0s0.org.
2009 Nov 03
2
SunOS neptune 5.11 snv_127 sun4u sparc SUNW,Sun-Fire-880
I just went through a BFU update to snv_127 on a V880:
neptune console login: root
Password:
Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console
Last login: Mon Nov 2 16:40:36 on console
Sun Microsystems Inc. SunOS 5.11 snv_127 Nov. 02, 2009
SunOS Internal Development: root 2009-Nov-02 [onnv_127-tonic]
bfu'ed from /build/archives-nightly-osol/sparc on 2009-11-03
I have [
2013 Aug 22
3
Deduplication
Hello,
some questions regarding btrfs deduplication.
- What is the state of it? Is it "safe" to use?
https://btrfs.wiki.kernel.org/index.php/Deduplication does not yield
much information.
- https://pypi.python.org/pypi/bedup says: "bedup looks for new and
changed files, making sure that multiple copies of identical files
share space on disk. It integrates deeply with btrfs so
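For illustration only, here is a rough Python sketch of the file-level scan a tool like bedup performs: group candidate files by size, then by content hash, so identical copies can later be made to share space. The grouping strategy, helper names, and path are assumptions, and the actual space sharing (e.g. via the btrfs clone/extent-same ioctl) is omitted.

# Hypothetical sketch, not bedup's actual code: find identical files
# by grouping on size and then on a SHA-256 content hash.
import hashlib
import os
from collections import defaultdict

def sha256_file(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

def find_duplicate_files(root):
    by_size = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and not os.path.islink(path):
                by_size[os.path.getsize(path)].append(path)

    duplicates = []
    for size, paths in by_size.items():
        if size == 0 or len(paths) < 2:
            continue
        by_hash = defaultdict(list)
        for path in paths:
            by_hash[sha256_file(path)].append(path)
        duplicates.extend(group for group in by_hash.values() if len(group) > 1)
    return duplicates

if __name__ == "__main__":
    for group in find_duplicate_files("/srv/data"):   # hypothetical path
        print("identical copies:", group)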
2009 Jul 31
4
Zfs deduplication
Will the material ever be posted? It looks like there are some huge bugs in ZFS
deduplication that the organizers do not want to post, and there is no
indication on the Sun website whether there will be a deduplication feature. I think
it's best they concentrate on improving ZFS performance and speed with
compression enabled.
2013 Apr 30
5
Mail deduplication
Hi Guys,
I am wondering about mail deduplication. I am looking into the possibility
of separating out all of the message bodies, with their multiple parts, in mail
that is received by `dovecot`, and hashing them all.
The idea is that by hashing all of the parts inside the email, I will be
able to ensure that each part of the email will only be saved once.
This means that attachments & common
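A minimal sketch of the hashing idea described in that message, using Python's standard email parser; the content-addressed store and the way references would be kept are illustrative assumptions, not Dovecot's implementation.

# Hypothetical sketch: hash every leaf MIME part of an incoming message so
# identical parts (e.g. a large attachment sent to many people) are stored once.
import email
import hashlib
from email import policy

def part_digests(raw_message: bytes):
    """Yield (sha256-hex, payload-bytes) for each leaf MIME part."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        if part.is_multipart():
            continue
        payload = part.get_payload(decode=True) or b""
        yield hashlib.sha256(payload).hexdigest(), payload

# Usage: a trivial content-addressed store keyed by part digest.
store = {}

def ingest(raw_message: bytes):
    refs = []
    for digest, payload in part_digests(raw_message):
        store.setdefault(digest, payload)   # each unique part saved only once
        refs.append(digest)
    return refs                             # what a mailbox index would keep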
2009 Oct 23
5
PSARC 2009/571: ZFS deduplication properties
I haven't seen any mention of it in this forum yet, so FWIW you might be interested in the details of ZFS deduplication mentioned in this recently-filed case.
Case log: http://arc.opensolaris.org/caselog/PSARC/2009/571/
Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115507
Very nice -- I like the interaction with "copies", and (like a few others) I think
2017 Dec 16
3
[RFC] - Deduplication of debug information in linkers (LLD).
But could we not, for example, do split DWARF yet still do dedup of types?
I do not mean right now, but in theory?
Best regards,
George | Developer | Access Softek, Inc
________________________________
From: David Blaikie <dblaikie at gmail.com>
Sent: December 16, 2017, 22:25
To: George Rimar
Cc: Sean Silva; llvm-dev at lists.llvm.org; Rui Ueyama; Rafael Espindola
Subject:
2009 Nov 16
2
ZFS Deduplication Replication
Hello;
Dedup on ZFS is an absolutely wonderful feature!
Is there a way to conduct dedup replication across boxes from one dedup
ZFS data set to another?
Warmest Regards
Steven Sim
2012 Aug 27
7
Deduplication data for CentOS?
Hi list,
is there any working solution for deduplication of data for centos?
We are trying to find a solution for our backup server which runs a bash
script invoking xdelta(3). But having this functionality in fs is much
more friendly...
We have looked into lessfs, sdfs and ddar.
Are these filesystems ready to use (on centos)?
ddar is something different, I know.
Thx
Rainer
2011 Jan 05
52
Offline Deduplication for Btrfs
Here are patches to do offline deduplication for Btrfs. It works well for the
cases it's expected to; I'm looking for feedback on the ioctl interface and
such. I'm well aware there are missing features in the userspace app (like
being able to set a different blocksize). If this interface is acceptable I
will flesh out the userspace app a little more, but I believe the
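For illustration, a minimal sketch of the scanning side of offline deduplication as described: hash fixed-size blocks across a set of files to find identical candidates. The 4096-byte blocksize and helper names are assumptions, not the patch's code, and the ioctl call that would actually make the blocks share space is omitted.

# Hypothetical sketch of an offline dedup scan: map block digest to the
# (path, offset) locations where that block's contents occur.
import hashlib
from collections import defaultdict

BLOCK_SIZE = 4096  # assumed blocksize; a real tool would make this settable

def duplicate_blocks(paths, block_size=BLOCK_SIZE):
    seen = defaultdict(list)
    for path in paths:
        with open(path, "rb") as f:
            offset = 0
            while True:
                block = f.read(block_size)
                if not block:
                    break
                if len(block) == block_size:        # only full blocks dedup cleanly
                    digest = hashlib.sha256(block).hexdigest()
                    seen[digest].append((path, offset))
                offset += len(block)
    return {d: locs for d, locs in seen.items() if len(locs) > 1}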
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about is that it seems like ZFS only dedups at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
2017 Dec 16
2
[RFC] - Deduplication of debug information in linkers (LLD).
>Wasn't our (lld/ELF's) position on debug info size that we should focus on providing a great split-dwarf workflow and not try to go too far out of our way to deduplicate or otherwise reduce debug info size inside LLD? I recall there being some patches that made linking of large debug binaries like 1.5GB+ clang faster, but we decided to reject those changes because split-dwarf was
2011 Jun 08
4
On-delivery deduplication?
Hi,
A feature of Cyrus-IMAPd I really missed after migrating to Dovecot is
their optional "duplicate suppression", which eliminates duplicate
messages at delivery time if their envelope sender, recipient and
Message-ID match. For example, if one subscribes to a mailing list,
and someone hits "Reply All" to reply to him, there
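A minimal sketch of that kind of delivery-time duplicate suppression, keyed on the (envelope sender, recipient, Message-ID) triple; the time window and in-memory store are illustrative assumptions, not Cyrus's or Dovecot's implementation.

# Hypothetical sketch: skip delivery when the same triple was already
# delivered within a suppression window.
import time

SUPPRESSION_WINDOW = 3600  # seconds; an assumed value
_seen = {}                 # (sender, recipient, message_id) -> last delivery time

def should_deliver(sender: str, recipient: str, message_id: str) -> bool:
    now = time.time()
    key = (sender.lower(), recipient.lower(), message_id)
    last = _seen.get(key)
    _seen[key] = now
    # Deliver unless an identical triple was delivered recently.
    return last is None or (now - last) > SUPPRESSION_WINDOW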
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote:
> Brent,
>
> I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue.
>
> The other issue I noticed is that, as opposed to the
2009 Oct 30
30
Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto
For the encryption functionality in the ZFS filesystem we use AES in CCM
or GCM mode at the block level to provide confidentiality and
authentication. There is also a SHA256 checksum per block (of the
ciphertext) that forms a Merkle tree of all the blocks in the pool.
Note that I have to store the full IV in the block. A block here is a
ZFS block which is any power of two from 512 bytes to
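As a simplified illustration of a per-block checksum feeding a Merkle tree (not ZFS's actual on-disk layout), assuming SHA-256 over each block and parents hashing the concatenation of their children's digests:

# Illustrative sketch: per-block SHA-256 checksums rolled up into one root hash.
import hashlib

def block_checksums(blocks):
    return [hashlib.sha256(b).digest() for b in blocks]

def merkle_root(checksums):
    level = list(checksums)
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0" * 64, b"block-1" * 64, b"block-2" * 64]
print(merkle_root(block_checksums(blocks)).hex())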
2017 Jun 10
3
Non-destructive deduplication
Greetings.
I use Dovecot 2.2.29.1 as my IMAP server. Owing to a bug in my mail
client [1], several unique messages (mostly in my Sent folder) have
duplicate Message-ID headers. Dovecot itself doesn't seem to be
bothered by this, though my mail client is confused by the false
duplicates. (It screws up the threading display, and results in data
loss when I run the client's deduplication
2013 Jun 26
6
[PROGS PATCH] Import btrfs-extent-same
Originally from
https://github.com/markfasheh/duperemove/blob/master/btrfs-extent-same.c
Signed-off-by: Gabriel de Perthuis <g2p.code+btrfs@gmail.com>
---
.gitignore | 1 +
Makefile | 2 +-
btrfs-extent-same.c | 145 ++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 147 insertions(+), 1 deletion(-)
create mode 100644 btrfs-extent-same.c
diff
2010 Jan 22
3
mailbox format w/ separate headers/data
In the future, it would be cool if there were a mailbox format (dbox2?)
where mail headers and each mime part were stored in separate files.
This would enable the zfs dedup feature to be used to maximum benefit.
In the zfs filesystem, there is a dedup feature which stores only 1 copy
of duplicate blocks. In a normal mail file, the headers will be
different for each recipient and the chances of
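A small worked example of that reasoning, with an assumed block size and synthetic data: when a per-recipient header is inlined before an identical body, the body's blocks no longer fall on the same boundaries, so block-level dedup finds nothing; with the body stored as its own file, every block matches.

# Illustrative sketch: compare block hashes with headers inlined vs. separated.
import hashlib
import os

BLOCK = 4096
body = os.urandom(32 * BLOCK)    # identical body/attachment for both recipients

def block_hashes(data):
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

# Headers differ per recipient (and in length), shifting the body's alignment.
msg_a = b"Received: for alice@example.com\r\n\r\n" + body
msg_b = b"Received: for bob@example.org\r\n\r\n" + body

print("shared blocks, headers inlined:   ",
      len(block_hashes(msg_a) & block_hashes(msg_b)))
print("shared blocks, body stored alone:  ",
      len(block_hashes(body) & block_hashes(body)))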
2013 May 05
10
Possible to deduplicate read-only snapshots for space-efficient backups
Hey list,
I wonder if it is possible to deduplicate read-only snapshots.
Background:
I'm using a bash/rsync script[1] to back up my whole system on a nightly
basis to an attached USB3 drive into a scratch area, then take a snapshot of
this area. I'd like to have these snapshots immutable, so they should be
read-only.
Since rsync won't discover moved files but
2017 Dec 15
3
[RFC] - Deduplication of debug information in linkers (LLD).
>Not quite sure what you mean by "on linker side" - but I guess you mean using linker features like comdats etc, rather than DWARF parsing/reassembly/etc.
I mean that it is probably not a good idea for an external library. I feel it is much more convenient to do such processing in a linker.
The linker does and knows much more about things like sections that are ICF'ed or eliminated, and about COMDATs