similar to: Many questions from a potential btrfs user

Displaying 20 results from an estimated 10000 matches similar to: "Many questions from a potential btrfs user"

2013 May 05
10
Possible to deduplicate read-only snapshots for space-efficient backups
Hey list, I wonder if it is possible to deduplicate read-only snapshots. Background: I'm using a bash/rsync script[1] to back up my whole system on a nightly basis to an attached USB3 drive into a scratch area, then take a snapshot of this area. I'd like to have these snapshots immutable, so they should be read-only. Since rsync won't discover moved files but
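A minimal sketch of the kind of script described above, assuming the USB drive carries a btrfs filesystem mounted at /mnt/backup with a scratch subvolume (paths and rsync options are illustrative, not the poster's actual script):

    # Sync the live system into the scratch subvolume, then freeze it
    # as a read-only snapshot named after the date.
    rsync -aHAX --delete --numeric-ids / /mnt/backup/scratch/
    btrfs subvolume snapshot -r /mnt/backup/scratch /mnt/backup/snap-$(date +%Y-%m-%d)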
2013 Aug 22
3
Deduplication
Hello, some questions regarding btrfs deduplication. - What is the state of it? Is it "safe" to use? https://btrfs.wiki.kernel.org/index.php/Deduplication does not yield much information. - https://pypi.python.org/pypi/bedup says: "bedup looks for new and changed files, making sure that multiple copies of identical files share space on disk. It integrates deeply with btrfs so
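For context, a later entry in this listing shows how bedup is invoked; a typical run (the mountpoint here is hypothetical) looks like:

    # Scan the volume and deduplicate identical files in place.
    bedup dedup /mnt/btrfs-volume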
2013 Aug 06
6
[PATCH 0/4] btrfs: out-of-band (aka offline) dedupe v4
Hi, The following series of patches implements in btrfs an ioctl to do out-of-band deduplication of file extents. To be clear, this means that the file system is mounted and running, but the dedupe is not done during file writes, but after the fact when some userspace software initiates a dedupe. The primary patch is loosely based on one sent by Josef Bacik back in January 2011.
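duperemove (linked from the next entry in this listing) is one userspace tool built on this out-of-band approach; a hedged usage sketch, with a hypothetical path:

    # -d actually submits the dedupe requests, -r recurses into directories.
    duperemove -dr /mnt/data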
2013 Jun 26
6
[PROGS PATCH] Import btrfs-extent-same
Originally from https://github.com/markfasheh/duperemove/blob/master/btrfs-extent-same.c Signed-off-by: Gabriel de Perthuis <g2p.code+btrfs@gmail.com>
---
 .gitignore          |   1 +
 Makefile            |   2 +-
 btrfs-extent-same.c | 145 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 147 insertions(+), 1 deletion(-)
 create mode 100644 btrfs-extent-same.c
diff
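Going by the tool's usage string, btrfs-extent-same takes a length followed by file/offset pairs; a hypothetical invocation:

    # Ask the kernel to share one 1 MiB extent between two files,
    # both starting at byte offset 0.
    btrfs-extent-same 1048576 /mnt/data/a.img 0 /mnt/data/b.img 0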
2013 Jan 30
9
Poor performance of btrfs. Suspected unidentified btrfs housekeeping process which writes a lot
Welcome, I've been using btrfs for over 3 months to store my personal data on my NAS server. Almost all interactions with files on the server are done using the unison synchronizer. After another run of bedup (https://github.com/g2p/bedup) on my btrfs volume I experienced a huge performance loss with synchronization. It now takes over 3 hours for what used to take only 15 minutes! File
2011 Jan 05
52
Offline Deduplication for Btrfs
Here are patches to do offline deduplication for Btrfs. It works well for the cases it's expected to, I'm looking for feedback on the ioctl interface and such, I'm well aware there are missing features for the userspace app (like being able to set a different blocksize). If this interface is acceptable I will flesh out the userspace app a little more, but I believe the
2020 May 03
9
Understanding VDO vs ZFS
Folks, I'm looking for a solution for backups because ZFS has failed on me too many times. In my environment, I have a large amount of data (around 2 TB) that I periodically back up. I keep the last 5 "snapshots". I use rsync so that when I overwrite the oldest backup, most of the data is already there and the backup completes quickly, because only a small number of files have
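A common way to get this rotation effect with plain rsync is --link-dest, which hard-links unchanged files against the previous run; a sketch with hypothetical paths:

    # day.1 is yesterday's backup; unchanged files are hard-linked,
    # so day.0 only consumes space for data that actually changed.
    rsync -a --delete --link-dest=/backups/day.1 /data/ /backups/day.0/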
2014 Apr 24
0
bedup - De-duplication and snapshots
Dear All, I have a very slow deduplication running on an external USB disk. I'm using it for backups - I rsync the relevant files to the disk and then take a snapshot. I then deduplicate with bedup dedup <disk-mount-point>. What I am finding is that it is reporting a deduplication between the data on the disk and its snapshot, e.g.: Deduplicated: -
2017 Dec 16
3
[RFC] - Deduplication of debug information in linkers (LLD).
But couldn't we, for example, do split DWARF but dedup the types? I don't mean right now, but in theory? Best regards, George | Developer | Access Softek, Inc ________________________________ From: David Blaikie <dblaikie at gmail.com> Sent: 16 December 2017 22:25 To: George Rimar Cc: Sean Silva; llvm-dev at lists.llvm.org; Rui Ueyama; Rafael Espindola Subject:
2009 Nov 02
24
dedupe is in
Deduplication was committed last night by Mr. Bonwick: > Log message: > PSARC 2009/571 ZFS Deduplication Properties > 6677093 zfs should have dedup capability http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html Via c0t0d0s0.org.
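The feature referenced here is enabled per dataset; a quick sketch, with a hypothetical pool name:

    # Enable dedup for new writes to this dataset.
    zfs set dedup=on tank/data
    # The pool-wide dedup ratio shows up in zpool list's DEDUP column.
    zpool list tank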
2013 Apr 30
5
Mail deduplication
Hi Guys, I am wondering about mail deduplication. I am looking into the possibility of separating out all of the message bodies with multiple parts inside mail that is received from `dovecot` and hashing them all. The idea is that by hashing all of the parts inside the email, I will be able to ensure that each part of the email will only be saved once. This means that attachments & common
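A rough sketch of the hashing idea (ripmime and the paths are illustrative choices, not Dovecot's actual mechanism):

    # Explode a message into its MIME parts, then hash each part;
    # identical attachments across messages produce identical hashes.
    ripmime -i message.eml -d parts/
    sha256sum parts/*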
2009 Dec 30
3
what happens to the deduptable (DDT) when you set dedup=off ???
I tried the deduplication feature, but the performance of my fileserver dived from writing 50 MB/s via CIFS to 4 MB/s. What happens to the deduped blocks when you set dedup=off? Are they written back to disk? Is the dedup table deleted, or is it still there? Thanks
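For what it's worth, setting dedup=off only affects new writes; blocks deduplicated earlier stay shared, and DDT entries persist until those blocks are freed. A sketch with a hypothetical pool name:

    # New writes are no longer deduped; existing shared blocks remain.
    zfs set dedup=off tank/data
    # Inspect dedup table statistics for the pool.
    zdb -DD tank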
2010 Dec 08
5
very slow boot: stuck at mounting zfs filesystems
Hello list, I'm having trouble with a server holding a lot of data. After a few months of uptime, it is currently rebooting from a lockup (reason unknown so far) but it is taking hours to boot up again. The boot process is stuck at the stage where it says: mounting zfs filesystems (1/5). The machine responds to pings and keystrokes. I can see disk activity; the disk LEDs blink one after
2011 Jan 06
3
Offline Deduplication for Btrfs V2
Just a quick update, I've dropped the hashing stuff in favor of doing a memcmp in the kernel to make sure the data is still the same. The thing that takes a while is reading the data up from disk, so doing a memcmp of the entire buffer isn't that big of a deal, not to mention there's a possibility for malicious users if there is a problem with the hashing algorithms we
2010 Jul 21
3
File cloning
Hello, I've recently joined this list, primarily because of a thread I found from late April ("Is file cloning anywhere on ZFS roadmap") asking about file-level cloning in ZFS. Based on that thread I understand that it's not currently possible to 'clone' files instead of 'copying' them, but the thread didn't answer the
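For contrast, btrfs (not ZFS) does expose exactly this primitive via reflink copies; a minimal example:

    # The clone shares the original's extents until either file is modified.
    cp --reflink=always original.img clone.img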
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote: > Brent, > > I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue. > > The other issue I noticed is that, as opposed to the
2011 Oct 23
2
GlusterFS over lessfs/opendedupe
Hi, I'm currently running GlusterFS over XFS, and it works quite well. I'm wondering if it's possible to add data deduplication into the mix by: glusterfs --> lessfs --> xfs or glusterfs --> opendedupe --> xfs Has anybody tried doing this? We're running VM images on gluster, and I figure we could get a bit of space saving by deduplicating the data. Gerald
2010 Mar 02
3
BackupPC, per-dir hard link limit, Debian packaging
I realise that the hard link limit is in the queue to fix, and I read the recent thread as well as the older (October, I think) thread. I just wanted to note that BackupPC *does* in fact run into the hard link limit, and it's due to the dpkg configuration scripts. BackupPC hard-links files with the same content together by scanning new files and linking them together, whether or not they started
2012 Aug 27
7
Deduplication data for CentOS?
Hi list, is there any working solution for deduplication of data for CentOS? We are trying to find a solution for our backup server, which runs a bash script invoking xdelta(3). But having this functionality in the fs is much more friendly... We have looked into lessfs, sdfs and ddar. Are these filesystems ready to use (on CentOS)? ddar is something different, I know. Thx Rainer
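A sketch of the delta step such a backup script might run with xdelta3 (the filenames are hypothetical):

    # Encode a delta of current.tar against previous.tar ...
    xdelta3 -e -s previous.tar current.tar current.vcdiff
    # ... and reconstruct current.tar from the delta later.
    xdelta3 -d -s previous.tar current.vcdiff current.tar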
2010 Jan 22
3
mailbox format w/ separate headers/data
In the future, it would be cool if there were a mailbox format (dbox2?) where mail headers and each MIME part were stored in separate files. This would enable the ZFS dedup feature to be used to maximum benefit. In the ZFS filesystem, there is a dedup feature which stores only one copy of duplicate blocks. In a normal mail file, the headers will be different for each recipient and the chances of