similar to: Understanding VDO vs ZFS

Displaying 20 results from an estimated 6000 matches similar to: "Understanding VDO vs ZFS"

2020 May 03
2
Understanding VDO vs ZFS
sorry, corrections: For this test I created a 40GB LVM volume group with /dev/sdb and /dev/sdc, then a 40GB LV, then a 60GB VDO vol (for testing purposes). vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent' -- output from the just-created vdoas: [root at localhost ~]# vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent' physical blocks : 10483712
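Reconstructed as commands, the setup described above would look roughly like this (a sketch; the vg0/lv0 names are hypothetical, vdoas and the sizes are the poster's):

  pvcreate /dev/sdb /dev/sdc
  vgcreate vg0 /dev/sdb /dev/sdc                 # 40GB volume group from the two disks
  lvcreate -l 100%FREE -n lv0 vg0                # the 40GB LV (100%FREE avoids metadata rounding)
  vdo create --name=vdoas --device=/dev/mapper/vg0-lv0 --vdoLogicalSize=60G
                                                 # 60GB logical on 40GB physical: thin by design
  vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'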
2020 May 03
0
Understanding VDO vs ZFS
On May 3, 2020 8:33:33 AM GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote: >sorry corrections: >For this test I created a 40GB lvm volume group with /dev/sdb and >/dev/sdc >then a 40GB LV >then a 60GB VDO vol (for testing purposes) > >vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent' >output from just created
2020 May 04
0
Understanding VDO vs ZFS
Hi David, in my opinion, VDO isn't worth the effort. I tried VDO for the same use case: backups. My dataset is 2-3TB and I back up daily. Even with a smaller dataset, VDO couldn't live up to its promises. It used tons of CPU and memory, and with a lot of tuning I could get it to kind of work, but it became corrupted at the slightest problem (even a shutdown could do this, and
2020 May 03
0
Understanding VDO vs ZFS
My two cents: 1- Do you have an encrypted filesystem on top of VDO? If yes, you will see no benefit from dedupe. 2- Can you post the stats of vdostats --verbose /dev/mapper/xxxxx (replace with your device)? You can do something like: "vdostats --verbose /dev/mapper/xxxxxxxx | grep -B6 'saving percent'" On Sat, May 2, 2020 at 9:54 PM david <david at daku.org> wrote: >
2018 Sep 03
1
VDO killed my server
Folks I was impressed with the description of VDO (Virtual Data Optimizer) in the RedHat documentation, so much that I tried to use it. The tutorials led me to a few commands. I built a VDO device on top of two USB disks which I made into a Logical Volume, and I was ready to go. In my test case, I had a file set of about 600 GB. There was 5 TB of space across the two-disk LVM. So, I
2020 Jun 16
1
LUKS layer / best practice
Also, if you want to use deduplication (via VDO) then you must remember to "dedupe then encrypt": Storage > LUKS > VDO > LVM. An old but good reference: https://access.redhat.com/articles/2106521 On Tue, Jun 16, 2020 at 3:00 PM Jason Edgecombe <jwedgeco at uncc.edu> wrote: > > I recommend having LUKS be "under" LVM. the layers would be: > /dev/sda ->
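A minimal sketch of that stack order, bottom-up (device and volume names hypothetical):

  cryptsetup luksFormat /dev/sdb
  cryptsetup open /dev/sdb secure                # LUKS directly on the raw storage
  vdo create --name=vdo1 --device=/dev/mapper/secure --vdoLogicalSize=100G
                                                 # VDO above LUKS: blocks dedupe before they are encrypted
  pvcreate /dev/mapper/vdo1                      # LVM on top of the VDO device
  vgcreate vg_secure /dev/mapper/vdo1

Reverse the order (VDO under LUKS) and dedupe only ever sees ciphertext, which never matches.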
2018 Aug 31
0
vdo statustics on Dedup?
Folks I've started to use "vdo" instead of ZFS in CentOS 7. I hope this is a wise decision. However, I'm a bit mystified in decoding the "vdostats" output. I'd like to figure out how well deduplication is working. One measure would be to find two numbers: L = how many blocks are in use as reported to tools like df; P = how many actual blocks are in use.
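Both numbers are in the vdostats --verbose output; one way to pull them out and compute the saving (a sketch, device name hypothetical; 'logical blocks used' is what the filesystem sees, 'data blocks used' is what is physically stored):

  dev=/dev/mapper/vdo0
  L=$(vdostats --verbose $dev | awk -F: '/logical blocks used/ {gsub(/ /,"",$2); print $2}')
  P=$(vdostats --verbose $dev | awk -F: '/data blocks used/ {gsub(/ /,"",$2); print $2}')
  echo "scale=1; (1 - $P / $L) * 100" | bc       # percent saved by dedupe + compression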
2020 Jun 16
2
LUKS layer / best practice
Hi all, with regard to LUKS: should it be placed before LVM or after? Any recommendations? Is the TRIM command fully supported through all layers, etc.? -- Leon
2013 Jun 26
6
[PROGS PATCH] Import btrfs-extent-same
Originally from https://github.com/markfasheh/duperemove/blob/master/btrfs-extent-same.c Signed-off-by: Gabriel de Perthuis <g2p.code+btrfs@gmail.com> ---
 .gitignore          |   1 +
 Makefile            |   2 +-
 btrfs-extent-same.c | 145 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 147 insertions(+), 1 deletion(-)
 create mode 100644 btrfs-extent-same.c
diff
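This is the same extent-same ioctl that duperemove drives; hedged usage examples, with paths and the argument layout as assumptions:

  btrfs-extent-same 1048576 fileA 0 fileB 0      # ask the kernel to share one 1MiB extent after verifying the bytes match
  duperemove -dhr /mnt/data                      # scan, hash, and submit duplicate extents automatically (-d actually dedupes)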
2011 Jan 05
52
Offline Deduplication for Btrfs
Here are patches to do offline deduplication for Btrfs. It works well for the cases it's expected to, I'm looking for feedback on the ioctl interface and such, I'm well aware there are missing features for the userspace app (like being able to set a different blocksize). If this interface is acceptable I will flesh out the userspace app a little more, but I believe the
2010 Mar 05
2
ZFS replication send/receive errors out
My full backup script errored out the last two times I ran it. I've got a full Bash trace of it, so I know exactly what was done. There are a moderate number of snapshots on the zp1 pool, and I'm intending to replicate the whole thing into the backup pool. After housekeeping, I make a current snapshot on the data pool (zp1). Since this is a new full backup, I then
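For reference, a whole-pool replication of that shape typically reduces to the following (snapshot name hypothetical; -R sends the pool recursively with all its snapshots, receive -F lets the target roll back to the last common snapshot):

  snap=backup-$(date +%Y%m%d)
  zfs snapshot -r zp1@$snap                          # recursive snapshot across the data pool
  zfs send -R zp1@$snap | zfs receive -Fdu backup    # replicate everything into the backup pool (-u: leave unmounted)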
2009 Nov 02
24
dedupe is in
Deduplication was committed last night by Mr. Bonwick: > Log message: > PSARC 2009/571 ZFS Deduplication Properties > 6677093 zfs should have dedup capability http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html Via c0t0d0s0.org.
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote: > Brent, > > I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue. > > The other issue I noticed is that, as opposed to the
2013 May 05
10
Possible to deduplicate read-only snapshots for space-efficient backups
Hey list, I wonder if it is possible to deduplicate read-only snapshots. Background: I'm using a bash/rsync script[1] to back up my whole system on a nightly basis to an attached USB3 drive into a scratch area, then take a snapshot of this area. I'd like to have these snapshots immutable, so they should be read-only. Since rsync won't discover moved files but
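The scratch-then-snapshot pattern described can be sketched as follows, assuming the USB drive is btrfs mounted at /mnt/backup and scratch was created with btrfs subvolume create (paths hypothetical):

  rsync -aHAX --delete / /mnt/backup/scratch/ \
        --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"}
  btrfs subvolume snapshot -r /mnt/backup/scratch /mnt/backup/$(date +%Y-%m-%d)
                                                 # -r creates the snapshot read-only, i.e. immutable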
2010 Aug 18
10
Networker & Dedup @ ZFS
Hi, We are considering using ZFS based storage as a staging disk for Networker. We're aiming at providing enough storage to be able to keep 3 months' worth of backups on disk, before it's moved to tape. To provide storage for 3 months of backups, we want to utilize the dedup functionality in ZFS. I've searched around for these topics and found no success stories,
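One preparatory step worth knowing about: zdb can simulate dedup against data already on a pool, giving the projected ratio and dedup-table size before committing to it (pool name hypothetical):

  zdb -S tank        # builds a simulated dedup table and prints its histogram and projected ratio

The DDT has to stay in RAM (or L2ARC) for write performance to hold up, which is where most dedup stories go wrong.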
2009 Nov 24
9
Best practices for zpools on zfs
Suppose I have a storage server that runs ZFS, presumably providing file (NFS) and/or block (iSCSI, FC) services to other machines that are running Solaris. Some of the use will be for LDoms and zones[1], which would create zpools on top of zfs (fs or zvol). I have concerns about variable block sizes and the implications for performance. 1.
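The block-size concern is concrete: a zvol's volblocksize is fixed at creation, so it has to be chosen to match what the guest zpool will write; a sketch (dataset names hypothetical):

  zfs create -V 20G -o volblocksize=8k tank/ldom1   # fixed at creation, cannot be changed later
  zfs create -o recordsize=128k tank/share          # filesystems use recordsize instead, which is only an upper bound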
2009 Oct 30
30
Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto
For the encryption functionality in the ZFS filesystem we use AES in CCM or GCM mode at the block level to provide confidentiality and authentication. There is also a SHA256 checksum per block (of the ciphertext) that forms a Merkle tree of all the blocks in the pool. Note that I have to store the full IV in the block. A block here is a ZFS block which is any power of two from 512 bytes to
2013 Aug 22
3
Deduplication
Hello, some questions regarding btrfs deduplication. - What is the state of it? Is it "safe" to use? https://btrfs.wiki.kernel.org/index.php/Deduplication does not yield much information. - https://pypi.python.org/pypi/bedup says: "bedup looks for new and changed files, making sure that multiple copies of identical files share space on disk. It integrates deeply with btrfs so
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, Created a zpool with 64k recordsize and enabled dedupe on it:
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client. Here is the output of zpool list:
Prompt:~# zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
TestPool   696G  19.1G   677G   2%  1.13x  ONLINE  -
When I ran a
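The DEDUP column is the pool-wide dedup ratio; it can also be queried directly, and zdb shows how the dedup table actually breaks down (relevant here because dedup is per 64k block, so identical files only dedupe where their blocks align exactly):

  zpool get dedupratio TestPool     # the same 1.13x shown in zpool list
  zdb -DD TestPool                  # DDT histogram: how many blocks are referenced once, twice, etc.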
2019 Mar 04
2
Removing a mailbox from a dovecot cluster
On Mon, Mar 4, 2019 at 12:48, Gerald Galster via dovecot <dovecot at dovecot.org> wrote: > > Hello Francis, > > have you tried removing the account from your ldap? If dovecot has no > information about a particular user, it won't replicate. > > Then you would have to delete the mailbox (on both cluster nodes) from the > filesystem (rm -rf /path/to/mailbox) >
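Putting the quoted advice together for a replicator-based cluster (username and mailbox path are hypothetical):

  doveadm replicator remove bob@example.com    # stop the replicator from tracking the account
  rm -rf /var/vmail/example.com/bob            # then delete the on-disk mailbox, on both nodes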