Is there anyone here using ZFS on top of a GELI-encrypted provider on hardware which could be considered "slow" by today's standards? What are the performance implications of doing this? The reason I am asking is that I am in the process of building a small home NAS/webserver, starting with a single disk (intending to expand as the need arises) on the following hardware: http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html This is essentially an Intel Atom 330 1.6 GHz dual-core on an Intel D945GCLF2-based board with 2GB RAM; the first disk I am going to use is a 1.5TB Western Digital Caviar Green.

I had someone run a few openssl crypto benchmarks (to unscientifically assess the maximum possible GELI performance) on a machine running FreeBSD on nearly the same hardware, and it seems the CPU would become the bottleneck at roughly 200 MB/s throughput when using 128-bit Blowfish, 70 MB/s when using AES128 and 55 MB/s when using AES256. This, on its own, is definitely enough for my needs (especially in the case of using Blowfish), but what are the performance implications of using ZFS on top of a GELI-encrypted provider?

Also, feel free to criticize my planned filesystem layout for the first disk of this system. The idea behind /mnt/sysbackup is to take a snapshot of the FreeBSD installation and its settings before doing potentially hazardous things like upgrading to a new -RELEASE:

ad1s1 (freebsd system slice)
  ad1s1a => 128bit Blowfish ad1s1a.eli   4GB    swap
  ad1s1b                                 128GB  ufs2+s  /
  ad1s1c                                 128GB  ufs2+s  noauto  /mnt/sysbackup

ad1s2 => 128bit Blowfish ad1s2.eli
  zpool
    /home
    /mnt/data1

Thanks for your input.

- Dan Naumov
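P.S. In case anyone wants to reproduce the openssl numbers mentioned above: the exact invocation is my guess, but openssl's built-in speed test with the standard EVP cipher names would look like this:

  # raw in-CPU cipher throughput, no disks involved
  openssl speed -evp bf-cbc        # 128-bit Blowfish
  openssl speed -evp aes-128-cbc   # AES128
  openssl speed -evp aes-256-cbc   # AES256

And the planned layout above would translate roughly into the following geli(8) commands (device names as in the layout; the pool name "tank" is just a placeholder):

  # one-time key for encrypted swap, regenerated on every boot
  geli onetime -e blowfish -l 128 /dev/ad1s1a
  swapon /dev/ad1s1a.eli

  # passphrase-protected provider for the data slice, pool on top
  geli init -e blowfish -l 128 /dev/ad1s2
  geli attach /dev/ad1s2
  zpool create tank /dev/ad1s2.eli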
> Is there anyone here using ZFS on top of a GELI-encrypted provider on
> hardware which could be considered "slow" by today's standards? What
> are the performance implications of doing this?

I run a mirrored zpool on top of a pair of 1TB SATA drives - they are only 7200 rpm, so pretty dog slow as far as I'm concerned. The CPU is a dual-core Athlon 6400, and I am running amd64.

The performance is not brilliant - about 25 meg/second writing a file, and about 53 meg/second reading it. It's a bit disappointing really - that's a lot slower than I expected when I built it, especially the write speed.

-pete.
Dan Naumov wrote:
> Is there anyone here using ZFS on top of a GELI-encrypted provider on
> hardware which could be considered "slow" by today's standards? What
> are the performance implications of doing this? The reason I am asking
> is that I am in the process of building a small home NAS/webserver,
> starting with a single disk (intending to expand as the need arises)
> on the following hardware:
> http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html This
> is essentially an Intel Atom 330 1.6 GHz dual-core on an Intel
> D945GCLF2-based board with 2GB RAM; the first disk I am going to use
> is a 1.5TB Western Digital Caviar Green.
>
> I had someone run a few openssl crypto benchmarks (to unscientifically
> assess the maximum possible GELI performance) on a machine running
> FreeBSD on nearly the same hardware, and it seems the CPU would become
> the bottleneck at roughly 200 MB/s throughput when using 128-bit
> Blowfish, 70 MB/s when using AES128 and 55 MB/s when using AES256.
> This, on its own, is definitely enough for my needs (especially in
> the case of using Blowfish), but what are the performance implications
> of using ZFS on top of a GELI-encrypted provider?

I have a zpool mirror on top of two 128-bit GELI Blowfish devices with sector size 4096; my system is a D945GCLF2 with 2GB RAM and an Intel Atom 330 1.6 GHz dual-core. The two disks are a WDC WD10EADS and a WD10EACS (5400rpm). The system is running 8.0-CURRENT amd64. I have set kern.geom.eli.threads=3.

This is far from a real benchmark, but: using dd with bs=4m I get 35 MByte/s writing to the mirror (writing 35 MByte/s to each disk) and 48 MByte/s reading from the mirror (reading with 24 MByte/s from each disk).

My experience is that ZFS is not much of an overhead and will not degrade the performance as much as the encryption does, so GELI is the limiting factor. Using ZFS without GELI on this system gives way higher read and write numbers, like reading at 70 MByte/s per disk etc.

greetings,
philipp
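P.S. For anyone wanting to repeat this: the threads setting is a tunable that belongs in /boot/loader.conf (it is picked up when providers are attached, not changed on the fly), and the dd runs were of this general shape (the pool's mount point is assumed to be /tank here):

  # /boot/loader.conf - number of GELI crypto worker threads
  kern.geom.eli.threads=3

  # write test: stream 8 GB of zeroes through ZFS+GELI
  dd if=/dev/zero of=/tank/testfile bs=4m count=2048

  # read test - export/import the pool (or reboot) first, otherwise
  # the ARC cache makes the numbers look far better than the disks are
  dd if=/tank/testfile of=/dev/null bs=4m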
Thank you for your numbers, now I know what to expect when I get my new machine, since our system specs look identical. So basically, on this system:

unencrypted ZFS read:              ~70 MB/s per disk
128-bit Blowfish GELI/ZFS write:    35 MB/s per disk
128-bit Blowfish GELI/ZFS read:     24 MB/s per disk

I am curious what part of GELI is so inefficient as to cause such a dramatic slowdown. In comparison, my home desktop is a C2D E6600 2.4 GHz, 4GB RAM, Intel DP35DP, 1 x 1.5TB Seagate Barracuda, running Windows Vista x64 SP1:

Read/Write on an unencrypted NTFS partition:             ~85 MB/s
Read/Write on a TrueCrypt AES-encrypted NTFS partition:  ~65 MB/s

As you can see, the performance drop is noticeable, but not anywhere near as dramatic.

- Dan Naumov

> I have a zpool mirror on top of two 128-bit GELI Blowfish devices with
> sector size 4096; my system is a D945GCLF2 with 2GB RAM and an Intel
> Atom 330 1.6 GHz dual-core. The two disks are a WDC WD10EADS and a
> WD10EACS (5400rpm). The system is running 8.0-CURRENT amd64. I have
> set kern.geom.eli.threads=3.
>
> This is far from a real benchmark, but: using dd with bs=4m I get 35
> MByte/s writing to the mirror (writing 35 MByte/s to each disk) and
> 48 MByte/s reading from the mirror (reading with 24 MByte/s from each
> disk).
>
> My experience is that ZFS is not much of an overhead and will not
> degrade the performance as much as the encryption does, so GELI is
> the limiting factor. Using ZFS without GELI on this system gives way
> higher read and write numbers, like reading at 70 MByte/s per disk
> etc.
>
> greetings,
> philipp
On Fri, 29.05.2009 at 11:19:44 +0300, Dan Naumov wrote:
> Also, feel free to criticize my planned filesystem layout for the
> first disk of this system. The idea behind /mnt/sysbackup is to take a
> snapshot of the FreeBSD installation and its settings before doing
> potentially hazardous things like upgrading to a new -RELEASE:
>
> ad1s1 (freebsd system slice)
>   ad1s1a => 128bit Blowfish ad1s1a.eli   4GB    swap
>   ad1s1b                                 128GB  ufs2+s  /
>   ad1s1c                                 128GB  ufs2+s  noauto  /mnt/sysbackup
>
> ad1s2 => 128bit Blowfish ad1s2.eli
>   zpool
>     /home
>     /mnt/data1

Hi Dan,

everybody has different needs, but what exactly are you doing with 128GB of /? What I did is the following: 2GB CF card + CF-to-ATA adapter (today, I would use 2x8GB USB sticks; CF-to-ATA adapters suck, but then again, which mobo has internal USB ports?)

Filesystem  1024-blocks     Used   Avail Capacity  Mounted on
/dev/ad0a        507630   139740  327280      30%  /
/dev/ad0d       1453102  1292296   44558      97%  /usr
/dev/md0         253678       16  233368       0%  /tmp

/usr is quite crowded, but I just need to clean up some ports again. /var, /usr/src, /home, /usr/obj and /usr/ports are all on the GELI+ZFS pool. If /usr turns out to be too small, I can also move /usr/local there. That way, booting and single-user mode involve trusty old UFS only.

I also do regular dumps from the UFS filesystems to the ZFS tank, but there's really no sacred data under / or /usr that I would miss if the system crashed (all configuration changes are tracked using mercurial).

Anyway, my point is to use the full disks for GELI+ZFS whenever possible. This makes it easier to replace faulty disks or grow ZFS pools. The FreeBSD base system I would put somewhere else.

Cheers,
Ulrich Spörlein
--
http://www.dubistterrorist.de/
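P.S. A sketch of what I mean, in case it helps. The regular dumps can be taken from the live system thanks to dump's -L flag, which snapshots the UFS filesystem first (the target paths on the pool are just examples):

  # level-0 dumps of / and /usr into compressed files on the ZFS pool
  dump -0Lauf - /    | gzip > /tank/backup/root.dump.gz
  dump -0Lauf - /usr | gzip > /tank/backup/usr.dump.gz

And "use the full disks" simply means skipping fdisk/bsdlabel and handing the raw device to GELI (disk and pool names hypothetical):

  geli init -e blowfish -l 128 -s 4096 /dev/ad2   # 4k sectors cut per-sector crypto overhead
  geli attach /dev/ad2
  zpool create tank /dev/ad2.eli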
Ulrich Spörlein wrote:
> 2GB CF card + CF-to-ATA adapter (today, I would use 2x8GB USB sticks;
> CF-to-ATA adapters suck, but then again, which mobo has internal USB
> ports?)

Many have an internal USB header:
http://www.logicsupply.com/products/afap_082usb