Hello list.

I am evaluating options for my upcoming storage system, where for various
reasons the data will be stored on 2 x 2TB SATA disks in a mirror and has to
be encrypted (a 40GB Intel SSD will be used for the system disk). Right now I
am considering FreeBSD with GELI+ZFS and Debian Linux with MDRAID and
cryptofs. Has anyone here made any benchmarks showing how much of a
performance hit is caused by using 2 geli devices as vdevs for a ZFS mirror
pool in FreeBSD (a similar configuration is described here:
http://blog.experimentalworks.net/2008/03/setting-up-an-encrypted-zfs-with-freebsd/)?
Some direct comparisons using bonnie++ or similar, showing "this is
read/write/IOPS on top of a plain ZFS mirror and this is read/write/IOPS on
top of a ZFS mirror using GELI", would be nice.

I am mostly interested in benchmarks on lower-end hardware. The system is an
Atom 330 which is currently running Windows 2008 Server with TrueCrypt in a
non-RAID configuration; with that setup I am getting roughly 55 MB/s reads
and writes through TrueCrypt (unencrypted it's around 115 MB/s).

Thanks.

- Sincerely,
Dan Naumov
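For reference, a minimal sketch of the kind of bonnie++ run being asked for
(assuming bonnie++ from the benchmarks/bonnie++ port; the scratch directory
and size are placeholders, with -s set to roughly twice the machine's RAM so
the test is not served from cache):

    # run once on the plain ZFS mirror, once on the GELI-backed mirror
    bonnie++ -d /tank/bench -s 4096 -u root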
On Sun, Jan 10, 2010 at 05:08:29PM +0200, Dan Naumov wrote:
> Hello list.
>
> I am evaluating options for my upcoming storage system, where for various
> reasons the data will be stored on 2 x 2TB SATA disks in a mirror and has
> to be encrypted (a 40GB Intel SSD will be used for the system disk). Right
> now I am considering FreeBSD with GELI+ZFS and Debian Linux with MDRAID and
> cryptofs. Has anyone here made any benchmarks showing how much of a
> performance hit is caused by using 2 geli devices as vdevs for a ZFS mirror
> pool in FreeBSD (a similar configuration is described here:
> http://blog.experimentalworks.net/2008/03/setting-up-an-encrypted-zfs-with-freebsd/)?
> Some direct comparisons using bonnie++ or similar, showing "this is
> read/write/IOPS on top of a plain ZFS mirror and this is read/write/IOPS on
> top of a ZFS mirror using GELI", would be nice.
>
> I am mostly interested in benchmarks on lower-end hardware. The system is
> an Atom 330 which is currently running Windows 2008 Server with TrueCrypt
> in a non-RAID configuration; with that setup I am getting roughly 55 MB/s
> reads and writes through TrueCrypt (unencrypted it's around 115 MB/s).

Although I cannot comment on ZFS, my $HOME partition is UFS2+geli. Reads
(with dd) of uncached big[1] files are ~70 MB/s. Reading an uncached big file
from a non-encrypted UFS2 partition is ~120 MB/s. Note that the vfs cache has
a huge influence here; repeating the same read will be 4 to 7 times faster!
The sysctls for ZFS caching will probably have a big impact too.

Roland

[1] several 100s of MiB.

-- 
R.F. Smith                                   http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914 B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)
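A sketch of the kind of dd read test described above (the file path is a
placeholder; the file should be several hundred MiB and not recently read):

    # the first run reads from disk, the repeat is served largely from the vfs cache
    dd if=/home/user/bigfile of=/dev/null bs=1m
    dd if=/home/user/bigfile of=/dev/null bs=1m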
On Sun, Jan 10, 2010 at 6:12 PM, Damian Gerow <dgerow@afflictions.org> wrote:
> Dan Naumov wrote:
> : I am mostly interested in benchmarks on lower-end hardware. The system is
> : an Atom 330 which is currently running Windows 2008 Server with TrueCrypt
> : in a non-RAID configuration; with that setup I am getting roughly 55 MB/s
> : reads and writes through TrueCrypt (unencrypted it's around 115 MB/s).
>
> I've been using GELI-backed vdevs for some time now -- since 7.2-ish
> timeframes. I've never benchmarked it, but I was running on relatively
> low-end hardware. A few things to take into consideration:
>
> 1) Make sure the individual drives are encrypted -- especially if they're
>    >=1TB. This is less a performance thing and more a "make sure your
>    encryption actually encrypts properly" thing.
> 2) Seriously consider using the new AHCI driver. I've been using it in a
>    few places, it's quite stable, and there is a marked performance
>    improvement - 10-15% on the hardware I've got.
> 3) Take a look at the VIA platform as a replacement for the Atom. I was
>    running on an EPIA-SN 1800 (1.8GHz) and didn't have any real troubles
>    with the encryption aspect of the rig (4x1TB drives). Actually, if you
>    get performance numbers privately comparing the Atom to a VIA (Nano or
>    otherwise), can you post them to the list? I'm curious to see if the
>    on-chip encryption actually makes a difference.
> 4) Since you're asking for benchmarks, it's probably best if you post the
>    specific bonnie command you want run -- that way it's tailored to your
>    use case and you'll get consistent, comparable results.

Yes, this is basically what I was considering:

new AHCI driver => 40GB Intel SSD => UFS2 with softupdates for the system
installation
new AHCI driver => 2 x 2TB disks, each fully encrypted with geli => 2 geli
vdevs for a ZFS mirror for important data

The reason I am considering the new AHCI driver is to get NCQ support now and
TRIM support for the SSD later when it gets implemented, although if the
performance difference right now is already 10-15%, that's a good enough
reason on its own. On a semi-related note, is it still recommended to use
softupdates, or is gjournal a better choice today?

- Sincerely,
Dan Naumov
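A rough sketch of how that data-pool layout might be set up (device names,
key paths and the pool name are placeholders; with the new AHCI driver the
disks appear as ada* rather than ad*, and geli will additionally prompt for a
passphrase unless -P/-p is used):

    # /boot/loader.conf - load the new AHCI driver plus geli and ZFS at boot
    ahci_load="YES"
    geom_eli_load="YES"
    zfs_load="YES"

    # encrypt both 2TB data disks with 4k sectors, then attach them
    geli init -s 4096 -K /etc/keys/ada1.key /dev/ada1
    geli init -s 4096 -K /etc/keys/ada2.key /dev/ada2
    geli attach -k /etc/keys/ada1.key /dev/ada1
    geli attach -k /etc/keys/ada2.key /dev/ada2

    # build the ZFS mirror on top of the two .eli providers
    zpool create tank mirror ada1.eli ada2.eli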
On Sun, Jan 10, 2010 at 8:46 PM, Damian Gerow <dgerow@afflictions.org> wrote:
> Dan Naumov wrote:
> : Yes, this is basically what I was considering:
> :
> : new AHCI driver => 40GB Intel SSD => UFS2 with softupdates for the
> : system installation
> : new AHCI driver => 2 x 2TB disks, each fully encrypted with geli => 2
> : geli vdevs for a ZFS mirror for important data
>
> If performance is an issue, you may want to consider carving off a partition
> on that SSD, geli-fying it, and using it as a ZIL device. You'll probably
> see a marked performance improvement with such a setup.

That is true, but using a single device for a dedicated ZIL is a huge no-no.
Since it's an intent log, it's used to reconstruct the pool after, for
example, a power failure; should such an event occur at the same time as the
ZIL provider dies, you lose the entire pool because there is no way to
recover it. So if the ZIL gets put "elsewhere", that elsewhere really should
be a mirror, and sadly I don't see myself affording 2 SSDs for this setup :)

- Sincerely,
Dan Naumov
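For completeness, a sketch of what a mirrored dedicated log would look like
if two SSDs were available (pool name and partition names are hypothetical):

    # attach a mirrored pair of (geli-backed) SSD partitions as the intent log
    zpool add tank log mirror ada0p3.eli ada1p3.eli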
> GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
> made any benchmarks regarding how much of a performance hit is caused
> by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a

I haven't done it directly on the same boxes, but I have two systems with
identical drives, each with a ZFS mirror pool, one with GELI and one without.
A simple read test shows no overhead from using GELI at all.

I would recommend using the new AHCI driver though - it greatly improves
throughput.

-pete.
On Mon, Jan 11, 2010 at 7:30 PM, Pete French <petefrench@ticketswitch.com> wrote:
>> GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
>> made any benchmarks regarding how much of a performance hit is caused
>> by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a
>
> I haven't done it directly on the same boxes, but I have two systems with
> identical drives, each with a ZFS mirror pool, one with GELI and one
> without. A simple read test shows no overhead from using GELI at all.
>
> I would recommend using the new AHCI driver though - it greatly improves
> throughput.

How fast is the CPU in the system showing no overhead? Having no noticeable
overhead whatsoever sounds extremely unlikely unless you are running it on
something like a very modern dual-core or better.

- Sincerely,
Dan Naumov
2010/1/12 Rafał Jackiewicz <freebsd@o2.pl>:
> Two hdd Seagate ES2, Intel Atom 330 (2x1.6GHz), 2GB RAM:
>
> geli:
>    geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
>    geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
>
> zfs:
>    zpool create data01 ad4s2.eli
>
> df -h:
>    /dev/ad6s2.eli.journal    857G    8.0K    788G    0%    /data02
>    data01                    850G    128K    850G    0%    /data01
>
> srebrny# dd if=/dev/zero of=/data01/test bs=1M count=500
> 500+0 records in
> 500+0 records out
> 524288000 bytes transferred in 8.802691 secs (59559969 bytes/sec)
> srebrny# dd if=/dev/zero of=/data02/test bs=1M count=500
> 500+0 records in
> 500+0 records out
> 524288000 bytes transferred in 20.090274 secs (26096608 bytes/sec)
>
> Rafal Jackiewicz

Thanks, could you do the same, but using 2 .eli vdevs mirrored together in a
ZFS mirror?

- Sincerely,
Dan Naumov
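Presumably that would mean recreating the pool along these lines (reusing the
same .eli providers; this is a sketch, not the exact commands used):

    # replace the single-provider pool with a mirror of the two .eli providers
    zpool destroy data01
    zpool create data01 mirror ad4s2.eli ad6s2.eli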
Two hdd Seagate ES2, Intel Atom 330 (2x1.6GHz), 2GB RAM:

geli:
   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2

zfs:
   zpool create data01 ad4s2.eli

df -h:
   /dev/ad6s2.eli.journal    857G    8.0K    788G    0%    /data02
   data01                    850G    128K    850G    0%    /data01

srebrny# dd if=/dev/zero of=/data01/test bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 8.802691 secs (59559969 bytes/sec)
srebrny# dd if=/dev/zero of=/data02/test bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 20.090274 secs (26096608 bytes/sec)

Rafal Jackiewicz
> Thanks, could you do the same, but using 2 .eli vdevs mirrored
> together in a ZFS mirror?
>
> - Sincerely,
> Dan Naumov

Hi,

Proc: Intel Atom 330 (2x1.6GHz) - 1 package(s) x 2 core(s) x 2 HTT threads
Chipset: Intel 82945G
Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
empty file: /boot/loader.conf
Hdd:
   ad4: 953869MB <Seagate ST31000533CS SC15> at ata2-master SATA150
   ad6: 953869MB <Seagate ST31000533CS SC15> at ata3-master SATA150
Geli:
   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2

Results:
****************************************************

*** single drive             write MB/s   read MB/s
eli.journal.ufs2             23           14
eli.zfs                      19           36

*** mirror                   write MB/s   read MB/s
mirror.eli.journal.ufs2      23           16
eli.zfs                      31           40
zfs                          83           79

*** degraded mirror          write MB/s   read MB/s
mirror.eli.journal.ufs2      16           9
eli.zfs                      56           40
zfs                          86           71

****************************************************

***** Single drive: *****

Mount:
data01 on /data01 (zfs, local)
/dev/ad6s2.eli.journal on /data02 (ufs, local, gjournal)

*** (single hdd) UFS2, gjournal, eli:
srebrny# time dd if=/dev/zero of=/data02/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 92.451346 secs (22683845 bytes/sec)
0.068u 10.386s 1:32.46 11.2% 26+1497k 63+16066io 0pf+0w

** umount / mount, and:
srebrny# time dd if=/data02/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 147.219379 secs (14245081 bytes/sec)
0.008u 4.456s 2:27.22 3.0% 23+1324k 16066+0io 0pf+0w

*** (single hdd) zfs:
srebrny# time dd if=/dev/zero of=/data01/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 113.049629 secs (18550720 bytes/sec)
0.014u 5.480s 1:53.05 4.8% 26+1516k 0+0io 0pf+0w

** umount / mount, and:
srebrny# time dd if=/data01/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 59.012860 secs (35537203 bytes/sec)
0.000u 3.135s 0:59.01 5.3% 24+1397k 0+0io 0pf+0w

***** Mirror: *****

*** (mirror hdd) UFS2, gjournal, eli:
srebrny# gmirror list
Geom name: data02
State: COMPLETE
Components: 2
Balance: round-robin
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 1

**
srebrny# time dd if=/dev/zero of=/data02/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 89.441874 secs (23447094 bytes/sec)
0.022u 11.110s 1:29.45 12.4% 26+1515k 64+16066io 0pf+0w

** umount / mount, and:
srebrny# time dd if=/data02/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 134.567914 secs (15584339 bytes/sec)
0.007u 4.333s 2:14.62 3.2% 26+1498k 16067+0io 0pf+0w

*** (mirror hdd, eli) zfs:
srebrny# time dd if=/dev/zero of=/data01/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 67.255574 secs (31181832 bytes/sec)
0.029u 6.422s 1:07.25 9.5% 26+1531k 0+0io 0pf+0w

** (eli) umount / mount, and:
srebrny# time dd if=/data01/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 52.307404 secs (40092833 bytes/sec)
0.036u 3.405s 0:52.31 6.5% 27+1543k 0+0io 0pf+0w

*** (mirror hdd, no eli!) zfs:
srebrny# time dd if=/dev/zero of=/data01/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.185164 secs (83269341 bytes/sec)
0.000u 5.506s 0:25.18 21.8% 26+1513k 0+0io 0pf+0w

** (no eli!) umount / mount, and:
srebrny# time dd if=/data01/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 26.457374 secs (79265312 bytes/sec)
0.000u 3.011s 0:26.45 11.3% 24+1396k 0+0io 0pf+0w

*****

*** (mirror !!!degraded!!!, single drive ad4s2) UFS2, gjournal, eli:
df -h
/dev/mirror/data02.eli.journal    857G    8.0K    788G    0%    /data02

**
srebrny# time dd if=/dev/zero of=/data02/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 131.554958 secs (15941262 bytes/sec)
0.029u 10.057s 2:11.58 7.6% 26+1528k 64+16066io 0pf+0w

** (mirror !!!degraded!!!, single drive ad4s2) umount / mount, and:
srebrny# time dd if=/data02/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 226.056204 secs (9277127 bytes/sec)
0.029u 4.226s 3:46.08 1.8% 26+1529k 16066+0io 0pf+0w

*** (mirror !!!degraded!!!, single drive ad4s2; eli) zfs:
srebrny# time dd if=/dev/zero of=/data01/test011 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 37.548232 secs (55852217 bytes/sec)
0.007u 5.480s 0:37.55 14.5% 26+1513k 0+0io 0pf+0w

** (mirror !!!degraded!!!, single drive ad4s2; eli) umount / mount, and:
srebrny# time dd if=/data01/test011 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 51.266119 secs (40907173 bytes/sec)
0.036u 3.238s 0:51.28 6.3% 28+1549k 0+0io 0pf+0w

*** (mirror !!!degraded!!!, single drive ad4s2; no eli!) zfs:
srebrny# time dd if=/dev/zero of=/data01/test011 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 24.296032 secs (86316646 bytes/sec)
0.000u 5.463s 0:24.29 22.4% 26+1512k 0+0io 0pf+0w

** (mirror !!!degraded!!!, single drive ad4s2; no eli!) umount / mount, and:
srebrny# time dd if=/data01/test011 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 29.512372 secs (71060096 bytes/sec)
0.036u 3.275s 0:29.51 11.1% 27+1563k 0+0io 0pf+0w

Best regards,
Rafal Jackiewicz
2010/1/12 Rafał Jackiewicz <freebsd@o2.pl>:
>> Thanks, could you do the same, but using 2 .eli vdevs mirrored
>> together in a ZFS mirror?
>>
>> - Sincerely,
>> Dan Naumov
>
> Hi,
>
> Proc: Intel Atom 330 (2x1.6GHz) - 1 package(s) x 2 core(s) x 2 HTT threads
> Chipset: Intel 82945G
> Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
> empty file: /boot/loader.conf
> Hdd:
>    ad4: 953869MB <Seagate ST31000533CS SC15> at ata2-master SATA150
>    ad6: 953869MB <Seagate ST31000533CS SC15> at ata3-master SATA150
> Geli:
>    geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
>    geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
>
> Results:
> ****************************************************
>
> *** single drive             write MB/s   read MB/s
> eli.journal.ufs2             23           14
> eli.zfs                      19           36
>
> *** mirror                   write MB/s   read MB/s
> mirror.eli.journal.ufs2      23           16
> eli.zfs                      31           40
> zfs                          83           79
>
> *** degraded mirror          write MB/s   read MB/s
> mirror.eli.journal.ufs2      16           9
> eli.zfs                      56           40
> zfs                          86           71
>
> ****************************************************

Thanks a lot for your numbers, the relevant part for me was this:

*** mirror                   write MB/s   read MB/s
eli.zfs                      31           40
zfs                          83           79

*** degraded mirror          write MB/s   read MB/s
eli.zfs                      56           40
zfs                          86           71

31 MB/s writes and 40 MB/s reads is something I guess I could live with. I am
guessing the main problem of stacking ZFS on top of geli like this is that
writing to a mirror requires double the CPU work: all written data has to be
encrypted twice (once for each disk), instead of being encrypted once and the
encrypted result then written to both disks, as would be the case if the
crypto sat on top of ZFS rather than ZFS sitting on top of the crypto.

I now have to reevaluate my planned use of an SSD, though. I was planning to
use a 40GB partition on an Intel 80GB X25-M G2 as a dedicated L2ARC device
for a ZFS mirror of 2 x 2TB disks. However, these numbers make it quite
obvious that I would already be CPU-starved at 40-50 MB/s throughput on the
encrypted ZFS mirror, so adding an L2ARC SSD, while improving latency, would
do really nothing for actual disk read speeds, considering the L2ARC itself
would also have to sit on top of a GELI device.

- Sincerely,
Dan Naumov
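For reference, the L2ARC setup being reconsidered would presumably have
looked something like this (SSD partition, key path and pool name are
placeholders):

    # encrypt the SSD partition and add it to the pool as an L2ARC cache device
    geli init -s 4096 -K /etc/keys/cache.key /dev/ada0p3
    geli attach -k /etc/keys/cache.key /dev/ada0p3
    zpool add tank cache ada0p3.eli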