Dewayne Geraghty
2020-Nov-06 05:46 UTC
Has geli broken when using authentication (hmac/sha256)?
Using FreeBSD 12.2S r367125M,
# geli init -a HMAC/SHA256 -e aes-cbc -l 128 -P -s 4096 -K /tmp/key ${D}s1a
fails during newfs:
# newfs -O2 -U ${D}s1a.eli
newfs: can't read old UFS1 superblock: read error from block device: Invalid argument
Using geli with encryption only, everything works as usual: newfs succeeds,
and the filesystem mounts and is accessible. But HMAC/SHA256 authentication
fails, whether used with "-e null" or in combination with a cipher.
Could someone verify if something is broken? I've included my test case
below:
--
Reproducible with:
D=/dev/md0
# Cleanup previous runs
umount /mnt/A || true
geli detach ${D}s1a || true
mdconfig -du 0 || true
rm /tmp/test || true
truncate -s 64m /tmp/test
mdconfig -t vnode -f /tmp/test
gpart create -s MBR ${D}
gpart add -a 4k -s 14m -t freebsd $D
gpart add -a 4k -s 10m -t freebsd $D
gpart add -a 4k -s 10m -t freebsd $D
gpart create -s bsd ${D}s1
gpart create -s bsd ${D}s2
gpart add -a 4k -s 10m -t freebsd-ufs ${D}s1
openssl rand -hex -out /tmp/key 32
geli init -a HMAC/SHA256 -e aes-cbc -l 128 -P -s 4096 -K /tmp/key ${D}s1a
geli attach -p -k /tmp/key ${D}s1a
newfs -O2 -U ${D}s1a.eli
/dev/md0s1a.eli: 8.9MB (18200 sectors) block size 32768, fragment size 4096
using 4 cylinder groups of 2.25MB, 72 blks, 384 inodes.
with soft updates
newfs: can't read old UFS1 superblock: read error from block device: Invalid argument
However, using UFS1, newfs succeeds but the mount fails:
newfs -O1 -U ${D}s1a.eli
/dev/md0s1a.eli: 8.9MB (18200 sectors) block size 32768, fragment size 4096
using 4 cylinder groups of 2.25MB, 72 blks, 512 inodes.
with soft updates
super-block backups (for fsck_ffs -b #) at:
64, 4672, 9280, 13888
# mount -v /dev/md0s1a.eli /mnt/A
mount: /dev/md0s1a.eli: Invalid argument
The only change that may be related is:
# svnlite log -l 4 /usr/src/tests/sys/geom/class/eli
------------------------------------------------------------------------
r363486 | asomers | 2020-07-25 04:19:25 +1000 (Sat, 25 Jul 2020) | 13 lines
MFC r363014:
geli: enable direct dispatch
geli does all of its crypto operations in a separate thread pool, so
g_eli_start, g_eli_read_done, and g_eli_write_done don't actually do very
much work. Enabling direct dispatch eliminates the g_up/g_down bottlenecks,
doubling IOPs on my system. This change does not affect the thread pool.
Reviewed by: markj
Sponsored by: Axcient
Differential Revision: https://reviews.freebsd.org/D25587
Cheers, Dewayne
--
*** NOTICE This email and any attachments may contain legally privileged
or confidential information and may be protected by copyright. You must
not use or disclose them other than for the purposes for which they were
supplied. The privilege or confidentiality attached to this message and
attachments is not waived by reason of mistaken delivery to you. If you
are not the intended recipient, you must not use, disclose, retain,
forward or reproduce this message or any attachments. If you receive
this message in error please notify the sender by return email or
telephone and destroy and delete all copies. ***
John-Mark Gurney
2020-Nov-07 10:06 UTC
Has geli broken when using authentication (hmac/sha256)?
Dewayne Geraghty wrote this message on Fri, Nov 06, 2020 at 16:46 +1100:
> Using FreeBSD 12.2S r367125M,
> # geli init -a HMAC/SHA256 -e aes-cbc -l 128 -P -s 4096 -K /tmp/key ${D}s1a
> fails during newfs:
> # newfs -O2 -U ${D}s1a.eli
> newfs: can't read old UFS1 superblock: read error from block device:
> Invalid argument
>
> Using geli with encryption only, works as usual. But using hmac/sha256
> fails when used with "-e null" or in combination with a cipher.
>
> Could someone verify if something is broken? I've included my test case
> below:

What happens if you zero out the device first:
dd if=/dev/zero of=${D}s1a.eli bs=1m

If it's large, you likely only need to set the count to 1 or 2...

newfs is likely trying to read to make sure there aren't any old file
systems there, but geli init doesn't write new data, so any reads will
fail...

Note that the geli man page says:
     It is recommended to write to the whole provider before first use, in
     order to make sure that all sectors and their corresponding checksums
     are properly initialized into a consistent state.  One can safely
     ignore data authentication errors that occur immediately after the
     first time a provider is attached and before it is initialized in
     this way.

Also, are you sure this worked BEFORE the changes? Because those changes
shouldn't have caused this failure...

> openssl rand -hex -out /tmp/key 32
> geli init -a HMAC/SHA256 -e aes-cbc -l 128 -P -s 4096 -K /tmp/key ${D}s1a
> geli attach -p -k /tmp/key ${D}s1a

I don't see a write here...

> newfs -O2 -U ${D}s1a.eli
> newfs: can't read old UFS1 superblock: read error from block device:
> Invalid argument
>
> However using UFS1, newfs succeeds but the mount fails.
> newfs -O1 -U ${D}s1a.eli
> super-block backups (for fsck_ffs -b #) at:
> 64, 4672, 9280, 13888
> # mount -v /dev/md0s1a.eli /mnt/A
> mount: /dev/md0s1a.eli: Invalid argument

This is likely trying to read a UFS v2 super block, failing, and not
trying other locations...

-- 
John-Mark Gurney				Voice: +1 415 225 5579
"All that I will do, has been done, All that I have, has not."
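
[Editor's note: the suggested remedy can be sketched as the following command
sequence. This is a minimal sketch only, assuming the md0-backed layout from
the test case above; it has not been run against the reported system, and the
mount point /mnt/A is taken from the test case.]

```shell
# Initialize and attach the authenticated provider, as in the test case.
geli init -a HMAC/SHA256 -e aes-cbc -l 128 -P -s 4096 -K /tmp/key ${D}s1a
geli attach -p -k /tmp/key ${D}s1a

# Per geli(8), write the whole provider once before first use so that every
# sector and its corresponding checksum reach a consistent state.  Until
# then, reads of never-written sectors fail authentication, which is what
# newfs's probe reads trip over.  dd exits with "end of device" when the
# provider is full; that is expected here.
dd if=/dev/zero of=${D}s1a.eli bs=1m

# With every sector now authenticating, newfs and mount should proceed.
newfs -O2 -U ${D}s1a.eli
mount /dev/md0s1a.eli /mnt/A
```

For a large provider one could zero only the first megabyte or two
(dd ... count=2) to satisfy newfs's initial reads, but writing the whole
device is what the man page recommends for a consistent state.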