Gary Palmer
2017-Oct-20 16:51 UTC
System fails to boot with zfs root on nvme mirror when not using full disks
On Fri, Oct 20, 2017 at 05:13:45PM +0200, Markus Wild wrote:
> Hello list,
>
> I have a particularly odd problem, where I can't figure out what's going on, or whether
> I'm possibly doing something stupid...
>
> Short summary:
> - Supermicro X10DRI-T motherboard, using UEFI boot, 128G RAM, 2 CPUs
> - 2 Intel DC P3700 NVMe cards, to be used as mirrored zfs root and mirrored log devices
>   for a data pool
> - FreeBSD-11.1 release installs fine off memstick, and the built system boots correctly,
>   but this setup uses the entire disks (so I need to shrink the system to make room for
>   log partitions)
> - so I then rebooted into a USB live system and did a gpart backup of the nvme drives
>   (see later), zfs snapshot -r, zfs send -R > backup, gpart delete last index and
>   recreate with a shorter size on both drives, recreated the zroot pool with the correct
>   ashift, restored with zfs receive from the backup, set bootfs, rebooted
> - the rebooted system's bootloader finds the zroot pool correctly and proceeds to load
>   the kernel. However, when it's supposed to mount the root filesystem, I get:
>       Trying to mount root from zfs:zroot/ROOT/default []...
>       Mounting from zfs:zroot/ROOT/default failed with error 6.
> - when I list the available boot devices, all partitions of the nvme disks are listed

My suspicion is that you didn't make a backup of the new /boot/zfs/zpool.cache.

From the USB live booted system, when you recreated the zpool you likely changed
parameters that live in the cache. You need to copy that file to the new boot pool, or
ZFS can get angry.

Just a suspicion.

Regards,
Gary
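[For reference: the "error 6" in the mount failure above is a BSD errno value, ENXIO ("Device not configured" on FreeBSD), meaning the kernel could not open the root device or dataset it was asked to mount. A quick, portable way to confirm the number-to-name mapping (a sketch using Python's errno table rather than the FreeBSD headers):

```shell
# Look up errno 6 by symbolic name; prints ENXIO on FreeBSD and Linux alike.
# (os.strerror wording differs per OS: FreeBSD says "Device not configured".)
python3 -c 'import errno, os; print(errno.errorcode[6], "-", os.strerror(6))'
```

On a FreeBSD box the same check is `grep -w 6 /usr/include/sys/errno.h`.]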
Markus Wild
2017-Oct-22 13:03 UTC
System fails to boot with zfs root on nvme mirror when not using full disks
Hello Gary, thanks for your input!

> My suspicion is that you didn't make a backup of the new
> /boot/zfs/zpool.cache
>
> From the USB live booted system, when you recreated the zpool you
> likely changed parameters that live in the cache. You need to copy
> that to the new boot pool or ZFS can get angry

It's a possibility... new attempt from the USB live system:

gpart delete -i 3 nvd0
gpart delete -i 3 nvd1
gpart add -a 4k -s 50G -t freebsd-zfs -l zfs0 nvd0
gpart add -a 4k -s 50G -t freebsd-zfs -l zfs1 nvd1
sysctl vfs.zfs.min_auto_ashift=12
zpool create -m none -o cachefile=/var/tmp/zpool.cache zroot mirror /dev/gpt/zfs0 /dev/gpt/zfs1
zfs receive -Fud zroot < zroot.zfs
zpool set bootfs=zroot/ROOT/default zroot
mount -t zfs zroot/ROOT/default /mnt
cp /var/tmp/zpool.cache /mnt/boot/zfs/zpool.cache
umount /mnt
reboot

Unfortunately, this didn't change anything. I'm still getting the

    Trying to mount root from zfs:zroot/ROOT/default []...
    Mounting from zfs:zroot/ROOT/default failed with error 6.

error :(

Any other ideas....?

Cheers,
Markus
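[A diagnostic sketch that could narrow this down from the USB live system; the pool and dataset names are from the thread, everything else is an assumption, not a confirmed fix:

```shell
# Does the pool show up at all, and with the expected new vdev GUIDs?
zpool import

# Import without mounting, pointing at the same cachefile used above,
# then verify the boot-relevant properties the loader/kernel rely on.
zpool import -N -o cachefile=/var/tmp/zpool.cache zroot
zpool get bootfs zroot                              # expect zroot/ROOT/default
zfs get canmount,mountpoint zroot/ROOT/default      # expect on / legacy-or-/

# Inspect the cachefile that was actually copied to /boot/zfs/ --
# its pool GUID must match the freshly created pool, not the old one.
zdb -C -U /var/tmp/zpool.cache

zpool export zroot
```

If the GUIDs in the copied cachefile still refer to the pre-shrink pool, the kernel would fail the root mount with exactly this kind of error even though the loader can read the pool.]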