Nikos Vassiliadis
2017-May-15 18:11 UTC
zpool imported twice with different names (was Re: Fwd: ZFS)
Fix the e-mail subject

On 05/15/2017 08:09 PM, Nikos Vassiliadis wrote:
> Hi everybody,
>
> While trying to rename a zpool from zroot to vega,
> I ended up in this strange situation:
> nik@vega:~ % zfs list -t all
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> vega                1.83G  34.7G    96K  /zroot
> vega/ROOT           1.24G  34.7G    96K  none
> vega/ROOT/default   1.24G  34.7G  1.24G  /
> vega/tmp             120K  34.7G   120K  /tmp
> vega/usr             608M  34.7G    96K  /usr
> vega/usr/home        136K  34.7G   136K  /usr/home
> vega/usr/ports        96K  34.7G    96K  /usr/ports
> vega/usr/src         607M  34.7G   607M  /usr/src
> vega/var             720K  34.7G    96K  /var
> vega/var/audit        96K  34.7G    96K  /var/audit
> vega/var/crash        96K  34.7G    96K  /var/crash
> vega/var/log         236K  34.7G   236K  /var/log
> vega/var/mail        100K  34.7G   100K  /var/mail
> vega/var/tmp          96K  34.7G    96K  /var/tmp
> zroot               1.83G  34.7G    96K  /zroot
> zroot/ROOT          1.24G  34.7G    96K  none
> zroot/ROOT/default  1.24G  34.7G  1.24G  /
> zroot/tmp            120K  34.7G   120K  /tmp
> zroot/usr            608M  34.7G    96K  /usr
> zroot/usr/home       136K  34.7G   136K  /usr/home
> zroot/usr/ports       96K  34.7G    96K  /usr/ports
> zroot/usr/src        607M  34.7G   607M  /usr/src
> zroot/var            724K  34.7G    96K  /var
> zroot/var/audit       96K  34.7G    96K  /var/audit
> zroot/var/crash       96K  34.7G    96K  /var/crash
> zroot/var/log        240K  34.7G   240K  /var/log
> zroot/var/mail       100K  34.7G   100K  /var/mail
> zroot/var/tmp         96K  34.7G    96K  /var/tmp
> nik@vega:~ % zpool status
>   pool: vega
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         vega        ONLINE       0     0     0
>           vtbd0p3   ONLINE       0     0     0
>
> errors: No known data errors
>
>   pool: zroot
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zroot       ONLINE       0     0     0
>           vtbd0p3   ONLINE       0     0     0
>
> errors: No known data errors
> nik@vega:~ %
> -------------------------------------------
>
> It seems like there are two pools sharing the same vdev...
>
> After running a few commands in this state, like doing a scrub,
> the pool was (most probably) destroyed. The system couldn't boot
> anymore and I didn't research further. Is this a known bug?
>
> Steps to reproduce:
> install FreeBSD-11.0 in a pool named zroot
> reboot into a live-CD
> zpool import -f zroot vega
> reboot again
>
> Thanks,
> Nikos
>
> PS:
> Sorry for the cross-posting; I am sharing this with more people
> because it is a rather easy way to destroy a ZFS pool.
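For reference, the reproduction steps boil down to a single live-CD
session. The following is only a sketch of those steps, assuming the
installed pool sits on vtbd0p3 as in the output above:

  # Boot the FreeBSD 11.0 live CD and rename the pool while importing it.
  # -f is needed because the installed system never exported its root pool.
  zpool import -f zroot vega

  # Reboot from disk; after this boot both vega and a stale zroot
  # appear on the same vdev, as shown in the listing above.
  shutdown -r now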
Trond Endrestøl
2017-May-16 06:31 UTC
zpool imported twice with different names (was Re: Fwd: ZFS)
On Mon, 15 May 2017 20:11 +0200, Nikos Vassiliadis wrote:

> Fix the e-mail subject
>
> On 05/15/2017 08:09 PM, Nikos Vassiliadis wrote:
> > Hi everybody,
> >
> > While trying to rename a zpool from zroot to vega,
> > I ended up in this strange situation:
> > nik@vega:~ % zfs list -t all
> > NAME                 USED  AVAIL  REFER  MOUNTPOINT
> > vega                1.83G  34.7G    96K  /zroot
> > vega/ROOT           1.24G  34.7G    96K  none
> > vega/ROOT/default   1.24G  34.7G  1.24G  /
> > vega/tmp             120K  34.7G   120K  /tmp
> > vega/usr             608M  34.7G    96K  /usr
> > vega/usr/home        136K  34.7G   136K  /usr/home
> > vega/usr/ports        96K  34.7G    96K  /usr/ports
> > vega/usr/src         607M  34.7G   607M  /usr/src
> > vega/var             720K  34.7G    96K  /var
> > vega/var/audit        96K  34.7G    96K  /var/audit
> > vega/var/crash        96K  34.7G    96K  /var/crash
> > vega/var/log         236K  34.7G   236K  /var/log
> > vega/var/mail        100K  34.7G   100K  /var/mail
> > vega/var/tmp          96K  34.7G    96K  /var/tmp
> > zroot               1.83G  34.7G    96K  /zroot
> > zroot/ROOT          1.24G  34.7G    96K  none
> > zroot/ROOT/default  1.24G  34.7G  1.24G  /
> > zroot/tmp            120K  34.7G   120K  /tmp
> > zroot/usr            608M  34.7G    96K  /usr
> > zroot/usr/home       136K  34.7G   136K  /usr/home
> > zroot/usr/ports       96K  34.7G    96K  /usr/ports
> > zroot/usr/src        607M  34.7G   607M  /usr/src
> > zroot/var            724K  34.7G    96K  /var
> > zroot/var/audit       96K  34.7G    96K  /var/audit
> > zroot/var/crash       96K  34.7G    96K  /var/crash
> > zroot/var/log        240K  34.7G   240K  /var/log
> > zroot/var/mail       100K  34.7G   100K  /var/mail
> > zroot/var/tmp         96K  34.7G    96K  /var/tmp
> > nik@vega:~ % zpool status
> >   pool: vega
> >  state: ONLINE
> >   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> > config:
> >
> >         NAME        STATE     READ WRITE CKSUM
> >         vega        ONLINE       0     0     0
> >           vtbd0p3   ONLINE       0     0     0
> >
> > errors: No known data errors
> >
> >   pool: zroot
> >  state: ONLINE
> >   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> > config:
> >
> >         NAME        STATE     READ WRITE CKSUM
> >         zroot       ONLINE       0     0     0
> >           vtbd0p3   ONLINE       0     0     0
> >
> > errors: No known data errors
> > nik@vega:~ %
> > -------------------------------------------
> >
> > It seems like there are two pools sharing the same vdev...
> >
> > After running a few commands in this state, like doing a scrub,
> > the pool was (most probably) destroyed. The system couldn't boot
> > anymore and I didn't research further. Is this a known bug?

I guess you had a /boot/zfs/zpool.cache file still referring to the
original zroot pool. The kernel then also found the vega pool on disk
and didn't realise that the two pools are one and the same.

> > Steps to reproduce:
> > install FreeBSD-11.0 in a pool named zroot
> > reboot into a live-CD

Redo the above steps.

> > zpool import -f zroot vega

Do these four commands instead of a regular import:

  mkdir /tmp/vega
  zpool import -N -f -o cachefile=/tmp/zpool.cache vega
  mount -t zfs vega/ROOT/default /tmp/vega
  cp -p /tmp/zpool.cache /tmp/vega/boot/zfs/zpool.cache

> > reboot again

Reboot again.

> > Thanks,
> > Nikos
> >
> > PS:
> > Sorry for the cross-posting; I am sharing this with more people
> > because it is a rather easy way to destroy a ZFS pool.

-- 
+-------------------------------+------------------------------------+
| Vennlig hilsen,               | Best regards,                      |
| Trond Endrestøl,              | Trond Endrestøl,                   |
| IT-ansvarlig,                 | System administrator,              |
| Fagskolen Innlandet,          | Gjøvik Technical College, Norway,  |
| tlf. mob. 952 62 567,         | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.      | Switchboard: +47 61 14 54 00.      |
+-------------------------------+------------------------------------+
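If in doubt, the freshly written cache file can be checked before the
final reboot. This is only a sketch, assuming the paths from the four
commands above; zdb -C prints the pool configuration stored in the
cache file named with -U:

  # The name field should now read 'vega', with vtbd0p3 as the only vdev.
  zdb -C -U /tmp/zpool.cache

  # The same check against the copy that the next boot will actually read:
  zdb -C -U /tmp/vega/boot/zfs/zpool.cache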
Fabian Keil
2017-May-16 15:08 UTC
zpool imported twice with different names (was Re: Fwd: ZFS)
Nikos Vassiliadis <nvass@gmx.com> wrote:

> On 05/15/2017 08:09 PM, Nikos Vassiliadis wrote:
> > Hi everybody,
> >
> > While trying to rename a zpool from zroot to vega,
> > I ended up in this strange situation:
> > nik@vega:~ % zfs list -t all
> > NAME                 USED  AVAIL  REFER  MOUNTPOINT
> > vega                1.83G  34.7G    96K  /zroot
> > vega/ROOT           1.24G  34.7G    96K  none
> > vega/ROOT/default   1.24G  34.7G  1.24G  /
> > vega/tmp             120K  34.7G   120K  /tmp
> > vega/usr             608M  34.7G    96K  /usr
> > vega/usr/home        136K  34.7G   136K  /usr/home
> > vega/usr/ports        96K  34.7G    96K  /usr/ports
> > vega/usr/src         607M  34.7G   607M  /usr/src
> > vega/var             720K  34.7G    96K  /var
> > vega/var/audit        96K  34.7G    96K  /var/audit
> > vega/var/crash        96K  34.7G    96K  /var/crash
> > vega/var/log         236K  34.7G   236K  /var/log
> > vega/var/mail        100K  34.7G   100K  /var/mail
> > vega/var/tmp          96K  34.7G    96K  /var/tmp
> > zroot               1.83G  34.7G    96K  /zroot
> > zroot/ROOT          1.24G  34.7G    96K  none
> > zroot/ROOT/default  1.24G  34.7G  1.24G  /
> > zroot/tmp            120K  34.7G   120K  /tmp
> > zroot/usr            608M  34.7G    96K  /usr
> > zroot/usr/home       136K  34.7G   136K  /usr/home
> > zroot/usr/ports       96K  34.7G    96K  /usr/ports
> > zroot/usr/src        607M  34.7G   607M  /usr/src
> > zroot/var            724K  34.7G    96K  /var
> > zroot/var/audit       96K  34.7G    96K  /var/audit
> > zroot/var/crash       96K  34.7G    96K  /var/crash
> > zroot/var/log        240K  34.7G   240K  /var/log
> > zroot/var/mail       100K  34.7G   100K  /var/mail
> > zroot/var/tmp         96K  34.7G    96K  /var/tmp
> > nik@vega:~ % zpool status
> >   pool: vega
> >  state: ONLINE
> >   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> > config:
> >
> >         NAME        STATE     READ WRITE CKSUM
> >         vega        ONLINE       0     0     0
> >           vtbd0p3   ONLINE       0     0     0
> >
> > errors: No known data errors
> >
> >   pool: zroot
> >  state: ONLINE
> >   scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 01:28:48 2017
> > config:
> >
> >         NAME        STATE     READ WRITE CKSUM
> >         zroot       ONLINE       0     0     0
> >           vtbd0p3   ONLINE       0     0     0
> >
> > errors: No known data errors
> > nik@vega:~ %
> > -------------------------------------------
> >
> > It seems like there are two pools sharing the same vdev...
> >
> > After running a few commands in this state, like doing a scrub,
> > the pool was (most probably) destroyed. The system couldn't boot
> > anymore and I didn't research further. Is this a known bug?
> >
> > Steps to reproduce:
> > install FreeBSD-11.0 in a pool named zroot
> > reboot into a live-CD
> > zpool import -f zroot vega

Why did you use the -f flag?

Unless you can reproduce the problem without it, it's not obvious to me
that this is a bug.

Fabian
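For comparison, the non-forced variant of the rename is simply the same
import without the flag. A sketch only: a root pool is never exported by
the system booted from it, so from the live CD ZFS would normally refuse
this and point at -f, which is probably why the forced import is hard to
avoid in this scenario:

  # Expected to be refused, since zroot was last in use by the installed
  # system and was never exported:
  zpool import zroot vega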