John de Sousa
2008-Dec-12 15:58 UTC
[zfs-discuss] HELP! - Re-use a disk with UFS that previously was part of a ZFS pool
Hello,
I built one of my servers (a V120) with a ZFS root to see how it works. Now, when I try to rebuild it and re-use the two internal disks with slices/SDS and UFS, I can still see references to the old zpool, rpool. When I try to destroy the old pool I get core dumps. Does anyone have any idea how I can resolve this?
Many thanks,
John
ok boot rootmirror
Resetting ...
LOM event: +6d+5h2m18s host reset
ChassisSerialNumber UNDEFINED
Sun Fire V120 (UltraSPARC-IIe 648MHz), No Keyboard
OpenBoot 4.0, 3072 MB memory installed, Serial #55470796.
Ethernet address 0:3:ba:4e:6a:cc, Host ID: 834e6acc.
Executing last command: boot rootmirror
Boot device: /pci@1f,0/pci@1/scsi@8/disk@1,0  File and args:
SunOS Release 5.10 Version Generic_137137-09 64-bit
Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: ted
SUNW,eri0 : 100 Mbps full duplex link up
/dev/md/rdsk/d40 is clean
/dev/md/rdsk/d50 is clean
Reading ZFS config: done.
ted console login: root
Password:
Nov 9 16:56:49 ted login: ROOT LOGIN /dev/console
Last login: Sun Nov 9 16:14:35 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH   ALTROOT
rpool      -      -      -     -  FAULTED  -
# zpool status rpool
pool: rpool
state: UNAVAIL
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  corrupted data
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10        9.8G   4.3G   5.5G    44%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.2G   1.5M   4.2G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.2G    32K   4.2G     1%    /tmp
swap                   4.2G    48K   4.2G     1%    /var/run
/dev/md/dsk/d50        480M   1.0M   431M     1%    /globaldevices
/dev/md/dsk/d40         11G    11M    11G     1%    /export/home
# metastat -c
d50 m 512MB d51
d51 s 512MB c1t1d0s5
d40 m 11GB d41
d41 s 11GB c1t1d0s4
d30 m 10GB d31
d31 s 10GB c1t1d0s3
d20 m 2.0GB d21
d21 s 2.0GB c1t1d0s1
d10 m 10GB d11
d11 s 10GB c1t1d0s0
# zpool export rpool
internal error: Invalid argument
Abort - core dumped
# zfs destroy -f rpool
internal error: Invalid argument
Abort - core dumped
#
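For reference, zfs destroy operates on datasets rather than pools, so the pool-level form of that last command would have been the following (it may still abort the same way while the pool is FAULTED):

# zpool destroy -f rpool

The leftover labels that zpool status is picking up can be dumped read-only with zdb, using the slice names from the transcript above:

# zdb -l /dev/rdsk/c1t0d0s0
# zdb -l /dev/rdsk/c1t1d0s0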
--
This message posted from opensolaris.org
Richard Elling
2008-Dec-12 16:10 UTC
[zfs-discuss] HELP! - Re-use a disk with UFS that previously was part of a ZFS pool
John de Sousa wrote:
> Hello,
>
> I built one of my servers (a V120) with a ZFS root to see how it works. Now, when I try to rebuild it and re-use the two internal disks with slices/SDS and UFS, I can still see references to the old zpool, rpool. When I try to destroy the old pool I get core dumps. Does anyone have any idea how I can resolve this?

It depends on whether you think there is a problem. ZFS writes its metadata to different areas of the disk than UFS, so simply creating one file system over the other is not likely to overwrite the identifying metadata. If you really want a clean slate, then you need to clean the slate, so to speak.

Please file bugs against zpool and zfs for dumping core, and attach the cores to the bugs.
 -- richard
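To put a sketch behind the "clean slate" advice: ZFS keeps four 256 KB labels per device, two at the front and two at the back of the slice, so a UFS newfs over the same slice does not reliably clear all of them. In this transcript the slices are already back in use under SVM/UFS, so zeroing the label areas is no longer an option; the FAULTED rpool entry is most likely being carried forward in /etc/zfs/zpool.cache. One possible cleanup, assuming rpool is the only pool on the box (treat this as a hedged suggestion rather than a documented procedure):

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
# init 6

Moving the cache aside rather than deleting it keeps a copy in case anything else was listed in it; after the reboot the boot-time ZFS config read should no longer find the dead pool. If the slices had still been free, zeroing the first and last 512 KB of each (or zpool labelclear, on releases that have it) would have been the more direct way to wipe the labels before reuse.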