Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).
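(For reference, the sizes I'm comparing are just what the pools themselves report - something along the lines of:
# zfs list
# zpool list
nothing more exotic than that.)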
I thought one of the disks might have been to blame, so I tried swapping it out - it turned out my replacement disk was a dud (zpool wasn't happy about that, and eventually offlined the disk). Oh well, swap the old one back in, no harm done.
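(For what it's worth, the swap attempt itself was nothing fancy - roughly a plain zpool replace, with cXtYdZ below standing in for whatever the replacement disk showed up as, not a real device name:)
# zpool replace magicant c5t1d0 cXtYdZ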
Reboot, and ZFS informs me that I'm missing another, unrelated disk. (c5t1d0 was the one I tried swapping out unsuccessfully; c5t3d0 is the one it complained about, which I had swapped for another disk before any of these problems began and before any data was in the pool, with no trouble - ZFS scrubbed and was happy.)
It continually claimed the device was unavailable, and so the pool stayed in degraded mode. Attempting to replace the disk with itself yielded an error about the disk being in use by the very pool that claimed it was unavailable. Unmounting the pool made no difference: zpool replace kept giving that error, despite repeated rounds of zpool offline magicant c5t3d0 followed by zpool online magicant c5t3d0.
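(Spelled out, the sequence I kept retrying was:)
# zpool replace magicant c5t3d0 c5t3d0
# zpool offline magicant c5t3d0
# zpool online magicant c5t3d0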
I tried exporting and re-importing the pool - the export went fine, but the import threw the confusing error that is the point of this email:
# zpool import
pool: magicant
id: 3232403590553596936
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-6X
config:
        magicant      UNAVAIL  missing device
          raidz1      ONLINE
            c5t0d0    ONLINE
            c5t1d0    ONLINE
            c5t2d0    ONLINE
            c5t3d0    ONLINE
            c5t4d0    ONLINE
            c5t5d0    ONLINE
            c5t6d0    ONLINE
        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
So, to summarize:
7-SCSI-disk raidz1 zpool is created
c5t3d0 is swapped out for another disk of identical size; zfs is happy and functions fine after a scrub
c5t1d0 is swapped out for another disk of identical size (which happened to be a dud); Solaris didn't like that, so I put the original back in and rebooted
On boot, zpool claims c5t3d0 is unavailable, while format and cfgadm both agree that the disk still exists and is dandy. zpool replace pool c5t3d0 c5t3d0 claims it's in use by that pool, and zpool offline pool c5t3d0 followed by zpool online pool c5t3d0 doesn't help. zpool export pool worked, but then zpool import pool threw the above error.
Is this a bug, or am I missing something obvious?
snv 44, x86.
- Rich
Matthew Ahrens
2006-Nov-01 00:48 UTC
[zfs-discuss] ZFS thinks my 7-disk pool has imaginary disks
Rince wrote:
> Hi all,
>
> I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using
> the following command:
>
> # zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0
>
> It worked fine, but I was slightly confused by the size yield (99 GB vs
> the 116 GB I had on my other RAID-Z1 pool of same-sized disks).

This is probably because your old pool was hitting

6288488 du reports misleading size on RAID-Z

Pools created with more recent bits won't hit this.

(note, 99/116 == 6/7: with 7 disks in a raidz1, one disk's worth of space goes to parity, so the 99 GB figure is the usable 6/7 of the raw capacity, while the old pool was reporting a size that still included the parity overhead)

--matt