Rainer Orth
2010-Aug-27 12:27 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
For quite some time I've been bitten by the fact that on my laptop
(currently running self-built snv_147) zpool status rpool and format
disagree about the device name of the root disk:

ro@masaya 14 > zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s3  ONLINE       0     0     0

errors: No known data errors

root@masaya 3 # format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c3t0d134583970 <drive type unknown>
          /pci@0,0/pci8086,2448@1e/pci17aa,20c8@0,2/blkdev@0
       1. c11t0d0 <ATA -ST9160821AS -C cyl 19454 alt 2 hd 255 sec 63>
          /pci@0,0/pci17aa,20a7@1f,2/disk@0,0
Specify disk (enter its number):

zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believes it's c11t0d0(s3) instead.

This has the unfortunate consequence that beadm activate <newbe> fails
in a quite non-obvious way.

Running it under truss, I find that it invokes installgrub, which fails.
The manual equivalent is

root@masaya 266 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c1t0d0s3
cannot read MBR on /dev/rdsk/c1t0d0p0
open: No such file or directory
root@masaya 267 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c11t0d0s3
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)

For the time being, I'm working around this by replacing installgrub
with a script, but obviously this shouldn't happen and the problem isn't
easy to find.

I thought I'd seen a zfs CR for this, but cannot find it right now,
especially with search on bugs.opensolaris.org being only partially
functional.

Any suggestions?

Thanks.
        Rainer

--
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University
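The workaround Rainer describes, substituting a script for installgrub,
could look roughly like the sketch below; the installgrub.real name and
the hard-coded device are assumptions for illustration, not necessarily
what he used:

    #!/bin/sh
    # Hypothetical stand-in for /sbin/installgrub: beadm passes the stale
    # device name derived from zpool status, so ignore its third argument
    # and use the device the kernel actually knows about. Assumes the
    # original binary was moved aside to /sbin/installgrub.real and that
    # c11t0d0s3 is the real root slice, as in the output above.
    stage1=$1
    stage2=$2
    # $3 would be /dev/rdsk/c1t0d0s3, which no longer exists
    exec /sbin/installgrub.real "$stage1" "$stage2" /dev/rdsk/c11t0d0s3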
LaoTsao 老曹
2010-Aug-27 12:36 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
hi

Maybe boot a livecd, then export and import the zpool?

regards

On 8/27/2010 8:27 AM, Rainer Orth wrote:
> zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
> correctly believes it's c11t0d0(s3) instead.
> [...]
> Any suggestions?
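Spelled out, that suggestion would be roughly the following from a
live-CD shell; treat it as a sketch only, since exporting and
re-importing a root pool has caveats of its own (see Cindy's note later
in the thread):

    # From a live-CD shell, as a sketch only; -f forces the import since
    # the pool was last in use by the installed system, and -R /a keeps
    # its filesystems from mounting over the live environment.
    zpool import -f -R /a rpool
    zpool status rpool      # the vdev path should be rewritten on import
    zpool export rpool      # leave the pool clean before rebooting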
Mark J Musante
2010-Aug-27 12:44 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
On Fri, 27 Aug 2010, Rainer Orth wrote:

> zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
> correctly believes it's c11t0d0(s3) instead.
>
> Any suggestions?

Try removing the symlinks or using 'devfsadm -C' as suggested here:

https://defect.opensolaris.org/bz/show_bug.cgi?id=14999
Rainer Orth
2010-Aug-27 12:45 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
LaoTsao 老曹 <laotsao@gmail.com> writes:

> Maybe boot a livecd, then export and import the zpool?

I've already tried all sorts of contortions to regenerate
/etc/path_to_inst to no avail. This is simply a case of `should not
happen'.

        Rainer

--
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University
Rainer Orth
2010-Aug-27 13:18 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
Mark J Musante <Mark.Musante@oracle.com> writes:

> On Fri, 27 Aug 2010, Rainer Orth wrote:
>> zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
>> correctly believes it's c11t0d0(s3) instead.
>>
>> Any suggestions?
>
> Try removing the symlinks or using 'devfsadm -C' as suggested here:
>
> https://defect.opensolaris.org/bz/show_bug.cgi?id=14999

devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk
and running devfsadm -Cv did help.

Thanks a lot.
        Rainer

--
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University
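For the archive, the fix as described amounts to something like this; it
is a reconstruction, and as the next message points out, hand-editing
/dev is not a supported procedure:

    # Reconstruction of the fix described above; run as root, at your
    # own risk. The rm drops the stale c1t0d0* symlinks along with the
    # rest; devfsadm -Cv then rebuilds /dev/dsk and /dev/rdsk from the
    # kernel's device tree, printing each link it removes or creates.
    rm /dev/dsk/* /dev/rdsk/*
    devfsadm -Cv
    zpool status rpool   # should now agree with format and show c11t0d0s3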
Sean Sprague
2010-Aug-27 14:12 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
Rainer,

> devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk
> and running devfsadm -Cv did help.

I am glad it helped; but removing anything from /dev/*dsk is a kludge
that cannot be accepted/condoned/supported.

Regards... Sean.
Rainer Orth
2010-Aug-27 14:43 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
Sean,

> I am glad it helped; but removing anything from /dev/*dsk is a kludge
> that cannot be accepted/condoned/supported.

no doubt about this: two parts of the kernel (zfs vs. devfs?) disagreeing
about devices mustn't happen.

        Rainer

--
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University
Cindy Swearingen
2010-Aug-27 15:27 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
Hi Rainer,

I'm no device expert but we see this problem when firmware updates or
other device/controller changes change the device ID associated with
the devices in the pool.

In general, ZFS can handle controller/device changes if the driver
generates or fabricates device IDs. You can view device IDs with this
command:

# zdb -l /dev/dsk/cvtxdysz

If you are unsure what impact device changes will have on your pool,
then export the pool first. If you see that the device ID has changed
while the pool is exported (use prtconf -v to view device IDs while the
pool is exported) with the hardware change, then the resulting pool
behavior is unknown.

Importing the root pool is more complex but would probably prevent this
from happening again.

Thanks,

Cindy

On 08/27/10 08:43, Rainer Orth wrote:
> no doubt about this: two parts of the kernel (zfs vs. devfs?) disagreeing
> about devices mustn't happen.
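As a sketch of the checks Cindy describes, with placeholder pool and
device names (cvtxdysz above is itself a placeholder for a concrete
cXtYdZsN device):

    # Sketch of the device-ID comparison; tank and c11t0d0s3 are
    # placeholders. The devid stored in the vdev label should match
    # the one the kernel currently reports for the disk.
    zdb -l /dev/dsk/c11t0d0s3 | grep devid   # devid recorded in the vdev label
    zpool export tank          # safe for a data pool; rpool needs a live CD
    prtconf -v | grep -i devid # devids as the kernel sees them now
    zpool import tank          # device paths are refreshed on import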
Rainer Orth
2010-Aug-27 16:13 UTC
[zfs-discuss] zpool status and format/kernel disagree about root disk
Hi Cindy,

I'll investigate more next week since I'm in a hurry to leave, but one
point now:

> I'm no device expert but we see this problem when firmware updates or
> other device/controller changes change the device ID associated with
> the devices in the pool.

This is the internal disk in a laptop, so no device or controller change
should happen here and cause a rename from c1t0d0 to c11t0d0.

        Rainer

--
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University
Hi,

Is it possible to do "zfs get -??? quota filesystem" ?

Thanks.

Fred
I get the answer: -p.

> -----Original Message-----
> From: Fred Liu
> Sent: Sat, Aug 28, 2010 9:00
> To: zfs-discuss@opensolaris.org
> Subject: get quota showed in precision of byte?
>
> Hi,
>
> Is it possible to do "zfs get -??? quota filesystem" ?
>
> Thanks.
>
> Fred
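For the archive: -p makes zfs get print exact, parsable numeric values
rather than human-readable ones, so the quota comes back in bytes. A
quick example, with a hypothetical dataset:

    # -p prints properties in exact, parsable form; tank/home is a
    # hypothetical dataset with a 10G quota.
    zfs get -p quota tank/home
    # NAME       PROPERTY  VALUE        SOURCE
    # tank/home  quota     10737418240  local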