James Dickens
2006-Mar-30 21:42 UTC
[zfs-discuss] zfs experiment, devices show up with old name after they have been moved
ZFS experiment

My fileserver is a u2 with a 711 disk box attached. Up until now it was plugged into the onboard controller c0; today I added a SunSwift card (fast/wide SCSI + 100Mbit NIC). To test whether ZFS was as smart as SVM, I powered down the box, installed the new SCSI controller, moved the cable to c1 (the new SCSI port), and powered it up. All the filesystems, ZFS included, came up with no changes needed. It did fine, with one small annoyance: the drives are still reported by zpool status as being attached to c0.

-bash-3.00$ uname -av
SunOS enterprise 5.11 snv_27 sun4u sparc SUNW,Ultra-2
-bash-3.00$
-bash-3.00$ /usr/sbin/zpool status -v
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        data         ONLINE       0     0     0
          raidz      ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
          raidz      ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
-bash-3.00$

When they are actually connected to controller 1, as shown below:

SUNW,fas, instance #0
    sd (driver not attached)
    st (driver not attached)
    sd, instance #1
    sd, instance #6
    st, instance #0 (driver not attached)
    st, instance #1 (driver not attached)
    st, instance #2 (driver not attached)
    st, instance #3 (driver not attached)
    st, instance #4 (driver not attached)
    st, instance #5 (driver not attached)
    st, instance #6 (driver not attached)
SUNW,fas, instance #1
    sd (driver not attached)
    st (driver not attached)
    sd, instance #17
    sd, instance #18
    sd, instance #19
    sd, instance #22
    sd, instance #23
    sd, instance #24
    sd, instance #25
    sd, instance #26
    sd, instance #27
    sd, instance #28
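For what it's worth, which controller a disk really hangs off can also be cross-checked through the /dev/dsk symlinks (assuming the standard links into /devices):

# The link target under /devices names the controller node the
# disk is actually attached to, regardless of what the pool's
# stored config claims:
-bash-3.00$ ls -l /dev/dsk/c1t11d0s0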
Tao Chen
2006-Mar-30 22:25 UTC
[zfs-discuss] zfs experiment, devices show up with old name after they have been moved
On 3/30/06, James Dickens <jamesd.wi at gmail.com> wrote:

> To test whether ZFS was as smart as SVM, I powered down the box,
> installed the new SCSI controller, moved the cable to c1 (the new
> SCSI port), and powered it up. All the filesystems, ZFS included,
> came up with no changes needed. It did fine, with one small
> annoyance: the drives are still reported by zpool status as being
> attached to c0.

Eric responded to the same problem earlier in this thread:
http://www.opensolaris.org/jive/thread.jspa?messageID=16836

"Expect this to be fixed in the near future."

I don't know if it's already fixed.

Tao
Eric Schrock
2006-Mar-30 22:45 UTC
[zfs-discuss] zfs experiment, devices show up with old name after they have been moved
Yes, this was fixed a while ago - certainly after build 27. If you look at zpool_vdev_name():

http://cvs.opensolaris.org/source/xref/on/usr/src/lib/libzfs/common/libzfs_pool.c#1397

You'll see that whenever you request a disk name (such as via 'zpool status'), we check to see if it has the same devid. If not, we do a reverse devid -> path mapping and update the on-disk data as well.

Note that there is one oddity: if you reconfigure devices but run 'zpool status' as a non-root user, then it cannot do the reverse mapping. Once you run it once as root, you'll be fine. I've debated putting a "zpool status > /dev/null" as part of the startup scripts to at least catch the cases when reconfiguration is done with the power off.

- Eric

On Thu, Mar 30, 2006 at 03:42:25PM -0600, James Dickens wrote:
> To test whether ZFS was as smart as SVM, I powered down the box,
> installed the new SCSI controller, moved the cable to c1 (the new
> SCSI port), and powered it up. All the filesystems, ZFS included,
> came up with no changes needed. It did fine, with one small
> annoyance: the drives are still reported by zpool status as being
> attached to c0.

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
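To illustrate the behavior Eric describes, a session on bits new enough to carry the fix might look like the following (hypothetical output, abridged to a single disk from James's pool):

# As an unprivileged user, zpool status cannot rewrite the vdev
# labels, so the stale c0 path is still reported:
-bash-3.00$ /usr/sbin/zpool status data | grep t11
            c0t11d0  ONLINE       0     0     0

# Run it once as root: libzfs sees that the device at the stored
# path no longer has the stored devid, does the reverse
# devid -> path lookup, and updates the on-disk data:
enterprise# /usr/sbin/zpool status data > /dev/null

# From then on, every user sees the new c1 path:
-bash-3.00$ /usr/sbin/zpool status data | grep t11
            c1t11d0  ONLINE       0     0     0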
Thomas Maier-Komor
2006-Apr-02 13:50 UTC
[zfs-discuss] Re: zfs experiment, devices show up with old name after they have been moved
Eric Schrock wrote:
> Note that there is one oddity: if you reconfigure devices but run
> 'zpool status' as a non-root user, then it cannot do the reverse
> mapping. Once you run it once as root, you'll be fine. I've debated
> putting a "zpool status > /dev/null" as part of the startup scripts
> to at least catch the cases when reconfiguration is done with the
> power off.
>
> - Eric

I'd even vote for an unconditional "zpool status" without redirection to /dev/null, at least on a "boot -v". In my opinion this is useful information, especially as devices generally have a tendency to die during powerup; one sees immediately whether everything is healthy when powering up a machine that was offline.

A "zpool list" might also be useful, but that is rather a matter of taste.

Tom
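A minimal sketch of the kind of startup hook being proposed (the script name and rc location are made up for illustration, not an actual Solaris deliverable):

#!/sbin/sh
# /etc/rc2.d/S99zpoolstatus (hypothetical): run zpool status once at
# boot, unconditionally and without redirecting the output, so that
# any devid -> path remapping happens with root privileges and pool
# health is visible right after power-on, when devices are most
# likely to have died.
if [ -x /usr/sbin/zpool ]; then
        /usr/sbin/zpool status
fi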