Torrey McMahon
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
Derek E. Lewis wrote:
> Greetings,
>
> I'm trying to move some of my mirrored pooldevs to another controller.
> I have a StorEdge A5200 (Photon) with two physical paths to it, and
> originally, when I created the storage pool, I threw all of the drives
> on c1. Several days later, having realized this, I'm trying to change
> the mirrored pooldevs to c2 (c1t53d0 -> c2t53d0). At first, 'zpool
> replace' seemed ideal; however, it warned that c2t53d0 was already an
> existing pooldev for the pool. I then tried detaching the c1 device
> and re-attaching the c2 device; however, this caused a complete
> resilver, which is very expensive. This is a Solaris 10 11/06 system
> -- any chance zpool attach/detach has become more intelligent in
> Solaris Express? Perhaps 'zpool replace' was the right way to go
> about it?

One, use mpxio from now on.

Two, I thought you could export the pool, move the LUNs to the new controller, and import the pool?
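Something like the following, untested, and assuming the pool is named 'export' as it is elsewhere in this thread:

  # zpool export export
  (move/recable the LUNs to the new controller)
  # zpool import export
  # zpool status export

On import, ZFS scans the available devices and picks the pool members back up under whatever controller paths they now appear on, so no resilver should be needed.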
George Wilson
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
Derek,

I don't think 'zpool attach/detach' is what you want, as it will always result in a complete resilver. Your best bet is to export and re-import the pool after moving devices. You might also try to 'zpool offline' the device, move it, and then 'zpool online' it. This should force a reopen of the device, and then it would only have to resilver the transactions that occurred while the device was offline. I have not tried the latter, but it should work.

Thanks,

George

Derek E. Lewis wrote:
> Greetings,
>
> I'm trying to move some of my mirrored pooldevs to another controller.
> I have a StorEdge A5200 (Photon) with two physical paths to it, and
> originally, when I created the storage pool, I threw all of the drives
> on c1. Several days later, having realized this, I'm trying to change
> the mirrored pooldevs to c2 (c1t53d0 -> c2t53d0). At first, 'zpool
> replace' seemed ideal; however, it warned that c2t53d0 was already an
> existing pooldev for the pool. I then tried detaching the c1 device
> and re-attaching the c2 device; however, this caused a complete
> resilver, which is very expensive. This is a Solaris 10 11/06 system
> -- any chance zpool attach/detach has become more intelligent in
> Solaris Express? Perhaps 'zpool replace' was the right way to go
> about it?
>
> Thanks,
>
> Derek E. Lewis
> delewis@acm.org
> http://delewis.blogspot.com
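Spelled out, the offline/online variant would run roughly as follows (untested; 'export' is the pool name that appears later in the thread, and the device name is from the original post):

  # zpool offline export c1t53d0
  (move/recable the device)
  # zpool online export c1t53d0
  # zpool status export

'zpool online' forces the reopen, and 'zpool status' should then show only a short resilver covering the transactions written while the device was offline.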
George Wilson
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
Derek,

Have you tried doing a 'zpool replace poolname c1t53d0 c2t53d0'? I'm not sure if this will work, but it's worth a shot. You may still end up with a complete resilver.

Thanks,

George

Derek E. Lewis wrote:
> On Thu, 28 Dec 2006, George Wilson wrote:
>
>> Your best bet is to export and re-import the pool after moving
>> devices. You might also try to 'zpool offline' the device, move it,
>> and then 'zpool online' it. This should force a reopen of the device,
>> and then it would only have to resilver the transactions that
>> occurred while the device was offline. I have not tried the latter,
>> but it should work.
>
> George,
>
> I haven't moved any devices around. I have two physical paths to the
> JBOD, which allows the system to see all the disks on two different
> controllers (c1t53d0 and c2t53d0 are already there). 'zpool
> online/offline' and 'zpool import/export' aren't going to help at all
> unless I physically swap the fibre paths. This won't work because I
> have other pools on the JBOD.
>
> If this were a production system, exporting the entire pool just to
> change the controller the mirrored pooldevs are using would not be
> ideal. If ZFS cannot do this without (1) exporting the pool and
> importing it or (2) doing a complete resilver of the disk(s), this
> sounds like a valid RFE for a more intelligent 'zpool replace' or
> 'zpool attach/detach'.
>
> Thanks,
>
> Derek E. Lewis
> delewis@acm.org
> http://delewis.blogspot.com
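With the pool name filled in (it is 'export' later in the thread), that would be:

  # zpool replace export c1t53d0 c2t53d0
  # zpool status export

'zpool status' will show whether the replace kicks off a full resilver of c2t53d0 or just a cheap reopen.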
Derek E. Lewis
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
On Wed, 27 Dec 2006, Torrey McMahon wrote:
> One, use mpxio from now on.

'socal' HBAs are not supported under MPxIO, which is what I have in the attached host (an E4500).

> Two, I thought you could export the pool, move the LUNs to the new
> controller, and import the pool?

Like I said, I have two physical paths, so the drives are already visible to the system. It's also a mirrored pool, so every pooldev also has an equivalent mirror pooldev. I shouldn't need to export the entire pool just to change the controller the mirrored pooldevs are using.

Thanks,

Derek E. Lewis
delewis@acm.org
http://delewis.blogspot.com
Derek E. Lewis
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
On Thu, 28 Dec 2006, George Wilson wrote:
> Have you tried doing a 'zpool replace poolname c1t53d0 c2t53d0'? I'm
> not sure if this will work, but it's worth a shot. You may still end
> up with a complete resilver.

George,

Just tried it with a '-f' and I received the following error:

  # zpool replace -f export c2t53d0 c1t53d0
  invalid vdev specification
  the following errors must be manually repaired:
  /dev/dsk/c2t53d0s0 is part of active ZFS pool export. Please see zpool(1M).

(I had already done a 'zpool detach/attach' on that disk and gone through with the resilvering, hence the replacement of c2t53d0 with c1t53d0.)

Thanks,

Derek E. Lewis
delewis@acm.org
http://delewis.blogspot.com
Derek E. Lewis
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
On Thu, 28 Dec 2006, George Wilson wrote:
> Your best bet is to export and re-import the pool after moving
> devices. You might also try to 'zpool offline' the device, move it,
> and then 'zpool online' it. This should force a reopen of the device,
> and then it would only have to resilver the transactions that
> occurred while the device was offline. I have not tried the latter,
> but it should work.

George,

I haven't moved any devices around. I have two physical paths to the JBOD, which allows the system to see all the disks on two different controllers (c1t53d0 and c2t53d0 are already there). 'zpool online/offline' and 'zpool import/export' aren't going to help at all unless I physically swap the fibre paths. This won't work because I have other pools on the JBOD.

If this were a production system, exporting the entire pool just to change the controller the mirrored pooldevs are using would not be ideal. If ZFS cannot do this without (1) exporting the pool and importing it or (2) doing a complete resilver of the disk(s), this sounds like a valid RFE for a more intelligent 'zpool replace' or 'zpool attach/detach'.

Thanks,

Derek E. Lewis
delewis@acm.org
http://delewis.blogspot.com
Richard Elling
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
I think ZFS might be too smart here. The feature we like is that ZFS will find the devices no matter what their path is; this is very much a highly desired feature. If there are multiple paths to the same LUN, it expects an intermediary to handle that: MPxIO, PowerPath, etc.

Unfortunately, the FC disks and hubs in the A5000 are quite limited in their ability to do clever things, as is the socal interface. It is no wonder they were EOLed long ago. As things stand, getting ZFS to rediscover the A5000+socal disks when a loop is dead may be a manual process, or automatic on reboot (qv. discussions here on [un]desired panics).
 -- richard

Derek E. Lewis wrote:
> On Thu, 28 Dec 2006, George Wilson wrote:
>
>> Your best bet is to export and re-import the pool after moving
>> devices. You might also try to 'zpool offline' the device, move it,
>> and then 'zpool online' it. This should force a reopen of the device,
>> and then it would only have to resilver the transactions that
>> occurred while the device was offline. I have not tried the latter,
>> but it should work.
>
> George,
>
> I haven't moved any devices around. I have two physical paths to the
> JBOD, which allows the system to see all the disks on two different
> controllers (c1t53d0 and c2t53d0 are already there). 'zpool
> online/offline' and 'zpool import/export' aren't going to help at all
> unless I physically swap the fibre paths. This won't work because I
> have other pools on the JBOD.
>
> If this were a production system, exporting the entire pool just to
> change the controller the mirrored pooldevs are using would not be
> ideal. If ZFS cannot do this without (1) exporting the pool and
> importing it or (2) doing a complete resilver of the disk(s), this
> sounds like a valid RFE for a more intelligent 'zpool replace' or
> 'zpool attach/detach'.
>
> Thanks,
>
> Derek E. Lewis
> delewis@acm.org
> http://delewis.blogspot.com
Derek E. Lewis
2007-Jan-10 16:26 UTC
[zfs-discuss] using zpool attach/detach to migrate drives from one controller to another
Greetings,

I'm trying to move some of my mirrored pooldevs to another controller. I have a StorEdge A5200 (Photon) with two physical paths to it, and originally, when I created the storage pool, I threw all of the drives on c1. Several days later, having realized this, I'm trying to change the mirrored pooldevs to c2 (c1t53d0 -> c2t53d0). At first, 'zpool replace' seemed ideal; however, it warned that c2t53d0 was already an existing pooldev for the pool. I then tried detaching the c1 device and re-attaching the c2 device; however, this caused a complete resilver, which is very expensive. This is a Solaris 10 11/06 system -- any chance zpool attach/detach has become more intelligent in Solaris Express? Perhaps 'zpool replace' was the right way to go about it?

Thanks,

Derek E. Lewis
delewis@acm.org
http://delewis.blogspot.com
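For concreteness, the detach/attach sequence described above amounts to something like this ('export' is the pool name from the replace attempt elsewhere in the thread; c1t54d0 stands in, hypothetically, for the surviving side of the mirror):

  # zpool detach export c1t53d0
  # zpool attach export c1t54d0 c2t53d0

It is the attach step that kicks off the full resilver of c2t53d0.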