F. Wessels
2008-Sep-03 09:20 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
Hi, can anybody describe the correct procedure to replace a disk (in a working OK state) with another disk, without degrading my pool? For a mirror I thought of adding the spare, so you'll get a three-device mirror. Let it resilver. Finally remove the disk I want out. But what would be the correct commands? And what if I've got a pool consisting of multiple mirror vdevs? And what about a raid-z or raid-z2 vdev? I can pull a disk and let the hot spare take its place, but that degrades the pool. I want to mirror the two disks and, when done, remove the source disk. This way I'll never have a degraded pool. Or am I asking for a new zpool feature?

Thanks,

Frederik
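PS. Something like this is what I have in mind for the mirror case (pool and disk names hypothetical):

# zpool attach tank c0t0d0 c0t2d0    (gives a three-device mirror)
# zpool status tank                  (wait for the resilver to finish)
# zpool detach tank c0t0d0           (drop the disk I want out)

Is that the right sequence, or is there a one-step command for it?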
Mark J. Musante
2008-Sep-03 09:50 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
On 3 Sep 2008, at 05:20, "F. Wessels" <wessels147 at yahoo.com> wrote:
> Hi,
>
> can anybody describe the correct procedure to replace a disk (in a
> working OK state) with another disk, without degrading my pool?

This command ought to do the trick:

zfs replace <pool> <old-disk> <new-disk>

The type of pool doesn't matter.

Regards,
markm
Enda O''Connor
2008-Sep-03 10:46 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
Mark J. Musante wrote:
> On 3 Sep 2008, at 05:20, "F. Wessels" <wessels147 at yahoo.com> wrote:
>> Hi,
>>
>> can anybody describe the correct procedure to replace a disk (in a
>> working OK state) with another disk, without degrading my pool?
>
> This command ought to do the trick:
>
> zfs replace <pool> <old-disk> <new-disk>
>
> The type of pool doesn't matter.

Slight typo above: it's zpool replace, not zfs replace.

By the way, what is the pool config? I assume you have a pool that supports this :-)

Once the disk is added, a resilver will occur, so do not take snapshots till it has finished, as a snapshot restarts the resilver. This is fixed in snv_94, though.

Enda
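For example, with hypothetical pool and disk names, and no snapshots taken until the resilver is done:

# zpool replace tank c1t2d0 c1t3d0
# zpool status tank     (re-run until it reports "resilver completed")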
Ross
2008-Sep-03 11:05 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
I'm pretty sure you just need the zpool replace command:

# zpool replace <poolname> <olddisk> <newdisk>

Run that for the disk you want to replace and let it resilver. Once it's done, you can unconfigure the old disk with cfgadm and remove it. If you have multiple mirror vdevs, you'll need to run the command a few times. I expect you can replace several drives at once, but I've not tried that personally.
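For example (device names hypothetical, and the cfgadm attachment point depends on your controller):

# zpool replace tank c2t0d0 c3t0d0
# zpool status tank                       (wait for the resilver to finish)
# cfgadm -c unconfigure c2::dsk/c2t0d0    (then pull the old disk)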
Ross
2008-Sep-03 11:08 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
Gaah, my command got nerfed by the forum, sorry, should have previewed. What you want is:

# zpool replace poolname olddisk newdisk
Jerry K
2008-Sep-03 14:18 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
How would this work for servers that support only (2) drives, or systems that are configured to have pools of (2) drives, i.e. mirrors, where there is no additional space to add a new disk, as in the sample below?

I still support lots of V490s, which hold only (2) drives.

Thanks,

Jerry

Ross wrote:
> Gaah, my command got nerfed by the forum, sorry, should have previewed. What you want is:
>
> # zpool replace poolname olddisk newdisk
Bob Friesenhahn
2008-Sep-03 15:38 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
On Wed, 3 Sep 2008, Jerry K wrote:
> How would this work for servers that support only (2) drives, or systems
> that are configured to have pools of (2) drives, i.e. mirrors, where
> there is no additional space to add a new disk, as in the sample below?

You may be able to accomplish what you want by using an intermediate temporary disk and doubling the work (two replacements). Perhaps the server supports USB, so it can use an external USB drive as the initial replacement. There is also the possibility of replacing the disk with a suitably sized disk file stored on some other server or on an independent local filesystem with enough space. You could access temporary storage on another server using iSCSI. Server performance may suck while the inferior temporary device is in place.

Whatever you do, make sure that the intermediate storage is never any larger than the final device will be.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
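For example, a two-step swap through a file-backed vdev (pool name, path and size hypothetical; the file must be at least as large as the disks in the pool, but no larger than the final disk):

# mkfile 72g /export/tmpvdev                    (file on an independent filesystem)
# zpool replace mypool c0t1d0 /export/tmpvdev   (possibly with -f, mixing disks and files)
  ... wait for the resilver, then swap the physical disk ...
# zpool replace mypool /export/tmpvdev c0t1d0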
Jerry K
2008-Sep-03 16:10 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
Hello Bob,

Thank you for your reply. Your final sentence is a gem I will keep.

As far as the rest, I have a lot of production servers that are (2) drive systems, and I really hope that there is a mechanism to quickly R&R dead drives, resilvering aside. I guess I need to do some more RTFMing into this.

Jerry K.

Bob Friesenhahn wrote:
> You may be able to accomplish what you want by using an intermediate
> temporary disk and doubling the work (two replacements). [...]
>
> Whatever you do, make sure that the intermediate storage is never any
> larger than the final device will be.
Mattias Pantzare
2008-Sep-03 18:03 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
2008/9/3 Jerry K <sun.mail.list at oryx.cc>:
> Hello Bob,
>
> Thank you for your reply. Your final sentence is a gem I will keep.
>
> As far as the rest, I have a lot of production servers that are (2) drive
> systems, and I really hope that there is a mechanism to quickly R&R dead
> drives, resilvering aside. I guess I need to do some more RTFMing into
> this.

If the drive is dead, the pool is already in degraded mode. You simply replace the failed drive and tell zfs that it was replaced:

zpool replace pool device
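For example, after physically swapping the dead disk for a new one in the same slot (device name hypothetical):

# zpool replace tank c1t2d0

With no new_device argument, zpool assumes the replacement sits at the same path as the old disk and resilvers onto it.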
F. Wessels
2008-Sep-04 09:20 UTC
[zfs-discuss] What is the correct procedure to replace a non failed disk for another?
Thanks for the replies. I guess I misunderstood the manual:

    zpool replace [-f] pool old_device [new_device]

        Replaces old_device with new_device. This is equivalent to
        attaching new_device, waiting for it to resilver, and then
        detaching old_device. The size of new_device must be greater
        than or equal to the minimum size of all the devices in a
        mirror or raidz configuration.

        new_device is required if the pool is not redundant. If
        new_device is not specified, it defaults to old_device. This
        form of replacement is useful after an existing disk has
        failed and has been physically replaced. In this case, the
        new disk may have the same /dev/dsk path as the old device,
        even though it is actually a different disk. ZFS recognizes
        this.

The last paragraph talks about a disk that has failed and been physically replaced, and the first paragraph mentions resilvering, which is what threw me off.

To summarize: "zpool replace" can replace a disk in any vdev type without compromising redundancy. If the new disk fails during resilvering, the pool remains in its original state. After the new disk has resilvered, the old one gets detached. All this time the pool keeps its original redundancy (provided, of course, that no other failure kicks in).

PS. Why can't I see the comments from Bob and Jerry and others made after the last comment from Ross? I can see the comments on the text-based site at http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September but not at http://www.opensolaris.org/, where I'm currently posting this message.
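As a rough illustration of that summary (hypothetical names; exact output varies by release), zpool status during the replace shows both disks under a temporary "replacing" vdev, which is why redundancy is never lost:

# zpool replace tank c0t1d0 c0t2d0
# zpool status tank
  ...
        mirror       ONLINE
          c0t0d0     ONLINE
          replacing  ONLINE
            c0t1d0   ONLINE
            c0t2d0   ONLINE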