My drive did go bad on me; how do I replace it? I am running Solaris 10 U2.
(By the way, I thought U3 would be out in November. Will it be out soon?
Does anyone know?)

[11:35:14] server11: /export/home/me > zpool status -x
  pool: mypool2
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool2     DEGRADED     0     0     0
          raidz     DEGRADED     0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
            c3t5d0  ONLINE       0     0     0
            c3t6d0  UNAVAIL      0   679     0  cannot open

errors: No known data errors
Hi Krzys,

On Thu, 2006-11-30 at 12:09 -0500, Krzys wrote:
> my drive did go bad on me, how do I replace it?

You should be able to do this using zpool replace. There's output below
from me simulating your situation with file-based pools. This is
documented in Chapters 7 and 10 of the ZFS admin guide at:

http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qt0?a=view

Hope this helps,
cheers,
                        tim

# zpool status -v
  pool: ts-auto-pool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Nov 30 17:14:25 2006
config:

        NAME                   STATE     READ WRITE CKSUM
        ts-auto-pool           ONLINE       0     0     0
          mirror               ONLINE       0     0     0
            /ts-auto-pool.dat  ONLINE       0     0     0
            /file1             ONLINE       0     0     0

errors: No known data errors
# rm -rf /file1
(blammo - device gone!)
# zpool scrub ts-auto-pool
# zpool status -v
  pool: ts-auto-pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
        exist for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: scrub completed with 0 errors on Thu Nov 30 17:14:47 2006
config:

        NAME                   STATE     READ WRITE CKSUM
        ts-auto-pool           DEGRADED     0     0     0
          mirror               DEGRADED     0     0     0
            /ts-auto-pool.dat  ONLINE       0     0     0
            /file1             UNAVAIL      0     0     0  cannot open

errors: No known data errors
# zpool replace ts-auto-pool /file1 /file2
# zpool status -v
  pool: ts-auto-pool
 state: DEGRADED
 scrub: resilver completed with 0 errors on Thu Nov 30 17:15:13 2006
config:

        NAME                   STATE     READ WRITE CKSUM
        ts-auto-pool           DEGRADED     0     0     0
          mirror               DEGRADED     0     0     0
            /ts-auto-pool.dat  ONLINE       0     0     0
            replacing          DEGRADED     0     0     0
              /file1           UNAVAIL      0     0     0  cannot open
              /file2           ONLINE       0     0     0

errors: No known data errors
# zpool status -v
  pool: ts-auto-pool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Nov 30 17:15:13 2006
config:

        NAME                   STATE     READ WRITE CKSUM
        ts-auto-pool           ONLINE       0     0     0
          mirror               ONLINE       0     0     0
            /ts-auto-pool.dat  ONLINE       0     0     0
            /file2             ONLINE       0     0     0

errors: No known data errors
#

-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
Krzys wrote:
> my drive did go bad on me, how do I replace it? I am running Solaris 10
> U2 (by the way, I thought U3 would be out in November, will it be out
> soon? does anyone know?
[... zpool status output showing c3t6d0 UNAVAIL in the raidz ...]

Shut down the machine, replace the drive, reboot and type:

        zpool replace mypool2 c3t6d0

On earlier versions of ZFS I found it useful to do this at the login
prompt; it seemed fairly memory intensive.

- Bart

-- 
Bart Smaalders                  Solaris Kernel Performance
barts at cyber.eng.sun.com      http://blogs.sun.com/barts
One minor comment is to identify the replacement drive, like this:

# zpool replace mypool2 c3t6d0 c3t7d0

Otherwise, zpool will error...

cs

Bart Smaalders wrote:
> Shut down the machine, replace the drive, reboot
> and type:
>
>         zpool replace mypool2 c3t6d0
>
> On earlier versions of ZFS I found it useful to do this
> at the login prompt; it seemed fairly memory intensive.
>
> - Bart
Sorry, Bart is correct. From the zpool man page:

     If new_device is not specified, it defaults to
     old_device. This form of replacement is useful after an
     existing disk has failed and has been physically
     replaced. In this case, the new disk may have the same
     /dev/dsk path as the old device, even though it is
     actually a different disk. ZFS recognizes this.

cs

Cindy Swearingen wrote:
> One minor comment is to identify the replacement drive, like this:
>
> # zpool replace mypool2 c3t6d0 c3t7d0
>
> Otherwise, zpool will error...
>
> cs
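Putting the thread's advice together, a same-slot replacement session
might look like the sketch below. This is a hypothetical illustration,
not a transcript: the cfgadm attachment point is a guess for this
system, and on non-hot-swap hardware you would shut down to swap the
drive, as Bart suggests.

```shell
# Offline the failed disk first (harmless if it is already UNAVAIL)
zpool offline mypool2 c3t6d0

# Physically swap the drive. On a hot-swap chassis you might then
# reconfigure it; the attachment point c3::dsk/c3t6d0 is an assumption.
cfgadm -c configure c3::dsk/c3t6d0

# Same slot, same /dev/dsk path: no new_device argument is needed,
# per the man page excerpt above
zpool replace mypool2 c3t6d0

# Watch the resilver progress
zpool status -x mypool2
```

If the replacement disk lands in a different slot, name it explicitly
instead: zpool replace mypool2 c3t6d0 c3t7d0.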
Great, thank you, it certainly helped. I did not want to lose the data
on that disk, so I wanted to be safe rather than sorry... Thanks for
the help.

Chris

On Thu, 30 Nov 2006, Bart Smaalders wrote:
> Shut down the machine, replace the drive, reboot
> and type:
>
>         zpool replace mypool2 c3t6d0
>
> On earlier versions of ZFS I found it useful to do this
> at the login prompt; it seemed fairly memory intensive.
>
> - Bart
Hold on, so I need to add another drive to the system for the
replacement? I do not have any more slots in my system to add another
disk to it. :(

Chris

On Thu, 30 Nov 2006, Cindy Swearingen wrote:
> One minor comment is to identify the replacement drive, like this:
>
> # zpool replace mypool2 c3t6d0 c3t7d0
>
> Otherwise, zpool will error...
>
> cs
Ah, did not see your follow-up. Thanks.

Chris

On Thu, 30 Nov 2006, Cindy Swearingen wrote:
> Sorry, Bart is correct:
>
>      If new_device is not specified, it defaults to
>      old_device. This form of replacement is useful after an
>      existing disk has failed and has been physically
>      replaced. In this case, the new disk may have the same
>      /dev/dsk path as the old device, even though it is
>      actually a different disk. ZFS recognizes this.
>
> cs
In the same vein... I currently have a 400GB disk that is full of data
on a Linux system. If I buy 2 more disks and put them into a raidz'ed
ZFS pool under Solaris, is there a generally accepted way to build a
degraded array with the 2 disks, copy the data to the new filesystem,
and then move the original disk over to complete the array?

Thanks!
Thomas

On 11/30/06, Krzys <krzys at perfekt.net> wrote:
> Ah, did not see your follow up. Thanks.
>
> Chris
Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
> I currently have a 400GB disk that is full of data on a linux system.
> If I buy 2 more disks and put them into a raid-z'ed zfs under solaris,
> is there a generally accepted way to build a degraded array with the
> 2 disks, copy the data to the new filesystem, and then move the
> original disk to complete the array?

No, because we currently can't add disks to a raidz array. You could
create a mirror instead and then add in the other disk to make a
three-way mirror, though.

Even doing that would be dicey if you only have a single machine,
though, since Solaris can't natively read the popular Linux
filesystems. I believe there is freeware to do it, but nothing
supported.


David
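The three-way-mirror alternative David describes could be sketched
roughly as follows. The disk names (c1t1d0 etc.) and pool name are
placeholders, and the data copy is assumed to happen over the network
since Solaris cannot mount the Linux filesystem directly.

```shell
# Step 1: create a two-way mirror from the two new disks
zpool create tank mirror c1t1d0 c1t2d0

# Step 2: copy the data from the Linux box into the pool
# (e.g. rsync/ssh from the Linux side; details omitted)

# Step 3: once the old 400GB disk is free, attach it to an existing
# member to grow the mirror to three-way; ZFS resilvers automatically
zpool attach tank c1t1d0 c1t3d0
```

A mirror gives you one disk's worth of usable space instead of raidz's
two, which is the trade-off being discussed in the rest of the thread.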
So there is no current way to specify the creation of a 3-disk raid-z
array with a known missing disk?

On 12/5/06, David Bustos <David.Bustos at sun.com> wrote:
> No, because we currently can't add disks to a raidz array. You could
> create a mirror instead and then add in the other disk to make
> a three-way mirror, though.
>
> Even doing that would be dicey if you only have a single machine,
> though, since Solaris can't natively read the popular Linux filesystems.
> I believe there is freeware to do it, but nothing supported.
>
>
> David
Creating an array configuration with one element being a sparse file,
then removing that file, comes to mind, but I wouldn't want to be the
first to attempt it. ;-)

This message posted from opensolaris.org
> So there is no current way to specify the creation of a 3 disk raid-z
> array with a known missing disk?

Can someone answer that? Or does the zpool command NOT accommodate the
creation of a degraded raidz array?
On Nov 20, 2007 6:34 AM, MC <rac at eastlink.ca> wrote:
> Can someone answer that? Or does the zpool command NOT accommodate
> the creation of a degraded raidz array?

You can't start it degraded, but you can make it so. If one can make a
sparse file, then you'd be set: create the file, make a zpool out of
the two disks and the file, and then drop the file from the pool
_BEFORE_ copying over the data. I believe then you can add the third
disk as a replacement.

The gotcha (and why the sparse file may be needed) is that raidz will
only use, per member, the size of the smallest member.
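The sparse-file trick described above might look like the sketch below.
This is untested and the pool name, file path, size, and disk names are
all placeholders; since the poster himself hedges it, try it on scratch
disks first.

```shell
# Create a sparse 400GB file to stand in for the missing third disk
# (sparse, so it consumes almost no real space; size must be at least
# that of the smallest real disk)
mkfile -n 400g /var/tmp/fakedisk

# Build the 3-way raidz from the two real disks plus the sparse file
zpool create tank raidz c1t1d0 c1t2d0 /var/tmp/fakedisk

# Take the file member offline BEFORE writing any data, so nothing is
# ever stored on it and the pool simply runs degraded
zpool offline tank /var/tmp/fakedisk

# ... copy the data off the old 400GB disk into the pool ...

# Finally, replace the file with the now-freed 400GB disk and let the
# pool resilver back to a healthy 3-disk raidz
zpool replace tank /var/tmp/fakedisk c1t3d0
zpool status tank
```

The risk is that until the replace completes, the pool has no
redundancy: losing either real disk during the copy loses everything.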