I get the following output when I run a zpool status, but I am a little
confused about why c9t8d0 is more "left aligned" than the rest of the
disks in the pool. What does it mean?

$ zpool status blmpool
  pool: blmpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        blmpool     ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c9t0d0  ONLINE       0     0     0
            c9t1d0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0
            c9t5d0  ONLINE       0     0     0
            c9t6d0  ONLINE       0     0     0
            c9t7d0  ONLINE       0     0     0
          c9t8d0    ONLINE       0     0     0
-- 
This message posted from opensolaris.org
On 27 May, 2010 - Per Jorgensen sent me these 1,0K bytes:

> I get the following output when I run a zpool status, but I am a
> little confused about why c9t8d0 is more "left aligned" than the rest
> of the disks in the pool. What does it mean?

Because someone forced it in without redundancy (or created the pool
that way). Your pool is in a bad state: c9t8d0 has no redundancy, so if
that one disk fails, your pool is toast.

zpool history should at least be able to tell you when it happened.

> $ zpool status blmpool
>   pool: blmpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         blmpool     ONLINE       0     0     0
>           raidz2    ONLINE       0     0     0
>             c9t0d0  ONLINE       0     0     0
>             c9t1d0  ONLINE       0     0     0
>             c9t3d0  ONLINE       0     0     0
>             c9t4d0  ONLINE       0     0     0
>             c9t5d0  ONLINE       0     0     0
>             c9t6d0  ONLINE       0     0     0
>             c9t7d0  ONLINE       0     0     0
>           c9t8d0    ONLINE       0     0     0

/Tomas
-- 
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
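For reference, a minimal sketch of that check, using the pool name from
the output above. Every zpool command ever run against the pool is
recorded with a timestamp, so filtering for "add" should show when (and
how) c9t8d0 went in, if it was added after pool creation:

  $ zpool history blmpool | grep add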
On May 27, 2010, at 12:37 PM, Per Jorgensen wrote:

> I get the following output when I run a zpool status, but I am a
> little confused about why c9t8d0 is more "left aligned" than the rest
> of the disks in the pool. What does it mean?

It means that it is another top-level vdev in your pool. Basically you
have two top-level vdevs: one is your raidz2 vdev containing 7 disks,
and the other is the single-disk top-level vdev c9t8d0. I guess it was
added like this: "zpool add -f blmpool c9t8d0". Without -f it would
complain about mismatched replication levels. You can check the pool
history to see when exactly it was done.

> $ zpool status blmpool
>   pool: blmpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         blmpool     ONLINE       0     0     0
>           raidz2    ONLINE       0     0     0
>             c9t0d0  ONLINE       0     0     0
>             c9t1d0  ONLINE       0     0     0
>             c9t3d0  ONLINE       0     0     0
>             c9t4d0  ONLINE       0     0     0
>             c9t5d0  ONLINE       0     0     0
>             c9t6d0  ONLINE       0     0     0
>             c9t7d0  ONLINE       0     0     0
>           c9t8d0    ONLINE       0     0     0
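Roughly, the two command forms behave like this (a sketch; the exact
wording of the refusal varies between releases):

  # zpool add blmpool c9t8d0
    (refused: zpool reports a mismatched replication level, raidz
    pool vs. plain disk, and says -f is needed to override)
  # zpool add -f blmpool c9t8d0
    (succeeds, adding c9t8d0 as a lone, unredundant top-level vdev)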
Thanks for the quick responses, and yes, the history shows just what
you said :(

Is there a way I can get c9t8d0 out of the pool, or how do I get the
pool back to optimal redundancy?
-- 
This message posted from opensolaris.org
On 05/27/10 09:16 PM, Per Jorgensen wrote:

> Thanks for the quick responses, and yes, the history shows just what
> you said :(
>
> Is there a way I can get c9t8d0 out of the pool, or how do I get the
> pool back to optimal redundancy?

No, you will have to destroy the pool and start over. Or, if that isn't
an option, attach a mirror device to c9t8d0.

-- 
Ian.
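A minimal sketch of that attach. Here c9t9d0 is a hypothetical device
name standing in for whatever spare disk is actually available, not a
disk from the pool above:

  # zpool attach blmpool c9t8d0 c9t9d0

This converts the lone c9t8d0 into a two-way mirror top-level vdev and
starts a resilver automatically; once it completes, every top-level
vdev in the pool can again survive at least one disk failure.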
On Thu, May 27, 2010 at 2:16 AM, Per Jorgensen <pej at combox.dk> wrote:

> Is there a way I can get c9t8d0 out of the pool, or how do I get the
> pool back to optimal redundancy?

It's not possible to remove top-level vdevs right now. When the
mythical bp_rewrite shows up, then you can. For now, the only thing you
can do to save your pool is attach another disk (or two) as a mirror.

-B

-- 
Brandon High : bhigh at freaks.com