Darin Perusich
2010-Aug-28 00:54 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
Hello All,

I'm sure this has been discussed previously but I haven't been able to find an
answer to this. I've added another raidz1 vdev to an existing storage pool and
the increased available storage isn't reflected in the 'zfs list' output. Why
is this?

The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
Generic_139555-08. The system does not have the latest patches, which might be
the cure.

Thanks!

Here's what I'm seeing.

zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1

zpool status
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0

zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
datapool   108K   196G    18K  /datapool

zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3

zpool status
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d2  ONLINE       0     0     0
            c1t50060E800042AA70d3  ONLINE       0     0     0

zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
datapool   112K   392G    18K  /datapool

zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
datapool   796G   471K   796G     0%  ONLINE  -

--
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darinper at cognigencorp.com
Edho P Arief
2010-Aug-28 04:27 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
On Sat, Aug 28, 2010 at 7:54 AM, Darin Perusich
<Darin.Perusich at cognigencorp.com> wrote:
> Hello All,
>
> I'm sure this has been discussed previously but I haven't been able to find an
> answer to this. I've added another raidz1 vdev to an existing storage pool and
> the increased available storage isn't reflected in the 'zfs list' output. Why
> is this?
>

you must do zpool export followed by zpool import

--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
Tomas Ögren
2010-Aug-28 09:56 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
On 27 August, 2010 - Darin Perusich sent me these 2,1K bytes:

> Hello All,
>
> I'm sure this has been discussed previously but I haven't been able to find an
> answer to this. I've added another raidz1 vdev to an existing storage pool and
> the increased available storage isn't reflected in the 'zfs list' output. Why
> is this?
>
> The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
> Generic_139555-08. The system does not have the latest patches which might be
> the cure.
>
> Thanks!
>
> Here's what I'm seeing.
> zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1

Just fyi, this is an inefficient variant of a mirror. More cpu required
and lower performance.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
Mattias Pantzare
2010-Aug-28 10:04 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
On Sat, Aug 28, 2010 at 02:54, Darin Perusich
<Darin.Perusich at cognigencorp.com> wrote:
> Hello All,
>
> I'm sure this has been discussed previously but I haven't been able to find an
> answer to this. I've added another raidz1 vdev to an existing storage pool and
> the increased available storage isn't reflected in the 'zfs list' output. Why
> is this?
>
> The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
> Generic_139555-08. The system does not have the latest patches which might be
> the cure.
>
> Thanks!
>
> Here's what I'm seeing.
> zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1
>
> zpool status
>   pool: datapool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME                       STATE     READ WRITE CKSUM
>         datapool                   ONLINE       0     0     0
>           raidz1                   ONLINE       0     0     0
>             c1t50060E800042AA70d0  ONLINE       0     0     0
>             c1t50060E800042AA70d1  ONLINE       0     0     0
>
> zfs list
> NAME       USED  AVAIL  REFER  MOUNTPOINT
> datapool   108K   196G    18K  /datapool
>
> zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3
>
> zpool status
>   pool: datapool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME                       STATE     READ WRITE CKSUM
>         datapool                   ONLINE       0     0     0
>           raidz1                   ONLINE       0     0     0
>             c1t50060E800042AA70d0  ONLINE       0     0     0
>             c1t50060E800042AA70d1  ONLINE       0     0     0
>           raidz1                   ONLINE       0     0     0
>             c1t50060E800042AA70d2  ONLINE       0     0     0
>             c1t50060E800042AA70d3  ONLINE       0     0     0
>
> zfs list
> NAME       USED  AVAIL  REFER  MOUNTPOINT
> datapool   112K   392G    18K  /datapool

I think you have to explain your problem more, 392G is more than 196G?
eXeC001er
2010-Aug-28 10:32 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
> On Sat, Aug 28, 2010 at 02:54, Darin Perusich
> <Darin.Perusich at cognigencorp.com> wrote:
> > Hello All,
> >
> > I'm sure this has been discussed previously but I haven't been able to find an
> > answer to this. I've added another raidz1 vdev to an existing storage pool and
> > the increased available storage isn't reflected in the 'zfs list' output. Why
> > is this?
> >
> > The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
> > Generic_139555-08. The system does not have the latest patches which might be
> > the cure.
> >
> > Thanks!
> >
> > Here's what I'm seeing.
> > zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1
> >
> > zpool status
> >   pool: datapool
> >  state: ONLINE
> >  scrub: none requested
> > config:
> >
> >         NAME                       STATE     READ WRITE CKSUM
> >         datapool                   ONLINE       0     0     0
> >           raidz1                   ONLINE       0     0     0
> >             c1t50060E800042AA70d0  ONLINE       0     0     0
> >             c1t50060E800042AA70d1  ONLINE       0     0     0
> >
> > zfs list
> > NAME       USED  AVAIL  REFER  MOUNTPOINT
> > datapool   108K   196G    18K  /datapool
> >
> > zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3
> >
> > zpool status
> >   pool: datapool
> >  state: ONLINE
> >  scrub: none requested
> > config:
> >
> >         NAME                       STATE     READ WRITE CKSUM
> >         datapool                   ONLINE       0     0     0
> >           raidz1                   ONLINE       0     0     0
> >             c1t50060E800042AA70d0  ONLINE       0     0     0
> >             c1t50060E800042AA70d1  ONLINE       0     0     0
> >           raidz1                   ONLINE       0     0     0
> >             c1t50060E800042AA70d2  ONLINE       0     0     0
> >             c1t50060E800042AA70d3  ONLINE       0     0     0
> >
> > zfs list
> > NAME       USED  AVAIL  REFER  MOUNTPOINT
> > datapool   112K   392G    18K  /datapool

Darin, you created the pool from two raidz1 vdevs, so the resulting pool size is
2 * (the size of one raidz1 vdev).

> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Darin Perusich
2010-Aug-30 12:47 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
On Saturday, August 28, 2010 06:04:17 am Mattias Pantzare wrote:
> On Sat, Aug 28, 2010 at 02:54, Darin Perusich
> <Darin.Perusich at cognigencorp.com> wrote:
> > Hello All,
> >
> > I'm sure this has been discussed previously but I haven't been able to
> > find an answer to this. I've added another raidz1 vdev to an existing
> > storage pool and the increased available storage isn't reflected in the
> > 'zfs list' output. Why is this?
> >
> > The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
> > Generic_139555-08. The system does not have the latest patches which
> > might be the cure.
> >
> > Thanks!
>
> I think you have to explain your problem more, 392G is more than 196G?

This is actually the wrong output; it was the end of a LONG day. Here's the
correct output.

zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1

zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
datapool   398G   191K   398G     0%  ONLINE  -

zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
datapool   91K   196G     1K  /datapool

zpool add datapool raidz c1t50060E800042AA70d2 c1t50060E800042AA70d3

zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
datapool   796G   231K   796G     0%  ONLINE  -

zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
datapool  111K   392G    18K  /datapool

--
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darinper at cognigencorp.com
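The corrected numbers illustrate the usual zpool list vs zfs list difference: zpool list reports raw capacity including raidz parity, while zfs list reports roughly the space usable after parity is deducted. A minimal arithmetic sketch (the ~199G per-disk size is an assumption read off the 398G two-disk pool above, and the few-GB gap between 199G and the reported 196G is pool overhead this model ignores):

```python
# Rough model of the pool above: raidz1 vdevs of 2 disks, ~199 GB raw per disk.
# Sizes are approximations read off the posted output, not ZFS API calls.
DISK_GB = 199
DISKS_PER_VDEV = 2
PARITY_DISKS = 1  # raidz1 reserves one disk's worth of parity per vdev

def zpool_list_size(n_vdevs):
    """What 'zpool list' SIZE reports: total raw device space, parity included."""
    return n_vdevs * DISKS_PER_VDEV * DISK_GB

def zfs_list_avail(n_vdevs):
    """Roughly what 'zfs list' AVAIL reports: raw space minus parity
    (ignoring the metadata/reservation overhead behind the last few GB)."""
    return n_vdevs * (DISKS_PER_VDEV - PARITY_DISKS) * DISK_GB

print(zpool_list_size(1), zfs_list_avail(1))  # 398 199 -> matches 398G / ~196G
print(zpool_list_size(2), zfs_list_avail(2))  # 796 398 -> matches 796G / ~392G
```

So both commands were right all along; they simply account for parity differently.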
Darin Perusich
2010-Aug-30 12:49 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
On Saturday, August 28, 2010 12:27:36 am Edho P Arief wrote:
> On Sat, Aug 28, 2010 at 7:54 AM, Darin Perusich
> <Darin.Perusich at cognigencorp.com> wrote:
> > Hello All,
> >
> > I'm sure this has been discussed previously but I haven't been able to
> > find an answer to this. I've added another raidz1 vdev to an existing
> > storage pool and the increased available storage isn't reflected in the
> > 'zfs list' output. Why is this?
>
> you must do zpool export followed by zpool import

I tried this but it didn't have any effect.

--
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darinper at cognigencorp.com
Darin Perusich
2010-Aug-30 13:04 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
On Saturday, August 28, 2010 05:56:27 am Tomas Ögren wrote:
> On 27 August, 2010 - Darin Perusich sent me these 2,1K bytes:
> > Hello All,
> >
> > I'm sure this has been discussed previously but I haven't been able to
> > find an answer to this. I've added another raidz1 vdev to an existing
> > storage pool and the increased available storage isn't reflected in the
> > 'zfs list' output. Why is this?
> >
> > The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
> > Generic_139555-08. The system does not have the latest patches which
> > might be the cure.
> >
> > Thanks!
> >
> > Here's what I'm seeing.
> > zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1
>
> Just fyi, this is an inefficient variant of a mirror. More cpu required
> and lower performance.

This is a testing setup; the production pool is currently 1 raidz1 vdev split
across 6 disks. Thanks for the heads up though.

--
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darinper at cognigencorp.com
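Tomas's efficiency point can be seen from the parity fraction alone: a raidz1 vdev gives up one disk in n to parity, so a 2-disk raidz1 has the same 50% usable capacity as a mirror (with extra parity computation on top), while the 6-disk production layout keeps about 83%. A small sketch of that arithmetic (plain math, not tied to any ZFS interface):

```python
def raidz1_usable_fraction(n_disks):
    """Fraction of raw space left after raidz1 parity: (n - 1) / n."""
    if n_disks < 2:
        raise ValueError("raidz1 needs at least 2 disks")
    return (n_disks - 1) / n_disks

print(raidz1_usable_fraction(2))  # 0.5 -- mirror-equivalent capacity
print(raidz1_usable_fraction(6))  # 0.8333... -- the 6-disk production pool
```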
Richard Elling
2010-Aug-30 13:46 UTC
[zfs-discuss] zfs lists discrepancy after added a new vdev to pool
This is a FAQ.

Why doesn't the space that is reported by the zpool list command and the
zfs list command match?
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
 -- richard

On Aug 30, 2010, at 5:47 AM, Darin Perusich wrote:
>
> On Saturday, August 28, 2010 06:04:17 am Mattias Pantzare wrote:
>> On Sat, Aug 28, 2010 at 02:54, Darin Perusich
>> <Darin.Perusich at cognigencorp.com> wrote:
>>> Hello All,
>>>
>>> I'm sure this has been discussed previously but I haven't been able to
>>> find an answer to this. I've added another raidz1 vdev to an existing
>>> storage pool and the increased available storage isn't reflected in the
>>> 'zfs list' output. Why is this?
>>>
>>> The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
>>> Generic_139555-08. The system does not have the latest patches which
>>> might be the cure.
>>>
>>> Thanks!
>>
>> I think you have to explain your problem more, 392G is more than 196G?
>
> This is actually the wrong output, it was the end of a LONG day. Here's the
> correct output.
>
> zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1
>
> zpool list
> NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> datapool   398G   191K   398G     0%  ONLINE  -
>
> zfs list
> NAME      USED  AVAIL  REFER  MOUNTPOINT
> datapool   91K   196G     1K  /datapool
>
> zpool add datapool raidz c1t50060E800042AA70d2 c1t50060E800042AA70d3
>
> zpool list
> NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> datapool   796G   231K   796G     0%  ONLINE  -
>
> zfs list
> NAME      USED  AVAIL  REFER  MOUNTPOINT
> datapool  111K   392G    18K  /datapool
>
> --
> Darin Perusich
> Unix Systems Administrator
> Cognigen Corporation
> 395 Youngs Rd.
> Williamsville, NY 14221
> Phone: 716-633-3463
> Email: darinper at cognigencorp.com

--
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
ZFS and performance consulting
http://www.RichardElling.com