Hi,

I've read before about increasing a zpool's size by replacing its vdevs.

The initial pool was a raidz2 with four 640 GB disks. I've replaced each disk with a 1 TB disk by taking it out, inserting the new disk, running cfgadm -c configure on the port, and then zpool replace bigpool c6tXd0.

The problem is that the pool size is unchanged (2.33 TB raw), as seen below:

# zpool list bigpool
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
bigpool  2.33T  1.41T   942G    60%  1.00x  ONLINE  -

It should be ~3.8-3.9 TB, right?

I've performed a zpool export/import, but to no avail. I'm running OpenSolaris build 128a.

Here is the zpool status:

# zpool status bigpool
  pool: bigpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        bigpool     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0

errors: No known data errors

and here are the disks:

# format </dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c6t0d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,34de@1f,2/disk@0,0
       1. c6t1d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,34de@1f,2/disk@1,0
       2. c6t2d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /pci@0,0/pci8086,34de@1f,2/disk@2,0
       3. c6t3d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /pci@0,0/pci8086,34de@1f,2/disk@3,0
       4. c6t4d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /pci@0,0/pci8086,34de@1f,2/disk@4,0
       5. c6t5d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /pci@0,0/pci8086,34de@1f,2/disk@5,0
Specify disk (enter its number):

Is there something that I am missing?
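(For reference, the per-disk swap sequence described above boils down to the following loop. This is only a sketch: the sata0/2 attachment point name is illustrative, the real name comes from cfgadm -al, and each resilver should be allowed to finish before the next disk is pulled.)

# cfgadm -al                             # find the attachment point of the old disk
# cfgadm -c unconfigure sata0/2          # release the old 640 GB disk on its port
  <physically swap in the 1 TB disk>
# cfgadm -c configure sata0/2            # bring the new disk online
# zpool replace bigpool c6t2d0           # resilver onto the new disk
# zpool status bigpool                   # wait for the resilver, then repeat for c6t3d0..c6t5d0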
Did you set autoexpand on? Alternatively, did you try doing a 'zpool online bigpool <disk>' for each disk after the replace completed?

On Mon, 7 Dec 2009, Alexandru Pirvulescu wrote:

> The problem is that the pool size is unchanged (2.33 TB raw), as seen below:
>
> # zpool list bigpool
> NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> bigpool  2.33T  1.41T   942G    60%  1.00x  ONLINE  -
>
> It should be ~3.8-3.9 TB, right?

Regards,
markm
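(Both suggestions as commands, for reference. This is a sketch: zpool online -e requests explicit expansion of a replaced device on builds that support the flag; whether build 128a has -e is not confirmed here, and plain zpool online as written above is the conservative form.)

# zpool set autoexpand=on bigpool        # grow automatically when larger devices are detected
# zpool online -e bigpool c6t2d0         # or expand each replaced disk by hand
# zpool online -e bigpool c6t3d0
# zpool online -e bigpool c6t4d0
# zpool online -e bigpool c6t5d0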
On Mon, Dec 7, 2009 at 3:41 PM, Alexandru Pirvulescu <sigxcpu@gmail.com> wrote:

> The problem is that the pool size is unchanged (2.33 TB raw), as seen below:
>
> # zpool list bigpool
> NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> bigpool  2.33T  1.41T   942G    60%  1.00x  ONLINE  -
>
> It should be ~3.8-3.9 TB, right?

An autoexpand property was added a few months ago for zpools. This needs to be turned on to enable automatic vdev expansion. For example:

# zpool set autoexpand=on bigpool


Ed Plese
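(To verify the change took effect, checking the property and then the pool size should be enough; a quick sketch:)

# zpool get autoexpand bigpool           # VALUE should read "on"
# zpool list bigpool                     # SIZE should now reflect the four 1 TB disks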
Thank you. That fixed the problem.

None of the tutorials on the Internet mentioned autoexpand.

Again, thank you everybody for the quick replies and for solving my problem.

Alex

On Dec 7, 2009, at 11:48 PM, Ed Plese wrote:

> An autoexpand property was added a few months ago for zpools. This needs to be turned on to enable automatic vdev expansion. For example:
>
> # zpool set autoexpand=on bigpool
>
> Ed Plese
Hi Alex,

The SXCE Admin Guide is generally up-to-date on docs.sun.com. The section that covers the autoreplace property and the default behavior is here:

http://docs.sun.com/app/docs/doc/817-2271/gazgd?a=view

Thanks,

Cindy

On 12/07/09 14:50, Alexandru Pirvulescu wrote:

> Thank you. That fixed the problem.
>
> None of the tutorials on the Internet mentioned autoexpand.
>
> Again, thank you everybody for the quick replies and for solving my problem.
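(For completeness, both pool properties mentioned in this thread can be inspected in one go; a sketch, assuming zpool get's comma-separated property list:)

# zpool get autoreplace,autoexpand bigpool   # show current values and whether they were set locally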