Hi, I am trying to find a way to grow the filesystems in a Thumper. The idea is to take single disks offline and replace them with bigger ones. For this reason, I ran the following test:

mkfile 100m f1
mkfile 100m f2
mkfile 100m f3
mkfile 100m f4
mkfile 100m f5
mkfile 200m F1
mkfile 200m F2
mkfile 200m F3
mkfile 200m F4
mkfile 200m F5

zpool create test raidz2 /export/home/tmp/Z/f*

... create some files ...

zpool offline test /export/home/tmp/Z/f5
zpool replace test /export/home/tmp/Z/f5 /export/home/tmp/Z/F5
zpool offline test /export/home/tmp/Z/f4
zpool replace test /export/home/tmp/Z/f4 /export/home/tmp/Z/F4
zpool offline test /export/home/tmp/Z/f3
zpool replace test /export/home/tmp/Z/f3 /export/home/tmp/Z/F3
zpool offline test /export/home/tmp/Z/f2
zpool replace test /export/home/tmp/Z/f2 /export/home/tmp/Z/F2
zpool offline test /export/home/tmp/Z/f1
zpool replace test /export/home/tmp/Z/f1 /export/home/tmp/Z/F1

After that, the zpool did notice that there is more space:

zpool list
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
test   476M   1,28M  475M   0%   ONLINE  -

The ZFS filesystem, however, did not grow:

zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
test  728K  251M   297K   /test

How can I tell ZFS that it should use the whole new space?

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       joerg.schilling at fokus.fraunhofer.de (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
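[Editor's note: the replacement sequence above can be condensed into a loop. This is only a sketch using the file paths from the test; note that each `zpool replace` starts a resilver, and in a real pool you would wait for each resilver to finish before touching the next device.]

```shell
#!/bin/sh
# Sketch of the test above: replace each 100m backing file (f1..f5)
# with its 200m counterpart (F1..F5) in the pool "test".
POOL=test
DIR=/export/home/tmp/Z

for n in 5 4 3 2 1; do
    zpool offline "$POOL" "$DIR/f$n"
    zpool replace "$POOL" "$DIR/f$n" "$DIR/F$n"
    # In production, wait here until 'zpool status' shows the
    # resilver as complete before replacing the next device.
done

zpool list "$POOL"
zfs list "$POOL"
```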
Hi Jörg,

On Tuesday 02 February 2010 16:40:50 Joerg Schilling wrote:
> After that, the zpool did notice that there is more space:
>
> zpool list
> NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
> test   476M   1,28M  475M   0%   ONLINE  -

That's the size already right after the initial creation. After exporting and importing it again:

# zpool list
NAME  SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
test  976M  252K  976M   0%   ONLINE  -

> the ZFS however did not grow:
>
> zfs list
> NAME  USED  AVAIL  REFER  MOUNTPOINT
> test  728K  251M   297K   /test

# zfs list test
NAME  USED  AVAIL  REFER  MOUNTPOINT
test  139K  549M   37.5K  /test

I think you fell into the trap that zpool simply adds up the raw sizes of all devices. This is especially visible on a Thumper under heavy load: the read and write operations per time slice reported for each vdev seem to be just the sums of the individual devices underneath. But this still does not explain why the pool is larger after exporting and reimporting.

Cheers

Carsten
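[Editor's note: the export/import step Carsten mentions can be reproduced as below. This is a sketch assuming the file paths from the original test; the `-d` option is needed on import because the pool is backed by plain files rather than devices under /dev/dsk.]

```shell
# Re-read the device sizes by exporting and re-importing the pool.
# Historically this was the only way to make a pool pick up larger
# replacement devices.
zpool export test
zpool import -d /export/home/tmp/Z test   # -d: directory to scan for vdevs
zpool list test
```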
Carsten Aulbert <carsten.aulbert at aei.mpg.de> wrote:
> That's the size already right after the initial creation. After
> exporting and importing it again:
>
> # zpool list
> NAME  SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
> test  976M  252K  976M   0%   ONLINE  -

Mmm, it seems that I made a mistake while interpreting the results.
My zpool did not grow either.

Did yours grow?
If yes, what did I do wrong?

Jörg
On 02/02/2010 16:09, Joerg Schilling wrote:
> Mmm, it seems that I made a mistake while interpreting the results.
> My zpool did not grow either.
>
> Did yours grow?
> If yes, what did I do wrong?

What does this return:

zpool get autoexpand test

--
Darren J Moffat
Darren J Moffat <darrenm at opensolaris.org> wrote:
> What does this return:
>
> zpool get autoexpand test

zpool get autoexpand test
NAME  PROPERTY    VALUE  SOURCE
test  autoexpand  off    default

Thank you for this hint!

BTW: setting autoexpand afterwards did not help, but setting it before
I replaced the "media" resulted in a grown zpool and zfs.

Jörg
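[Editor's note: the ordering that worked here can be sketched as below, again using the file paths from the original test.]

```shell
# Enable autoexpand *before* replacing the devices; with it set,
# the pool grows as soon as the last small device has been replaced.
zpool set autoexpand=on test
zpool get autoexpand test    # should now report: on, source "local"

zpool replace test /export/home/tmp/Z/f1 /export/home/tmp/Z/F1
# ... likewise for the remaining devices ...
zpool list test              # SIZE reflects the larger devices
zfs list test                # AVAIL grows along with the pool
```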
Hi Joerg,

Enabling the autoexpand property after the disk replacement is complete should expand the pool. This looks like a bug. I can reproduce this issue with files; it seems to be working as expected for disks. See the output below.

Thanks,

Cindy

Create pool test with two 68 GB drives:

# zpool create test c2t2d0 c2t3d0
# zpool list test
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
test  136G  126K   136G  0%   1.00x  ONLINE  -
# zfs list test
NAME  USED   AVAIL  REFER  MOUNTPOINT
test  73.5K  134G   21K    /test

Replace the two 68 GB drives with 136 GB drives and set autoexpand:

# zpool replace test c2t2d0 c0t8d0
# zpool replace test c2t3d0 c0t9d0
# zpool list test
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
test  136G  166K   136G  0%   1.00x  ONLINE  -
# zfs list test
NAME  USED  AVAIL  REFER  MOUNTPOINT
test  90K   134G   21K    /test
# zpool set autoexpand=on test
# zpool list test
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
test  273G  150K   273G  0%   1.00x  ONLINE  -

On 02/02/10 09:22, Joerg Schilling wrote:
> BTW: setting autoexpand afterwards did not help, but setting it before
> I replaced the "media" resulted in a grown zpool and zfs.
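[Editor's note: besides the pool-wide autoexpand property, the same body of work (PSARC 2008/353, quoted later in this thread) added a per-device expansion command, `zpool online -e`. A sketch using the device names from Cindy's example; whether it is available depends on the build you are running.]

```shell
# Expand individual devices in place: 'zpool online -e' tells ZFS
# to use the full size of the named device without toggling the
# pool-wide autoexpand property.
zpool online -e test c0t8d0
zpool online -e test c0t9d0
zpool list test    # SIZE should now reflect the larger drives
```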
On 02/02/2010 16:48, Cindy Swearingen wrote:
> Enabling the autoexpand property after the disk replacement is complete
> should expand the pool. This looks like a bug. I can reproduce this
> issue with files. It seems to be working as expected for disks.
> See the output below.

If you use lofi on top of files rather than files directly, it works too.

--
Darren J Moffat
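[Editor's note: Darren's lofi variant can be sketched as below. `lofiadm -a` attaches a file as a block device and prints the device node it created (e.g. /dev/lofi/1); the pool then sits on those devices rather than on the files themselves. Paths are the ones from the original test.]

```shell
#!/bin/sh
# Back the pool with lofi block devices instead of plain files.
DIR=/export/home/tmp/Z
DEVS=

for n in 1 2 3 4 5; do
    mkfile 100m "$DIR/f$n"
    # lofiadm -a prints the attached device, e.g. /dev/lofi/1
    DEVS="$DEVS $(lofiadm -a "$DIR/f$n")"
done

zpool create test raidz2 $DEVS
```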
* On 02 Feb 2010, Darren J Moffat wrote:
> zpool get autoexpand test

This seems to be a new property -- it's not in my Solaris 10 or OpenSolaris 2009.06 systems, and they have always expanded immediately upon replacement. In what build number or official release does autoexpand appear, and does it always default to off? This will be important to know for upgrades.

Thanks.

--
-D.
dgc at uchicago.edu
NSIT
University of Chicago
On 02/02/2010 17:29, David Champion wrote:
> In what build number or official release does autoexpand appear, and
> does it always default to off? This will be important to know for
> upgrades.

changeset:   10155:847676ec1c5b
date:        Mon Jun 08 10:35:50 2009 -0700
description:
        PSARC 2008/353 zpool autoexpand property
        6475340 when lun expands, zfs should expand too
        6563887 in-place replacement allows for smaller devices
        6606879 should be able to grow pool without a reboot or export/import
        6844090 zfs should be able to mirror to a smaller disk

That would be build 94; given that 2009.06 was build 111b, it should be there.

--
Darren J Moffat
On Feb 2, 2010, at 9:29 AM, David Champion wrote:
> This seems to be a new property -- it's not in my Solaris 10 or
> OpenSolaris 2009.06 systems, and they have always expanded immediately
> upon replacement.

[without digging through the release notes, relying instead on grey memory :-)]

This behaviour has changed twice. Long ago, pools would autoexpand. This is a bad thing by default, so it was changed such that the expansion would only occur on pool import (around 3-4 years ago). The autoexpand property allows you to expand without an export/import (and arrived around 18 months ago). It is not surprising that various Solaris 10 releases/patches would have one of the three behaviours.

-- richard
Hi David,

This feature was integrated into build 117, which would be beyond your OpenSolaris 2009.06. We anticipate this feature will be available in an upcoming Solaris 10 release. You can read about it here:

http://docs.sun.com/app/docs/doc/817-2271/githb?a=view

ZFS Device Replacement Enhancements

Thanks,

cindy

On 02/02/10 10:29, David Champion wrote:
> In what build number or official release does autoexpand appear, and
> does it always default to off? This will be important to know for
> upgrades.
* On 02 Feb 2010, Richard Elling wrote:
> This behaviour has changed twice. Long ago, pools would autoexpand.
> This is a bad thing by default, so it was changed such that the
> expansion would only occur on pool import (around 3-4 years ago). The
> autoexpand property allows you to expand without an export/import (and
> arrived around 18 months ago).

Well well, I guess it's been a while since I actually tested this. :)
Thanks, Richard. I'll watch for autoexpand in the next releases of s10/osol.

--
-D.
dgc at uchicago.edu
NSIT
University of Chicago