Hello everybody,

I have asked the same question in the FreeBSD forums, but had no luck.
Apart from this, there might be a bug somewhere, so I am re-asking the
question on this list. Here is how it goes (three posts):

post 1:

"I am experimenting with an installation of FreeBSD-9-STABLE/amd64 on
VirtualBox that uses gptzfsboot on a raid-1 (mirrored) ZFS pool. My
problem is that I need to grow the ZFS partitions. I followed this
guide (http://support.freenas.org/ticket/342), which is for FreeNAS,
and encountered a few problems.

# gpart show
=>       34  40959933  ada0  GPT  (19G)
         34       128     1  freebsd-boot  (64k)
        162  35651584     2  freebsd-zfs  (17G)
   35651746   5308221     3  freebsd-swap  (2.5G)

=>       34  40959933  ada1  GPT  (19G)
         34       128     1  freebsd-boot  (64k)
        162  35651584     2  freebsd-zfs  (17G)
   35651746   5308221     3  freebsd-swap  (2.5G)

# zpool status
  pool: zroot
 state: ONLINE
  scan: resilvered 912M in 1h3m with 0 errors on Sat Mar 10 14:01:17 2012
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada0p2  ONLINE       0     0     0
	    ada1p2  ONLINE       0     0     0

errors: No known data errors

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  5.97G  3.69G  2.28G    61%  1.00x  ONLINE  -

Let me give you some information about my setup before explaining my
problems. As you can see, *gpart* shows that my ada0p2 and ada1p2
partitions (used in zroot) are 17G in size, while *zpool list* shows
that zroot has a size of 5.97G (which is the initial size of the
virtual machine's disks, before I resized them).

The problem I encountered when following the aforementioned procedure
was that I was unable to export zroot (the procedure says to export the
pool, "resize" the partitions with *gparted*, and then import the
pool), because I was receiving a message that some of my filesystems
were busy (in single-user mode, "/" was busy). Thus, in order to
resolve this issue, I booted from a FreeBSD 9 RELEASE CDROM, imported
(*-f*) my zpool, and followed the procedure of resizing my filesystems.

Does anyone have a better idea as to what I should do in order to make
*zpool* see all the available space of the partitions it is using?

Thank you all for your time in advance,

mamalos"

post 2:

"Ah, and not to forget: I have enabled the autoexpand property of
*zpool* (to be honest, I've enabled, disabled, re-enabled it, and so
forth, many times, because somewhere I read that it might be needed,
sometimes...), with no luck."

post 3:

"Since nobody has an answer so far, let me ask another thing. Instead
of deleting ada0p2 and ada1p2 and then recreating them from the same
starting block but with a greater size, could I have just created two
new partitions (ada0p3 and ada1p3) and added them to the pool as a new
mirror? Because if that's the case, then I could try that out, since it
seems to have the same result.

Not that this answers my question, but at least it's a workaround."

As stated in these posts, it's really strange that zpool list doesn't
seem to react even when I set the expand flag (or autoexpand, which is
the same), hence my concern that this could be a bug.

Thank you all for your time,

--
George Mamalakis

IT and Security Officer
Electrical and Computer Engineer (Aristotle Un. of Thessaloniki),
MSc (Imperial College London)

Department of Electrical and Computer Engineering
Faculty of Engineering
Aristotle University of Thessaloniki

phone number : +30 (2310) 994379
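[A minimal diagnostic sketch of the mismatch described above, using the pool and device names from the thread; ZFS v28 as shipped in FreeBSD 9 supports these commands:]

```sh
# Compare the pool size ZFS reports with the partition sizes gpart shows.
zpool list zroot
gpart show ada0 ada1

# Check whether automatic expansion is enabled for the pool
# (valid values are on/off, not yes/no).
zpool get autoexpand zroot
```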
On Thu, 15 Mar 2012 14:00 +0200, George Mamalakis wrote:

> # gpart show
> =>       34  40959933  ada0  GPT  (19G)
>          34       128     1  freebsd-boot  (64k)
>         162  35651584     2  freebsd-zfs  (17G)
>    35651746   5308221     3  freebsd-swap  (2.5G)
>
> =>       34  40959933  ada1  GPT  (19G)
>          34       128     1  freebsd-boot  (64k)
>         162  35651584     2  freebsd-zfs  (17G)
>    35651746   5308221     3  freebsd-swap  (2.5G)

There's one mistake I'd point out: your ZFS partitions are followed by
your swap partitions. It would be a lot easier if the ZFS partition
were the last one on each drive.

Since you are using VirtualBox, I would simply create a new pair of
virtual drives with the desired sizes and attach these to your VM.
Next, create new boot, swap, and ZFS partitions, in this particular
order, on the new drives. Create a ZFS pool using the new ZFS
partitions on the new drives, and transfer the old system from the old
drives to the new ones, using a recursive snapshot and the zfs
send/receive commands.

Remember to set the bootfs property on the newly created ZFS pool prior
to reboot.

--
+-------------------------------+------------------------------------+
| Vennlig hilsen,               | Best regards,                      |
| Trond Endrestøl,              | Trond Endrestøl,                   |
| IT-ansvarlig,                 | System administrator,              |
| Fagskolen Innlandet,          | Gjøvik Technical College, Norway,  |
| tlf. dir. 61 14 54 39,        | Office.....: +47 61 14 54 39,      |
| tlf. mob. 952 62 567,         | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.      | Switchboard: +47 61 14 54 00.      |
+-------------------------------+------------------------------------+
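[The migration Trond outlines could be sketched roughly as follows. The new device names (ada2/ada3), the swap size, and the new pool name are assumptions for illustration, not taken from the thread:]

```sh
# On each new drive (ada2 shown; repeat for ada3): create boot, swap,
# and ZFS partitions in that order, so ZFS is last and can grow later.
gpart create -s gpt ada2
gpart add -t freebsd-boot -s 128 ada2        # 128 sectors = 64k, as in the old layout
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
gpart add -t freebsd-swap -s 2560M ada2      # swap size is an example
gpart add -t freebsd-zfs ada2                # rest of the drive

# Create the new mirrored pool and copy the old system over with a
# recursive snapshot and a replication stream.
zpool create newroot mirror ada2p3 ada3p3
zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate | zfs receive -d -F newroot

# Make the new pool bootable before rebooting. The exact bootfs value
# depends on which dataset holds "/" in your layout.
zpool set bootfs=newroot newroot
```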
On 15.03.12 14:00, George Mamalakis wrote:

> "I am experimenting with an installation of FreeBSD-9-STABLE/amd64 on
> VirtualBox that uses gptzfsboot on a raid-1 (mirrored) ZFS pool. My
> problem is that I need to grow the ZFS partitions. I followed this
> guide (http://support.freenas.org/ticket/342), which is for FreeNAS,
> and encountered a few problems.

You are using FreeBSD 9, while you follow instructions for FreeNAS. The
ZFS version supported in FreeNAS is way behind that in FreeBSD 9. In
particular, ZFS v28, which is in FreeBSD 9, supports the autoexpand
property, which does not require an export/import of the pool. You
mentioned you did set autoexpand, but perhaps didn't really 'expand'
the zpool vdevs.

> "Since nobody has an answer so far, let me ask another thing. Instead
> of deleting ada0p2 and ada1p2 and then recreating them from the same
> starting block but with a greater size, could I have just created two
> new partitions (ada0p3 and ada1p3) and added them to the pool as a new
> mirror?

The proper way to expand a zpool is by replacing each underlying
storage device with a larger one. Replacing is the key word here. ZFS
will not care if the same device suddenly became larger (if I am not
mistaken).

So, to expand your zpool you have basically two options:

1. Detach one of the mirror members, make sure you clear the ZFS
   metadata from the beginning and the end of the partition, recreate
   your partitions with larger sizes, then attach to the mirror. After
   resilvering, and possibly verifying with a scrub that you are not
   losing data, repeat with the other mirror member. If you have
   autoexpand=on set, your zpool should grow.

2. If you can add more devices, then add a larger device to the mirror,
   or, if you can, add two new larger devices to the mirror. After this
   completes, remove the old, smaller mirror members. As with the other
   option, your zpool should grow.

Do not add another mirror to the zpool if you want to remove the old
devices. Doing so would create a second vdev in the zpool and would
spread data over both (raid0, or stripe). Current versions of ZFS
cannot remove vdevs from a zpool, so in order to remove the smaller
devices you would have to back up, recreate the zpool, and restore the
data.

Daniel
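[Daniel's first option might look like this in practice. This is a sketch assuming the poster's layout; the resize target is an example, and the advice to clear old ZFS labels from the partition ends (e.g. with dd) before reattaching is left out for brevity:]

```sh
# One mirror member at a time: detach it from the pool.
zpool detach zroot ada1p2

# Delete the swap partition that sits behind the ZFS partition, grow
# the ZFS partition into the freed space, then recreate swap after it.
gpart delete -i 3 ada1
gpart resize -i 2 -s 15G ada1      # example size; leave room for swap
gpart add -t freebsd-swap ada1

# Reattach; ZFS resilvers onto the (now larger) partition.
zpool attach zroot ada0p2 ada1p2
zpool status zroot                 # wait for the resilver to complete

# Repeat for ada0p2. With autoexpand enabled, the pool should then
# pick up the extra space.
zpool set autoexpand=on zroot
```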
On Thu, Mar 15, 2012 at 02:00:48PM +0200, George Mamalakis wrote:

> As you can see, *gpart* shows that my ada0p2 and ada1p2 partitions
> (used in zroot) are 17G in size, while *zpool list* shows that zroot
> has a size of 5.97G (which is the initial size of the virtual
> machine's disks, before I resized them).
[...]
> Does anyone have a better idea as to what I should do in order to make
> *zpool* see all the available space of the partitions it is using?

Hi,

Have you tried offline, online -e yet? I have successfully done what
you are trying, with physically larger drives.

If I understand your layout right, you should be able to do the
following (as root):

zpool offline zroot ada0p2
zpool online -e zroot ada0p2

# Wait till everything settles and looks okay again, monitoring
# zpool status

# After all is okay again:

zpool offline zroot ada1p2
zpool online -e zroot ada1p2

At this point your zpool should have grown to the size of its
underlying partitions. It worked for me; my system was 8-STABLE at the
time. The very same system has been upgraded to 9.0-RELEASE in the
meantime, without any problems.

Marco

--
Gisteren is het niet gelukt.
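[After both mirror members have been taken through offline / online -e, the result can be checked with the thread's pool name:]

```sh
# SIZE should now reflect the 17G partitions rather than the old 5.97G.
zpool list zroot

# Both mirror members should show ONLINE again after each online -e.
zpool status zroot
```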