Jordi Espasa Clofent
2012-May-03 05:44 UTC
[zfs-discuss] autoexpand in a physical disk with 2 zpool
Hi,
I have a Solaris 10 Update 10 box with one disk that is used for two
different zpools:
root@sct-jordi-02:~# cat /etc/release
Oracle Solaris 10 8/11 s10x_u10wos_17b X86
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights
reserved.
Assembled 23 August 2011
root@sct-jordi-02:~# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <DEFAULT cyl 7829 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
Specify disk (enter its number): Specify disk (enter its number):
root@sct-jordi-02:~# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
opt 290M 29.5G 0 0 213 5.33K
c0t0d0s7 290M 29.5G 0 0 213 5.33K
---------- ----- ----- ----- ----- ----- -----
rpool 13.7G 16.3G 0 3 14.4K 53.8K
c0t0d0s0 13.7G 16.3G 0 3 14.4K 53.8K
---------- ----- ----- ----- ----- ----- -----
root@sct-jordi-02:~# zpool get all rpool opt
NAME PROPERTY VALUE SOURCE
opt size 29.8G -
opt capacity 0% -
opt altroot - default
opt health ONLINE -
opt guid 13450764434721172659 default
opt version 29 default
opt bootfs - default
opt delegation on default
opt autoreplace off default
opt cachefile - default
opt failmode wait default
opt listsnapshots on default
opt autoexpand off default
opt free 29.5G -
opt allocated 290M -
opt readonly off -
rpool size 30G -
rpool capacity 45% -
rpool altroot - default
rpool health ONLINE -
rpool guid 16899781381017818003 default
rpool version 29 default
rpool bootfs rpool/ROOT/s10_u10 local
rpool delegation on default
rpool autoreplace off default
rpool cachefile - default
rpool failmode continue local
rpool listsnapshots on default
rpool autoexpand on local
rpool free 16.3G -
rpool allocated 13.7G -
rpool readonly off -
Note, as you can see, slice 0 is used for 'rpool' and slice 7 is used
for 'opt'. The autoexpand property is enabled on 'rpool' but disabled
on 'opt'.
This machine is a virtual one (VMware), so I can enlarge the disk easily
if I need to. Let's say I enlarge the disk by 10 GB:
# Before enlarging the disk
root@sct-jordi-02:~# echo | format ; df -h
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <DEFAULT cyl 7829 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
Specify disk (enter its number): Specify disk (enter its number):
Filesystem size used avail capacity Mounted on
rpool/ROOT/s10_u10 30G 5.7G 7.3G 44% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 10G 328K 10G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
13G 5.7G 7.3G 44% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 10G 36K 10G 1% /tmp
swap 10G 40K 10G 1% /var/run
rpool/export 30G 32K 7.3G 1% /export
rpool/export/home 30G 31K 7.3G 1% /export/home
opt 29G 290M 29G 1% /opt
opt/zones 29G 31K 29G 1% /opt/zones
rpool 30G 42K 7.3G 1% /rpool
# After enlarging the disk by 10 GB
root@sct-jordi-02:~# devfsadm
root@sct-jordi-02:~# echo | format ; df -h
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <DEFAULT cyl 7829 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
Specify disk (enter its number): Specify disk (enter its number):
Filesystem size used avail capacity Mounted on
rpool/ROOT/s10_u10 30G 5.7G 7.3G 44% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 10G 328K 10G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
13G 5.7G 7.3G 44% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 10G 44K 10G 1% /tmp
swap 10G 40K 10G 1% /var/run
rpool/export 30G 32K 7.3G 1% /export
rpool/export/home 30G 31K 7.3G 1% /export/home
opt 29G 290M 29G 1% /opt
opt/zones 29G 31K 29G 1% /opt/zones
rpool 30G 42K 7.3G 1% /rpool
The size of the rpool zpool stays the same.
How can I get rpool to pick up the new space?
PS. I know perfectly well how to expand any zpool by just adding a new
device; actually I think that is even better, but that's not the point.
--
I will face my fear. I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the fear has gone there will be nothing. Only I will remain.
Darren Kenny
2012-May-03 07:06 UTC
[zfs-discuss] autoexpand in a physical disk with 2 zpool
Hi Jordi,

After 'enlarging the disk', did you update the fdisk partitioning to see
the new cylinders, and then update the VTOC label to see the new space
too?

In fdisk you will need to delete the partition and create it again; as
long as you keep the start cylinder the same and don't make the
partition smaller (unlikely in this case), all should be OK. Similarly,
for the VTOC label you will need to edit the specific slice information.

If you don't do this, ZFS will not see the new space, despite the disk
being bigger.

HTH,

Darren.
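To make those steps concrete, here is a rough sketch of the sequence,
assuming the device names from this thread; the fdisk and format menus
are interactive, so verify the start cylinder and slice boundaries on
your own labels before writing anything:

# 1. Recreate the Solaris fdisk partition at the new, larger size:
#    delete it and create it again, keeping the same start cylinder.
root@sct-jordi-02:~# fdisk /dev/rdsk/c0t0d0p0

# 2. Relabel the VTOC so the slices cover the new cylinders
#    (format -> partition -> adjust the slice sizes -> label).
root@sct-jordi-02:~# format c0t0d0

# 3. With autoexpand=on the pool should then grow by itself; if it
#    does not, ask ZFS to re-examine the enlarged slice explicitly
#    (shown here for rpool on s0; use whichever slice you grew).
root@sct-jordi-02:~# zpool online -e rpool c0t0d0s0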
2012-05-03 9:44, Jordi Espasa Clofent wrote:
> Note, as you can see, slice 0 is used for 'rpool' and slice 7 is used
> for 'opt'. The autoexpand property is enabled on 'rpool' but disabled
> on 'opt'.
>
> This machine is a virtual one (VMware), so I can enlarge the disk
> easily if I need to. Let's say I enlarge the disk by 10 GB.
>
> PS. I know perfectly well how to expand any zpool by just adding a new
> device; actually I think that is even better, but that's not the point.

The rpool has some limitations compared to other pools; for example, it
cannot be concatenated or striped across several devices. Each component
of the rpool (a single device, or one side of a mirror) must be
self-sufficient in case of catastrophic boots, so adding a new device to
rpool won't help.

As for autoexpansion, it works in place: if the device which contains
the pool becomes larger, the pool can grow. In the case of rpool, that
device is the c0t0d0s0 slice. After you enlarge the disk, you should
also use tools like format, fdisk and/or parted to enlarge the Solaris
partition (in the outer MBR partition table), then relocate the 'opt'
pool's sectors towards the end of the disk while that pool is exported
and not active, and finally relabel the Solaris slices with format so as
to change s7's "address" and expand the s0 slice. Then the single device
backing rpool becomes bigger and it should autoexpand.

The tricky part is the relocation of opt. I think you can do this with a
series of dd invocations moving chunks of, say, 1 GB at a time, starting
from the end of its slice (end minus 1 GB), because by the time you're
done, 2/3 of the original opt slice would have been overwritten by its
own relocated data.

It would likely be simpler and safer to just back up the data from your
nearly empty opt pool (i.e. zfs send | zfs recv its datasets into
rpool), destroy opt, relabel the Solaris slices with format, expand
rpool, and create a new opt. You should back it up anyway before such
dangerous experiments. But for the sheer excitement of the experiment,
you can give the dd series a try and tell us how it goes.

HTH,
//Jim
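A minimal sketch of the safer back-up-and-recreate route, assuming the
pool layout from this thread; the snapshot name @move and the holding
dataset rpool/optsave are purely illustrative, and step 2 is the manual
fdisk/format relabeling described earlier:

# 1. Copy the nearly empty opt pool into rpool.
root@sct-jordi-02:~# zfs snapshot -r opt@move
root@sct-jordi-02:~# zfs create rpool/optsave
root@sct-jordi-02:~# zfs send opt@move | zfs receive rpool/optsave/opt
root@sct-jordi-02:~# zfs send opt/zones@move | zfs receive rpool/optsave/zones

# 2. Destroy opt, then relabel: move s7 towards the end of the
#    enlarged disk and grow s0 (see the fdisk/format steps above).
root@sct-jordi-02:~# zpool destroy opt

# 3. Expand rpool into the grown s0, rebuild opt and restore it.
root@sct-jordi-02:~# zpool online -e rpool c0t0d0s0
root@sct-jordi-02:~# zpool create opt c0t0d0s7
root@sct-jordi-02:~# zfs send rpool/optsave/zones@move | zfs receive opt/zones
# Anything that lived directly in /opt (about 290 MB here) can be
# copied back from /rpool/optsave/opt with cp or rsync, since a full
# stream cannot simply be received on top of the new pool's root
# dataset.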