Hi,

I have a problem with expanding a zpool to reflect a change in the underlying hardware LUN. I've created a zpool on top of a 3Ware hardware RAID volume with a capacity of 2.7TB. I've since added disks to the hardware volume, expanding its capacity to 10TB. This change in capacity shows up in format:

       0. c0t0d0 <AMCC-9650SE-16M DISK-4.06-10.00TB>
          /pci@0,0/pci10de,375@e/pci13c1,1004@0/sd@0,0

When I do a prtvtoc /dev/dsk/c0t0d0, I get:

* /dev/dsk/c0t0d0 partition map
*
* Dimensions:
*     512 bytes/sector
* 21484142592 sectors
*  5859311549 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*           34        222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 5859294943 5859295198
       8     11    00 5859295199      16384 5859311582

The new capacity, unfortunately, shows up as inaccessible. I've tried exporting and importing the zpool, but the capacity is still not recognized. I kept seeing things online about "Dynamic LUN Expansion", but how do I do this?
--
This message posted from opensolaris.org
On Jun 3, 2009, at 19:37, Leonid Zamdborg wrote:

> The new capacity, unfortunately, shows up as inaccessible. I've
> tried exporting and importing the zpool, but the capacity is still
> not recognized. I kept seeing things online about "Dynamic LUN
> Expansion", but how do I do this?

What OS version are you running?
I'm running 2008.11.
--
This message posted from opensolaris.org
Leonid,

I will be integrating this functionality within the next week:

PSARC 2008/353 zpool autoexpand property
6475340 when lun expands, zfs should expand too

Unfortunately, these won't help you until they get pushed to OpenSolaris. The problem you're facing is that the partition table needs to be expanded to use the newly created space. This all happens automatically with my code changes, but if you want to do this now you'll have to change the partition table yourself and export/import the pool. Your other option is to wait until these bits show up in OpenSolaris.

Thanks,
George

Leonid Zamdborg wrote:
> Hi,
>
> I have a problem with expanding a zpool to reflect a change in the underlying hardware LUN. [...]
>
> The new capacity, unfortunately, shows up as inaccessible. I've tried exporting and importing the zpool, but the capacity is still not recognized. I kept seeing things online about "Dynamic LUN Expansion", but how do I do this?
> The problem you're facing is that the partition table needs to be
> expanded to use the newly created space. This all happens automatically
> with my code changes but if you want to do this you'll have to change
> the partition table and export/import the pool.

George,

Is there a reasonably straightforward way of doing this partition table edit with existing tools that won't clobber my data? I'm very new to ZFS, and didn't want to start experimenting with a live machine.
--
This message posted from opensolaris.org
Out of curiosity, would destroying the zpool and then importing the destroyed pool have the effect of recognizing the size change? Or does 'destroying' a pool simply label it as 'destroyed' and make no other changes...
--
This message posted from opensolaris.org
Leonid Zamdborg wrote:
> George,
>
> Is there a reasonably straightforward way of doing this partition table edit with existing tools that won't clobber my data? I'm very new to ZFS, and didn't want to start experimenting with a live machine.

Leonid,

What you could do is to write a program which calls efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a new label you will be able to export/import the pool and it will pick up the new size.

BTW, the LUN expansion project was just integrated today.

Thanks,
George
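P.S. A rough, untested sketch of what such a program might look like is below. The file name, device path, and messages are placeholders rather than anything official; it assumes libefi is available (build with -lefi) and that the pool has been exported before the label is touched.

/*
 * Untested sketch: rewrite the EFI label so it covers the whole LUN.
 * Build (assumed command): cc -o growlabel growlabel.c -lefi
 * Run against the raw device with the pool exported, e.g.:
 *   ./growlabel /dev/rdsk/c0t0d0
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/vtoc.h>
#include <sys/efi_partition.h>

int
main(int argc, char **argv)
{
	int fd;
	int rc;

	if (argc != 2) {
		(void) fprintf(stderr, "usage: %s /dev/rdsk/cXtYdZ\n", argv[0]);
		return (1);
	}

	/* Open the raw device read/write so the label can be rewritten. */
	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror("open");
		return (1);
	}

	/*
	 * efi_use_whole_disk(3EXT) grows the data slice to the end of the
	 * (now larger) disk and relocates the reserved slice after it.
	 */
	rc = efi_use_whole_disk(fd);
	(void) close(fd);

	if (rc != 0) {
		(void) fprintf(stderr, "efi_use_whole_disk failed (%d)\n", rc);
		return (1);
	}

	(void) printf("%s: label now covers the whole disk\n", argv[1]);
	return (0);
}

Once the label has been rewritten, importing the pool again should make it show up at the new size.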
On Sun, Jun 07, 2009 at 10:38:29AM -0700, Leonid Zamdborg wrote:
> Out of curiosity, would destroying the zpool and then importing the
> destroyed pool have the effect of recognizing the size change? Or
> does 'destroying' a pool simply label it as 'destroyed' and make
> no other changes...

It would be unnecessary. ZFS can handle size increases just fine without any more than an export/import in most cases. The problem is that the OS doesn't always make it so simple. The label on the disk needs to be changed to reflect the correct size of the LUN, then any slice used on the disk needs to be changed to see the increase. Destroying the zpool doesn't get the label rewritten.

You can destroy the label today, create a new label, then make slice 0 start at the same location but encompass the entire disk. When done, ZFS should import and see the new space.

--
Darren
> What you could do is to write a program which calls
> efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a
> new label you will be able to export/import the pool

Awesome. Worked for me, anyways. .c file attached.

Although I did a "zpool export" before opening the device and calling that function. I'm generally not one to mess with labels on a live filesystem.
--
This message posted from opensolaris.org

-------------- next part --------------
A non-text attachment was scrubbed...
Name: uwd.c
Type: application/octet-stream
Size: 1175 bytes
Desc: not available
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090611/947c1b40/attachment.obj>