Habony, Zsolt
2012-Jul-25 15:49 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
Hello,

There is a feature of ZFS (autoexpand, or "zpool online -e") that lets it consume an increased LUN immediately and grow the zpool. That would be a very useful (vital) feature in an enterprise environment.

However, when I tried to use it, it did not work. The LUN was expanded and the new size is visible in format, but the zpool did not grow. I found a bug, SUNBUG:6430818 (Solaris Does Not Automatically Handle an Increase in LUN Size). Bad luck.

A patch exists (148098), but it is _not_ part of the recommended patch set, so my fresh Solaris 10 U9 install with the latest patch set still has the problem. (Strange that this problem is not considered high impact ...)

The bug mentions a workaround: zpool export, "Re-label the LUN using format(1m) command.", zpool import.

Can you please help me with that: what does "re-label" mean? (I need to request downtime for the zone now, so I would like to prepare exactly what I have to do.)

I have used the format utility thousands of times for organizing partitions, but I have no idea how I would "relabel" a disk. Also, I did not use format to label these disks; I gave the LUN to zpool directly, and I would not dare to touch or resize any partition with format, not knowing what zpool expects to see there.

Have you experienced this problem, and do you know how to grow a zpool after a LUN increase?

Thank you in advance,
Zsolt Habony
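(A minimal sketch of the workflow being described, for readers hitting the same issue; "mypool" and the device name are placeholders, and this assumes a system that already has the fix discussed later in this thread:)

   # ... storage team grows the LUN; format now shows the new size ...
   # zpool online -e mypool c5t<WWN-of-LUN>d0   # expand the vdev onto the new space
   # zpool list mypool                          # SIZE should now reflect the larger LUN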
Sašo Kiselkov
2012-Jul-25 16:06 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
On 07/25/2012 05:49 PM, Habony, Zsolt wrote:
> Can you please help me with that: what does "re-label" mean?
> [...]

"Relabel" means simply running the labeling command in the format utility after you've made changes to the slices. As long as you keep the start cluster of a slice the same and don't shrink it, nothing bad should happen.

Are you doing this on a root pool?

Cheers,
--
Saso
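(One possible format session for the relabel described above, only a sketch; the exact prompts and steps depend on the label type and Solaris release:)

   # format
   ... select the expanded LUN from the menu ...
   format> type
   ... choose "0. Auto configure" so format picks up the new geometry ...
   format> partition
   partition> 0
   ... keep the same starting sector, enter the new, larger size ...
   partition> label
   partition> quit
   format> quit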
Cindy Swearingen
2012-Jul-25 16:35 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
Hi--

Patches are available to fix this, so I would suggest that you request them from MOS support. This fix fell through the cracks; we tried really hard to get it into the current Solaris 10 release, but sometimes things don't work in your favor. The patches are available, though.

Relabeling disks on a live pool is not a recommended practice, so let's review other options, but first some questions:

1. Is this a redundant pool?
2. Do you have an additional LUN (of equivalent size) that you could use as a spare?

What you could do is replace the existing LUN with a larger LUN, if one is available. Then reattach the original LUN and detach the spare LUN, but this depends on your pool configuration.

If requesting the patches is not possible and you don't have a spare LUN, then please contact me directly. I might be able to walk you through a more manual process.

Thanks,

Cindy

On 07/25/12 09:49, Habony, Zsolt wrote:
> [...]
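(A sketch of the swap sequence outlined above, with placeholder pool and device names; as noted, whether this is possible depends on the pool configuration:)

   # zpool replace mypool c5t<old-LUN>d0 c5t<larger-LUN>d0
   ... wait for the resilver to finish (zpool status), then have the storage team expand the original LUN ...
   # zpool attach mypool c5t<larger-LUN>d0 c5t<old-LUN>d0
   ... wait for the resilver to finish, then ...
   # zpool detach mypool c5t<larger-LUN>d0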
Habony, Zsolt
2012-Jul-25 22:14 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
Thank you for your replies.

First, sorry for the misleading info. Patch 148098-03 is indeed not included in the recommended set, but trying to download it shows that 147440-15 obsoletes it, and 147440-19 is included in the latest recommended patch set. So time solves the problem by another route.

Just for fun, my case was:

A standard LUN used as a ZFS filesystem, no redundancy (the storage array already provides it), and no partitioning on my side; the disk was given directly to zpool.

# zpool status xxxxxxxxxxxx-oraarch
  pool: xxxxxxxxxxxx-oraarch
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        xxxxxxxxxxxxxx-oraarch                   ONLINE       0     0     0
          c5t60060E800570B900000070B900006547d0  ONLINE       0     0     0

errors: No known data errors

Partitioning shows this:

partition> pr
Current partition table (original):
Total disk sectors available: 41927902 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256     19.99GB           41927902
  1 unassigned    wm                 0          0                   0
  2 unassigned    wm                 0          0                   0
  3 unassigned    wm                 0          0                   0
  4 unassigned    wm                 0          0                   0
  5 unassigned    wm                 0          0                   0
  6 unassigned    wm                 0          0                   0
  8   reserved    wm          41927903      8.00MB            41944286

As I mentioned, I did not partition it; "zpool create" did. I had absolutely no idea how to resize these partitions, where to get the number of available sectors, or how many should be skipped and reserved ...

So I backed up the 10 GB, destroyed the zpool, created the zpool again (the size was fine now), and restored the data.

The partition table looks like this now; I do not think I could have created it easily by hand.

partition> pr
Current partition table (original):
Total disk sectors available: 209700062 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256     99.99GB          209700062
  1 unassigned    wm                 0          0                   0
  2 unassigned    wm                 0          0                   0
  3 unassigned    wm                 0          0                   0
  4 unassigned    wm                 0          0                   0
  5 unassigned    wm                 0          0                   0
  6 unassigned    wm                 0          0                   0
  8   reserved    wm         209700063      8.00MB           209716446

Thank you for your help.
Zsolt Habony
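(One way to do the backup/destroy/recreate described above, using zfs send/receive; the original post does not say which backup method was actually used, and /backup is a placeholder that must of course live outside the pool being destroyed:)

   # zfs snapshot -r xxxxxxxxxxxx-oraarch@pre-resize
   # zfs send -R xxxxxxxxxxxx-oraarch@pre-resize > /backup/oraarch.zstream
   # zpool destroy xxxxxxxxxxxx-oraarch
   # zpool create xxxxxxxxxxxx-oraarch c5t60060E800570B900000070B900006547d0
   # zfs receive -dF xxxxxxxxxxxx-oraarch < /backup/oraarch.zstream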
Cindy Swearingen
2012-Jul-26 00:09 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
Hi--

I guess I can't begin to understand patching.

Yes, you provided a whole disk to zpool create, but it actually creates a part(ition) 0, as you can see in your output:

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256     19.99GB           41927902

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256     99.99GB          209700062

I'm sorry you had to recreate the pool. This *is* a must-have feature, and it is working as designed in Solaris 11 and, with patch 148098-3 (or whatever the equivalent is), in Solaris 10 as well.

Maybe it's time for me to recheck this feature in current Solaris 10 bits.

Thanks,

Cindy

On 07/25/12 16:14, Habony, Zsolt wrote:
> [...]
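(A quick way to look at the label that zpool create wrote, using the device name from this thread; on a whole-disk pool it should show the same layout as the partition> output above, with slice 0 as the data area and slice 8 as the small reserved area:)

   # prtvtoc /dev/rdsk/c5t60060E800570B900000070B900006547d0s0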
Hung-Sheng Tsao (LaoTsao) Ph.D
2012-Jul-26 12:48 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
imho, the 147440-21 does not list the bugs that are solved by 148098, even though it obsoletes 148098

Sent from my iPad

On Jul 25, 2012, at 18:14, "Habony, Zsolt" <zsolt.habony at hp.com> wrote:
> [...]
Habony, Zsolt
2012-Jul-26 13:04 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
The bug I mentioned is: SUNBUG:6430818 Solaris Does Not Automatically Handle an Increase in LUN Size
The patch for it is: 148098-03

Its README says:
Synopsis: Obsoleted by: 147440-15 SunOS 5.10: scsi patch

Looking at the current version, 147440-21, there is a reference to the incorporated patch, and to the bug ID as well:

(from 148098-03)

6228435 undecoded command in var/adm/messages - Error for Command: undecoded cmd 0x5a
6241086 format should allow label adjustment when disk/LUN size changes
6430818 Solaris needs mechanism of dynamically increasing LUN size

> -----Original Message-----
> From: Hung-Sheng Tsao (LaoTsao) Ph.D [mailto:laotsao at gmail.com]
> Sent: 2012. július 26. 14:49
> To: Habony, Zsolt
> Cc: Cindy Swearingen; Sašo Kiselkov; zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] online increase of zfs after LUN increase ?
>
> imho, the 147440-21 does not list the bugs that are solved by 148098, even though it obsoletes 148098
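(To check whether the obsoleting patch, or the original one, is already installed on a given Solaris 10 system, something like this should do:)

   # showrev -p | egrep '147440|148098'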
Cindy Swearingen
2012-Aug-01 18:00 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
Hi--

If the S10 patch is installed on this system ... can you remind us whether you ran the "zpool online -e" command after the LUN was expanded and the autoexpand property was set?

I hear that some storage doesn't generate the correct codes in response to a LUN expansion, so you might need to run this command even if autoexpand is set.

Thanks,

Cindy

On 07/26/12 07:04, Habony, Zsolt wrote:
> [...]
Habony, Zsolt
2012-Aug-01 21:09 UTC
[zfs-discuss] online increase of zfs after LUN increase ?
Hello,

I ran "zpool online -e" only. I have not set the autoexpand property (it is not set by default).

(My understanding was that "zpool online -e" is the controlled way of expansion, where you decide when the pool actually grows, while setting autoexpand=on is the "non-controlled", fully automatic way.)

I have no detailed description of the bug, as I have no access to the internal bug database, but it looked like the LUN size change was visible to Solaris (format indeed showed the bigger size for me), while the VTOC and partition sizes kept the old, smaller values, and I would have had to resize the partitions manually.

Zsolt

________________________________________
From: Cindy Swearingen [cindy.swearingen at oracle.com]
Sent: Wednesday, August 01, 2012 8:00 PM
To: Habony, Zsolt
Cc: Hung-Sheng Tsao (LaoTsao) Ph.D; zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] online increase of zfs after LUN increase ?

[...]
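(For completeness, checking and setting the property mentioned above, versus the one-shot command that was used; pool and device names are the ones from the earlier mail in this thread:)

   # zpool get autoexpand xxxxxxxxxxxx-oraarch      # property is off by default
   # zpool set autoexpand=on xxxxxxxxxxxx-oraarch   # fully automatic growth after a LUN resize
   ... or, the controlled way ...
   # zpool online -e xxxxxxxxxxxx-oraarch c5t60060E800570B900000070B900006547d0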