I'm running build 91 with ZFS boot. It seems that ZFS will not allow me to add an additional partition to the current root/boot pool because it is a bootable dataset. Is this a known issue that will be fixed, or a permanent limitation?

-Wyllys
On Tue, Jun 10, 2008 at 11:33:36AM -0700, Wyllys Ingersoll wrote:
> I'm running build 91 with ZFS boot. It seems that ZFS will not allow
> me to add an additional partition to the current root/boot pool
> because it is a bootable dataset. Is this a known issue that will be
> fixed or a permanent limitation?

The current limitation is that a bootable pool is limited to one disk, or one disk and a mirror. When your data is striped across multiple disks, that makes booting harder.

From a post to zfs-discuss about two months ago:

    ... we do have plans to support booting from RAID-Z. The design is
    still being worked out, but it's likely that it will involve a new
    kind of dataset which is replicated on each disk of the RAID-Z pool,
    and which contains the boot archive and other crucial files that the
    booter needs to read. I don't have a projected date for when it will
    be available. It's a lower-priority project than getting the install
    support for ZFS boot done.

--
Darren
> On Tue, Jun 10, 2008 at 11:33:36AM -0700, Wyllys Ingersoll wrote:
>> I'm running build 91 with ZFS boot. It seems that ZFS will not allow
>> me to add an additional partition to the current root/boot pool
>> because it is a bootable dataset. Is this a known issue that will be
>> fixed or a permanent limitation?
>
> The current limitation is that a bootable pool is limited to one disk,
> or one disk and a mirror. When your data is striped across multiple
> disks, that makes booting harder.
>
> From a post to zfs-discuss about two months ago:
>
>     ... we do have plans to support booting from RAID-Z. The design is
>     still being worked out, but it's likely that it will involve a new
>     kind of dataset which is replicated on each disk of the RAID-Z pool,
>     and which contains the boot archive and other crucial files that the
>     booter needs to read. I don't have a projected date for when it will
>     be available. It's a lower-priority project than getting the install
>     support for ZFS boot done.
>
> --
> Darren

If I read you right, with little or nothing extra, that would enable growing rpool as well, since what it would really do is ensure that /boot (and anything else critical) was mirrored even though the rest of the zpool was raidz or raidz2. It would also ensure that those critical items were _not_ spread across the stripe that would result from adding devices to an existing zpool. Of course, installation and upgrade would have to be able to recognize and deal with such exotica too.

That seems to pose a problem: having one dataset in the zpool mirrored while the rest is raidz and/or extended by a stripe implies to me that some space is more or less reserved for that purpose, or that such a dataset couldn't be snapshotted, or both; so I suppose there might be a smaller-than-total-capacity limit on the number of BEs possible.

http://en.wikipedia.org/wiki/TANSTAAFL ...
I'm not even trying to stripe it across multiple disks, I just want to add another partition (from the same physical disk) to the root pool. Perhaps that is a distinction without a difference, but my goal is to grow my root pool, not stripe it across disks or enable raid features (for now).

Currently, my root pool is using c1t0d0s4 and I want to add c1t0d0s0 to the pool, but can't.

-Wyllys
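For context, the refused operation is an ordinary zpool add of a second slice. A minimal illustration follows; the pool name "rpool" is an assumption (the thread never names Wyllys's root pool), and the error text is paraphrased rather than quoted verbatim from build 91:

    # attempt to grow the root pool with another slice from the same disk
    zpool add rpool c1t0d0s0
    # on a bootable pool this is rejected with an error along the lines of
    # "cannot add to 'rpool': root pool can not have multiple vdevs"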
Wyllys Ingersoll wrote:
> I'm not even trying to stripe it across multiple disks, I just want to
> add another partition (from the same physical disk) to the root pool.
> Perhaps that is a distinction without a difference, but my goal is to
> grow my root pool, not stripe it across disks or enable raid features
> (for now).
>
> Currently, my root pool is using c1t0d0s4 and I want to add c1t0d0s0
> to the pool, but can't.
>
> -Wyllys

DANGER: Uncharted territory!!!

That said, if the space on the disk (for the two partitions) is contiguous (which doesn't appear to be true in your case), or could be made contiguous by moving some other slice out of the way, then there is one way you should be able to grow the root pool. (Note: I haven't tried this, there is a chance for human error to mess things up even if it will work, and there's some chance it won't work even if you do it perfectly.) Delete the new (second) partition and redefine the original partition to extend across the space of both. Once that's done, a "zpool replace c1t0d0sX c1t0d0sX" should notify ZFS that the slice is bigger, and it will grow the pool to match.

You have s4 and s0, so I bet the space is not contiguous, and I'd guess the free space is earlier on the disk, not later. You might be able to get around that by mirroring s4 to s0 first and then detaching s4, so that you're only using s0 at the beginning of the disk... but that's just more changes that could introduce problems.

Needless to say, I wouldn't try this on a system I really needed without:
1) Really good backups!
and possibly,
2) Trying it out first on a virtual machine, or on different hardware.

Personally, unless I really wanted to prove I could do it, I'd just back up and reinstall. ;) Sorry.

-Kyle
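For anyone following along, the sequence Kyle describes would look roughly like the sketch below. It is only a sketch: the pool name "rpool" is assumed, the format(1M) work is summarized in comments, and none of it has been tested on a live root pool:

    # 1. In format(1M) -> partition, delete the now-unused slice and
    #    extend c1t0d0s4 over the freed cylinders (only safe if that
    #    space is contiguous with s4).
    # 2. Replace the slice with itself so ZFS re-reads its size:
    zpool replace rpool c1t0d0s4 c1t0d0s4
    # 3. Check whether the pool picked up the extra space (a reboot
    #    may be needed, as noted later in the thread):
    zpool list rpool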
> I'm not even trying to stripe it across multiple disks, I just want to
> add another partition (from the same physical disk) to the root pool.
> Perhaps that is a distinction without a difference, but my goal is to
> grow my root pool, not stripe it across disks or enable raid features
> (for now).
>
> Currently, my root pool is using c1t0d0s4 and I want to add c1t0d0s0
> to the pool, but can't.
>
> -Wyllys

Right, that's how it is right now (which the other guy seemed to be suggesting might change eventually, but nobody knows when, because it's just not that important compared to other things).

AFAIK, if you could shrink the partition whose data is after c1t0d0s4 on the disk, you could grow c1t0d0s4 by that much, and I _think_ ZFS would pick up the growth of the device automatically. (UFS partitions can be grown like that, or by being on an SVM or VxVM volume that's grown, but then one has to run a command specific to UFS to grow the filesystem to use the additional space.) I think zpools are supposed to grow automatically if SAN LUNs are grown, and this should be a similar situation anyway.

But if you can do that, and want to try it, just be careful. And of course you couldn't shrink it again, either.
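As a point of comparison, the UFS-specific step Richard alludes to looks something like the line below; the metadevice and mount point are made up for illustration, and the contrast is that ZFS needs no equivalent command once the underlying device grows:

    # grow a mounted UFS filesystem after its SVM volume has been enlarged
    growfs -M /export /dev/md/rdsk/d10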
Luckily, my system had a pair of identical 232GB disks. The second wasn't yet used, so by juggling mirrors (create three mirrors, detach the one to change, etc.), I was able to reconfigure my disks more to my liking, all without a single reboot or loss of data. I now have two pools: a 20GB root pool and a 210GB "other" pool, each mirrored on the other disk.

If not for the extra disk and the wonderful zfs snapshot/send/receive feature, it would have taken a lot more time and aggravation to get it straightened out.

-Wyllys
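A rough sketch of the kind of shuffle described here, for readers trying to picture it. The pool names, slice numbers, and dataset names are all invented, and the exact order of steps Wyllys used is not spelled out in the post:

    # mirror the existing root pool onto the matching slice of the spare disk
    zpool attach rpool c1t0d0s4 c1t1d0s4
    # build the new "other" pool on a slice of the spare disk
    zpool create tank c1t1d0s5
    # copy the data over with snapshot/send/receive
    zfs snapshot -r rpool/export@migrate
    zfs send -R rpool/export@migrate | zfs receive -d tank
    # then detach and re-attach mirrors until both pools end up mirrored
    # across the two disks (details depend on the original layout)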
On Wed, 2008-06-11 at 07:40 -0700, Richard L. Hamilton wrote:
> AFAIK, if you could shrink the partition whose data is after
> c1t0d0s4 on the disk, you could grow c1t0d0s4 by that much,
> and I _think_ zfs would pick up the growth of the device automatically.

This works. ZFS doesn't notice the size increase until you reboot.

I've been installing systems over the past year with a slice arrangement intended to make it easy to go to ZFS root:

  s0  ZFS pool, at start of disk
  s1  swap
  s3  UFS boot environment #1
  s4  UFS boot environment #2
  s7  SVM metadb (if mirrored root)

I was happy to discover that this paid off. Once I upgraded a BE to nv_90 and was running on it, it was a matter of:

  lucreate -p $pool -n nv_90zfs
  luactivate nv_90zfs
  init 6          (reboot)
  ludelete <other BEs>
  format
    format> partition
    <delete slices other than s0>
    <grow s0 to full disk>
  reboot

and you're all ZFS all the time.

- Bill
I had a similar configuration until my recent re-install to snv_91. Now I have just two ZFS pools: one for root+boot (big enough to hold multiple BEs and do Live Upgrades) and another for the rest of my data.

-Wyllys