Ray Van Dolson
2010-Jul-02 06:18 UTC
[zfs-discuss] Using a zvol from your rpool as zil for another zpool
We have a server with a couple X-25E's and a bunch of larger SATA
disks.

To save space, we want to install Solaris 10 (our install is only about
1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL
attached to a zpool created from the SATA drives.

Currently we do this by installing the OS using SVM+UFS (to mirror the
OS between the two SSD's) and then using the remaining space on a slice
as ZIL for the larger SATA-based zpool.

However, SVM+UFS is more annoying to work with as far as LiveUpgrade is
concerned. We'd love to use a ZFS root, but that requires that the
entire SSD be dedicated as an rpool, leaving no space for ZIL. Or does
it?

It appears that we could do a:

# zfs create -V 24G rpool/zil

On our rpool and then:

# zpool add satapool log /dev/zvol/dsk/rpool/zil

(I realize 24G is probably far more than a ZIL device will ever need.)

As rpool is mirrored, this would also take care of redundancy for the
ZIL.

This lets us have a nifty ZFS rpool for simplified LiveUpgrades and a
fast SSD-based ZIL for our SATA zpool as well...

What are the downsides to doing this? Will there be a noticeable
performance hit?

I know I've seen this discussed here before, but wasn't able to come up
with the right search terms...

Thanks,
Ray
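P.S. For reference, the whole sequence would look something like the
following (pool names as above; a much smaller zvol would also do,
since the ZIL only ever needs to hold a few seconds' worth of in-flight
synchronous writes):

# zfs create -V 24G rpool/zil
# zpool add satapool log /dev/zvol/dsk/rpool/zil
# zpool status satapool

The log device should show up under a separate "logs" section in the
zpool status output.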
Ray Van Dolson
2010-Jul-02 07:05 UTC
[zfs-discuss] Using a zvol from your rpool as zil for another zpool
> However, SVM+UFS is more annoying to work with as far as LiveUpgrade is
> concerned. We'd love to use a ZFS root, but that requires that the
> entire SSD be dedicated as an rpool leaving no space for ZIL. Or does
> it?
>
> It appears that we could do a:
>
> # zfs create -V 24G rpool/zil
>
> On our rpool and then:
>
> # zpool add satapool log /dev/zvol/dsk/rpool/zil
>
> (I realize 24G is probably far more than a ZIL device will ever need)
>
> As rpool is mirrored, this would also take care of redundancy for the
> ZIL as well.
>
> This lets us have a nifty ZFS rpool for simplified LiveUpgrades and a
> fast SSD-based ZIL for our SATA zpool as well...
>
> What are the downsides to doing this? Will there be a noticeable
> performance hit?
>
> I know I've seen this discussed here before, but wasn't able to come up
> with the right search terms...

Well, after doing a little better on my searches, it sounds like -- at
least for cache/L2ARC on zvols -- some race conditions can pop up, and
this isn't necessarily the most robust or tested configuration.
Doesn't sound like something I'd want to do in production.

Perhaps the better option is to have multiple Solaris FDISK partitions
set up. This way I could still install my rpool to the first partition
and use the remaining partition as ZIL for the SATA zpool. This
obviously would only work on x86 systems.

Would multiple FDISK partitions be the most robust way to implement
this?

Ray
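P.S. One caveat worth checking before going down the multiple-FDISK-
partition road: Solaris has traditionally supported only a single
Solaris-type fdisk partition per disk, so two Solaris partitions on the
same SSD may not work as hoped -- slices within one partition, as
discussed later in this thread, are probably the safer route. The
current layout can be dumped to stdout with something like (device name
made up; the p0 node refers to the whole disk):

# fdisk -W - /dev/rdsk/c0t0d0p0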
Ben Taylor
2010-Jul-02 10:40 UTC
[zfs-discuss] Using a zvol from your rpool as zil for another zpool
> We have a server with a couple X-25E's and a bunch of larger SATA
> disks.
>
> To save space, we want to install Solaris 10 (our install is only
> about 1.4GB) to the X-25E's and use the remaining space on the SSD's
> for ZIL attached to a zpool created from the SATA drives.
>
> Currently we do this by installing the OS using SVM+UFS (to mirror
> the OS between the two SSD's) and then using the remaining space on a
> slice as ZIL for the larger SATA-based zpool.
>
> However, SVM+UFS is more annoying to work with as far as LiveUpgrade
> is concerned. We'd love to use a ZFS root, but that requires that the
> entire SSD be dedicated as an rpool leaving no space for ZIL. Or does
> it?

For every system I have ever done ZFS root on, it's always been a slice
on a disk. As an example, we have an x4500 with 1TB disks. For that
root config, we are planning on something like 150G on s0 and the rest
on s3 -- s0 for the rpool, and s3 for the qpool. We didn't want to have
to deal with issues around flashing a huge volume, as we found out with
our other x4500 with 500GB disks.

AFAIK, it's only non-rpool disks that use the "whole disk", and I doubt
there's some sort of specific feature with an SSD, but I could be
wrong.

I like your idea of a reasonably sized root rpool and the rest used for
the ZIL. But if you're going to do LU, you should probably take a good
look at how much space you need for the clones and snapshots on the
rpool.

Ben
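P.S. In your case that might look something like the following (device
names made up; the rpool mirror on the two s0 slices is what the
installer creates):

# zpool add satapool log mirror c0t0d0s3 c0t1d0s3

Mirroring the two s3 slices gives you the same log redundancy you were
after with the zvol on the mirrored rpool.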
Ray Van Dolson
2010-Jul-02 14:30 UTC
[zfs-discuss] Using a zvol from your rpool as zil for another zpool
On Fri, Jul 02, 2010 at 03:40:26AM -0700, Ben Taylor wrote:
> > We have a server with a couple X-25E's and a bunch of larger SATA
> > disks.
> >
> > To save space, we want to install Solaris 10 (our install is only
> > about 1.4GB) to the X-25E's and use the remaining space on the
> > SSD's for ZIL attached to a zpool created from the SATA drives.
> >
> > Currently we do this by installing the OS using SVM+UFS (to mirror
> > the OS between the two SSD's) and then using the remaining space on
> > a slice as ZIL for the larger SATA-based zpool.
> >
> > However, SVM+UFS is more annoying to work with as far as
> > LiveUpgrade is concerned. We'd love to use a ZFS root, but that
> > requires that the entire SSD be dedicated as an rpool leaving no
> > space for ZIL. Or does it?
>
> For every system I have ever done ZFS root on, it's always been a
> slice on a disk. As an example, we have an x4500 with 1TB disks. For
> that root config, we are planning on something like 150G on s0 and
> the rest on s3 -- s0 for the rpool, and s3 for the qpool. We didn't
> want to have to deal with issues around flashing a huge volume, as we
> found out with our other x4500 with 500GB disks.
>
> AFAIK, it's only non-rpool disks that use the "whole disk", and I
> doubt there's some sort of specific feature with an SSD, but I could
> be wrong.
>
> I like your idea of a reasonably sized root rpool and the rest used
> for the ZIL. But if you're going to do LU, you should probably take a
> good look at how much space you need for the clones and snapshots on
> the rpool.

Interesting. For some reason, I could have sworn that the Sol 10 U8
installer required you to use an entire disk for a ZFS rpool, so using
only part of the disk on a slice and leaving space for other uses
wasn't an option.

I'll revisit this though. Thanks for the reply.

Ray
Cindy Swearingen
2010-Jul-02 15:53 UTC
[zfs-discuss] Using a zvol from your rpool as zil for another zpool
Hi Ray,

In general, using components from one pool for another pool is
discouraged because this configuration can cause deadlocks. Using this
configuration for ZIL would probably work fine (with a performance hit
because of the volume) until something unforeseen goes wrong. This
config is untested by us.

I would recommend saving your sanity over saving disk space and keeping
the pools' components separate.

Thanks,

Cindy

On 07/02/10 00:18, Ray Van Dolson wrote:
> We have a server with a couple X-25E's and a bunch of larger SATA
> disks.
>
> To save space, we want to install Solaris 10 (our install is only
> about 1.4GB) to the X-25E's and use the remaining space on the SSD's
> for ZIL attached to a zpool created from the SATA drives.
>
> Currently we do this by installing the OS using SVM+UFS (to mirror
> the OS between the two SSD's) and then using the remaining space on a
> slice as ZIL for the larger SATA-based zpool.
>
> However, SVM+UFS is more annoying to work with as far as LiveUpgrade
> is concerned. We'd love to use a ZFS root, but that requires that the
> entire SSD be dedicated as an rpool leaving no space for ZIL. Or does
> it?
>
> It appears that we could do a:
>
> # zfs create -V 24G rpool/zil
>
> On our rpool and then:
>
> # zpool add satapool log /dev/zvol/dsk/rpool/zil
>
> (I realize 24G is probably far more than a ZIL device will ever need)
>
> As rpool is mirrored, this would also take care of redundancy for the
> ZIL as well.
>
> This lets us have a nifty ZFS rpool for simplified LiveUpgrades and a
> fast SSD-based ZIL for our SATA zpool as well...
>
> What are the downsides to doing this? Will there be a noticeable
> performance hit?
>
> I know I've seen this discussed here before, but wasn't able to come
> up with the right search terms...
>
> Thanks,
> Ray
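(A side note on backing out of such a configuration: on releases with
zpool version 19 or later, a dedicated log device can be removed again,
e.g.:

# zpool remove satapool /dev/zvol/dsk/rpool/zil

On older releases a log vdev cannot be removed once added, which makes
the warning above doubly worth heeding before experimenting.)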
Ray Van Dolson
2010-Jul-02 19:30 UTC
[zfs-discuss] Using a zvol from your rpool as zil for another zpool
On Fri, Jul 02, 2010 at 08:18:48AM -0700, Erik Ableson wrote:
> On 2 Jul 2010, at 16:30, Ray Van Dolson <rvandolson at esri.com> wrote:
>
> > On Fri, Jul 02, 2010 at 03:40:26AM -0700, Ben Taylor wrote:
> >>> We have a server with a couple X-25E's and a bunch of larger SATA
> >>> disks.
> >>>
> >>> To save space, we want to install Solaris 10 (our install is only
> >>> about 1.4GB) to the X-25E's and use the remaining space on the
> >>> SSD's for ZIL attached to a zpool created from the SATA drives.
> >>>
> >>> Currently we do this by installing the OS using SVM+UFS (to
> >>> mirror the OS between the two SSD's) and then using the remaining
> >>> space on a slice as ZIL for the larger SATA-based zpool.
> >>>
> >>> However, SVM+UFS is more annoying to work with as far as
> >>> LiveUpgrade is concerned. We'd love to use a ZFS root, but that
> >>> requires that the entire SSD be dedicated as an rpool leaving no
> >>> space for ZIL. Or does it?
> >>
> >> For every system I have ever done ZFS root on, it's always been a
> >> slice on a disk. As an example, we have an x4500 with 1TB disks.
> >> For that root config, we are planning on something like 150G on
> >> s0 and the rest on s3 -- s0 for the rpool, and s3 for the qpool.
> >> We didn't want to have to deal with issues around flashing a huge
> >> volume, as we found out with our other x4500 with 500GB disks.
> >>
> >> AFAIK, it's only non-rpool disks that use the "whole disk", and I
> >> doubt there's some sort of specific feature with an SSD, but I
> >> could be wrong.
> >>
> >> I like your idea of a reasonably sized root rpool and the rest
> >> used for the ZIL. But if you're going to do LU, you should
> >> probably take a good look at how much space you need for the
> >> clones and snapshots on the rpool.
> >
> > Interesting. For some reason, I could have sworn that the Sol 10 U8
> > installer required you to use an entire disk for a ZFS rpool, so
> > using only part of the disk on a slice and leaving space for other
> > uses wasn't an option.
> >
> > I'll revisit this though.
>
> It certainly works under OpenSolaris, but you might want to look into
> manually partitioning the drive to ensure that it's properly aligned
> on the 4k boundaries. Last time I did that, it showed me a tiny space
> before the manually created partition.
>
> Cheers,
>
> Erik

Well, everything worked fine. ZFS rpool on s0 and ZIL for another pool
on s3.

Unfortunately, I didn't end up doing the 4K block alignment. It doesn't
look like the "fdisk" keyword in JumpStart lets you specify this sort
of thing, but I probably could have pre-partitioned the disk from the
shell before running my JumpStart. Lessons learned.

Thanks all,
Ray
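P.S. For anyone repeating this, alignment is at least easy to check
after the fact. With 512-byte sectors, a slice is 4K-aligned when its
starting sector is a multiple of 8, and prtvtoc reports the starting
sector in the fourth column of each partition line (device name made
up):

# prtvtoc /dev/rdsk/c0t0d0s2 | \
    awk '$1 ~ /^[0-9]+$/ { print "slice", $1, (($4 % 8 == 0) ? "4K-aligned" : "NOT 4K-aligned") }'

Pre-labeling the disk with format/fmthard from the shell (or a
JumpStart begin script) before the install would be the place to fix
any misaligned starting sectors.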
James Dickens
2010-Jul-03 01:28 UTC
[zfs-discuss] Using a zvol from your rpool as zil for another zpool
On Fri, Jul 2, 2010 at 1:18 AM, Ray Van Dolson <rvandolson at esri.com> wrote:
> We have a server with a couple X-25E's and a bunch of larger SATA
> disks.
>
> To save space, we want to install Solaris 10 (our install is only
> about 1.4GB) to the X-25E's and use the remaining space on the SSD's
> for ZIL attached to a zpool created from the SATA drives.
>
> Currently we do this by installing the OS using SVM+UFS (to mirror
> the OS between the two SSD's) and then using the remaining space on a
> slice as ZIL for the larger SATA-based zpool.
>
> However, SVM+UFS is more annoying to work with as far as LiveUpgrade
> is concerned. We'd love to use a ZFS root, but that requires that the
> entire SSD be dedicated as an rpool leaving no space for ZIL. Or does
> it?
>
> It appears that we could do a:
>
> # zfs create -V 24G rpool/zil

I would avoid it for now. I have been bitten by a volume becoming
unavailable after multiple storms rolled through my area and the system
crashed multiple times. I had used a volume in the root pool as a ZIL
for another pool. While I was able to get those two pools back up after
hours of work, I still have no access to a raidz pool that has multiple
volumes used as L2ARC and log devices -- one of the volumes is
inaccessible and can't even be snapshotted to recover the data. So that
pool, with 1.5TB of data, sits unimportable until a way of disabling
log and L2ARC devices on pool import is implemented.

> On our rpool and then:
>
> # zpool add satapool log /dev/zvol/dsk/rpool/zil
>
> (I realize 24G is probably far more than a ZIL device will ever need)
>
> As rpool is mirrored, this would also take care of redundancy for the
> ZIL as well.
>
> This lets us have a nifty ZFS rpool for simplified LiveUpgrades and a
> fast SSD-based ZIL for our SATA zpool as well...
>
> What are the downsides to doing this? Will there be a noticeable
> performance hit?
>
> I know I've seen this discussed here before, but wasn't able to come
> up with the right search terms...
>
> Thanks,
> Ray
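(A follow-up note on the failure mode described above: later ZFS
releases added exactly the missing piece -- an import option that
tolerates an absent log device, at the cost of any transactions that
existed only in the lost log:

# zpool import -m tank

Here "tank" is a placeholder pool name, and the -m option is only
present on releases that support importing a pool with a missing log
device; it was not available at the time of this thread.)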