Hi,

as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take an X25-V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with a max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation."

This is for the upcoming 2010.03 release; for now I'm testing with b134. What is the most appropriate way to accomplish this?

The Caiman installer allows you to control the size of the partition on the boot disk, but it doesn't allow you (at least I couldn't figure out how) to control the size of the slices. So you end up with slice 0 filling the entire partition.

This leaves you with two options: create a second partition, or start a complex process of backing up the root pool, reslicing the first partition, restoring the root pool, and praying that the system will boot again.

I tried the first, knowing that multiple partitions aren't recommended, but I couldn't get ZFS to add the second partition as L2ARC. It simply said that it wasn't supported.

Before I try the second option, perhaps somebody can give some directions on how to accomplish a shared rpool and L2ARC on a single SSD.

Regards,

Frederik
F. Wessels wrote:
> Hi,
>
> as Richard Elling wrote earlier:
> "For more background, low-cost SSDs intended for the boot market are
> perfect candidates. Take an X25-V @ 40GB and use 15-20 GB for root
> and the rest for an L2ARC."
> [...]
> Before I try the second option, perhaps somebody can give some directions
> on how to accomplish a shared rpool and L2ARC on a single SSD.

As I think was possibly mentioned before on this thread, what you probably want to do is either:

(a) create a zvol inside the existing rpool, then add the zvol as an L2ARC, or
(b) create a file in one of the rpool filesystems, and add that as the L2ARC.

Likely, (a) is the better option. So, go ahead and give the entire boot SSD to the installer to create an rpool of the entire disk, then zvol off a section to be used as the L2ARC.

-- 
Erik Trimble
Java System Support
On Mon, Mar 29, 2010 at 01:10:22PM -0700, F. Wessels wrote:
> The Caiman installer allows you to control the size of the partition
> on the boot disk but it doesn't allow you (at least I couldn't
> figure out how) to control the size of the slices. So you end up with
> slice 0 filling the entire partition.
>
> Now this leaves you with two options, create a second partition or
> start a complex process of backing up the root pool, reslicing the
> first partition, restore the root pool and pray that the system will
> boot again.

You can (a sketch of these steps follows below):
 - install to a partition that's the size you want rpool
 - expand the partition to the full disk
 - leave the s0 slice for rpool alone
 - make another slice for l2arc in the newly available space

Or you can:
 - use a zvol for the l2arc and forget all this partitioning crap like
   zfs intended.

-- 
Dan.
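A minimal sketch of the slice-based steps above, assuming a data pool named "tank" and the SSD at c0t0d0 (both names hypothetical); the fdisk and format steps are interactive, and this outline is untested:

Install into a Solaris2 partition of the size you want for rpool, leaving the rest of the SSD unallocated, then:

# fdisk /dev/rdsk/c0t0d0p0
    grow the Solaris2 partition to cover the full disk
# format -d c0t0d0
    in the partition menu, leave s0 (rpool) alone and label a new
    slice, e.g. s3, covering the newly available cylinders
# zpool add tank cache c0t0d0s3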
On Tue, Mar 30, 2010 at 03:13:45PM +1100, Daniel Carosone wrote:
> You can:
> - install to a partition that's the size you want rpool
> - expand the partition to the full disk

 - expand the s2 slice to the full disk

> - leave the s0 slice for rpool alone
> - make another slice for l2arc in the newly available space

Emacs ate that extra line, I swear.

-- 
Dan.
Thanks for the reply.

I didn't get very much further.

Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess. I would simply install OpenSolaris on the first disk and add the second SSD to the data pool with:

# zpool add mpool cache cxtydz

Notice that no slices or partitions were used. But I don't have space for two devices, so I have to deal with slices and partitions.

I did another clean install into a 12GB partition, leaving 18GB free. I tried parted to resize the partition, but it said that resizing (solaris2) partitions wasn't implemented. I tried fdisk, but no luck either. I tried send and receive: create a new partition and slices, restore rpool into slice 0, run installgrub, but it wouldn't boot anymore.

Can anybody give a summary of commands/steps on how to accomplish a bootable rpool and L2ARC on an SSD? Preferably for the x86 platform.
F. Wessels wrote:
> [...]
> Can anybody give a summary of commands/steps on how to accomplish a
> bootable rpool and L2ARC on an SSD? Preferably for the x86 platform.

Look up zvols, as this is what you want to use, NOT partitions (for the many reasons you've encountered).

In essence, do a normal install, using the ENTIRE disk for your rpool.

Then create a zvol in the rpool:

# zfs create -V 8GB rpool/zvolname

Add this zvol as the cache device (L2ARC) for your other pool:

# zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname

The default block size for zvols is 8k; I'd be interested in having someone test out other sizes, to see which would be best for an L2ARC device.

-- 
Erik Trimble
Java System Support
On 30/03/2010 10:05, Erik Trimble wrote:
> F. Wessels wrote:
>> [...]
>> Can anybody give a summary of commands/steps on how to accomplish a
>> bootable rpool and L2ARC on an SSD?
>
> Look up zvols, as this is what you want to use, NOT partitions (for the
> many reasons you've encountered).

In this case partitions are the only way this will work.

> In essence, do a normal install, using the ENTIRE disk for your rpool.
>
> Then create a zvol in the rpool:
>
> # zfs create -V 8GB rpool/zvolname
>
> Add this zvol as the cache device (L2arc) for your other pool
>
> # zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname

That won't work. L2ARC devices cannot be a ZVOL of another pool, and they can't be a file either. An L2ARC device must be a physical device.

-- 
Darren J Moffat
Darren J Moffat wrote:
> On 30/03/2010 10:05, Erik Trimble wrote:
>> [...]
>> # zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname
>
> That won't work. L2ARC devices cannot be a ZVOL of another pool, and
> they can't be a file either. An L2ARC device must be a physical device.

I could have sworn I did this with a zvol a while ago. Maybe that was for something else...

-- 
Erik Trimble
Java System Support
On 30/03/2010 10:13, Erik Trimble wrote:
>>> Add this zvol as the cache device (L2arc) for your other pool
>>>
>>> # zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname
>>
>> That won't work. L2ARC devices cannot be a ZVOL of another pool, ...
>
> I could have sworn I did this with a zvol a while ago. Maybe that was
> for something else...

The check for the L2ARC device being a block device has always been there.

-- 
Darren J Moffat
Darren J Moffat wrote:
> [...]
> The check for the L2ARC device being a block device has always been
> there.

I just tried a couple of things on one of my test machines, and I think I know where I was mis-remembering it from: you can add a file or zvol as a ZIL device.

Frankly, I'm a little confused by this. I would think that you would have consistent behavior between ZIL and L2ARC devices - either they both can be a file/zvol, or neither can. Not the current behavior.

-- 
Erik Trimble
Java System Support
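For illustration, a hedged sketch of what Erik describes (pool, zvol, and file names are hypothetical; zpool may require -f in places):

# zfs create -V 1G rpool/slogvol
# zpool add tank log /dev/zvol/dsk/rpool/slogvol

or, with a file:

# mkfile 1g /rpool/slog-file
# zpool add tank log /rpool/slog-file

Whether this is a good idea is a separate question; see the deadlock discussion later in the thread.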
Thank you Erik for the reply.

I misunderstood Dan's suggestion about the zvol in the first place; now you make the same suggestion as well. Doesn't ZFS prefer raw devices? When following this route, the zvol used as cache device for tank makes use of the ARC of rpool, which doesn't seem right. Or is there some setting to prevent this? It's because of ZFS's preference for raw devices that I looked for a way to use a slice as cache.

Regards,

Frederik
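As a hedged aside on the double-caching question: primarycache and secondarycache are per-dataset ZFS properties that control ARC/L2ARC use, so something like the following might limit rpool's ARC caching of the zvol (untested for this purpose, and moot if zvol cache devices are rejected anyway; "rpool/zvolname" is the hypothetical name from earlier in the thread):

# zfs set primarycache=metadata rpool/zvolname
# zfs set secondarycache=none rpool/zvolname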
Thank you Darren. So no zvols as L2ARC cache device. That leaves partitions and slices.

When I tried to add a second partition as cache device (the first contained slices with the root pool), zpool refused: it reported that the device cXtYdZp2 (note the p2) wasn't supported. Perhaps I did something wrong with fdisk, but the partitions were both there, also in parted.

And the other option: use a slice in the first partition as cache device. But I messed up my boot environment with that, and I'm not sure about the correct resizing procedure.

Any suggestions?

Regards,

Frederik.
F. Wessels wrote:
> [...]
> It's because of ZFS's preference for raw devices that I looked for a way
> to use a slice as cache.

As Darren pointed out, you can't use anything but a block device for the L2ARC device. So my suggestion doesn't work.

You can use a file or zvol as a ZIL device, but not an L2ARC device.

-- 
Erik Trimble
Java System Support
George Puchalski wrote (2010-Mar-30 13:18 UTC), re: [zfs-discuss] sharing a ssd between rpool and l2arc:
http://fixunix.com/solaris-rss/570361-make-most-your-ssd-zfs.html

I think this is what you are looking for. GParted FTW.

Cheers,
_GP_
Just clarifying Darren's comment - we got bitten by this pretty badly, so I figure it's worth saying again here. ZFS will *allow* you to use a ZVOL of one pool as a vdev in another pool, but it results in race conditions and an unstable system (at least on Solaris 10 update 8).

We tried to use a ZVOL from rpool (on fast 15k rpm drives) as a cache device for another pool (on slower 7.2k rpm drives). It worked great up until it hit the race condition and hung the system.

It would have been nice if zfs had issued a warning, or at least if this fact was better documented.

Scott Duckworth, Systems Programmer II
Clemson University School of Computing

On Tue, Mar 30, 2010 at 5:09 AM, Darren J Moffat <darrenm at opensolaris.org> wrote:
> [...]
> That won't work. L2ARC devices cannot be a ZVOL of another pool, and they
> can't be a file either. An L2ARC device must be a physical device.
> you can't use anything but a block device for the L2ARC device.

Sure you can...

http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html

It even lives through a reboot (rpool is mounted before other pools):

# zpool create -f test c9t3d0s0 c9t4d0s0
# zfs create -V 3G rpool/cache
# zpool add test cache /dev/zvol/dsk/rpool/cache
# reboot

If you're asking for an L2ARC on rpool itself, well, yeah, it's not mounted soon enough, but the point is to put rpool, swap, and the L2ARC for your storage pool all on a single SSD.

Rob
On Mar 29, 2010, at 1:10 PM, F. Wessels wrote:
> Hi,
>
> as Richard Elling wrote earlier:
> [...]
> This is for the upcoming 2010.03 release; for now I'm testing with b134.
> What is the most appropriate way to accomplish this?

The most appropriate way (supportable by Oracle) is to use the automated installer. An example of the manifest is:
http://dlc.sun.com/osol/docs/content/dev/AIinstall/customai.html#ievtoc

> The Caiman installer allows you to control the size of the partition on
> the boot disk but it doesn't allow you (at least I couldn't figure out
> how) to control the size of the slices. [...]

There are perhaps a half dozen ways to do this. As others have mentioned, using fdisk partitions can be done and is particularly easy when using the text-based installer. However, with that option you need a smarter partition editor than fdisk (e.g. gparted).

And, of course, you can fake out the installer altogether... or even change the source code...
-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
>>>>> "et" == Erik Trimble <erik.trimble at oracle.com> writes:et> Add this zvol as the cache device (L2arc) for your other pool doesn''t bug 6915521 mean this arrangement puts you at risk of deadlock? -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100330/e28430c8/attachment.bin>
Hi all,

Yes, it works with the partitions. I think I made a typo during the initial testing of adding a partition as cache, probably swapped the 0 for an o. Tested with the b134 GUI and text installers on the x86 platform. So here it goes:

Install OpenSolaris into a partition and leave some space for the L2ARC. (This will remove all partitions from the disk!) After the installation, log in:

# fdisk /dev/rdsk/[boot disk]p0
    select option 1, create a partition
    select option 1 for a SOLARIS2 partition type
    specify a size
    answer no, do not make this partition active
    when satisfied with the result, write your changes by choosing 6 and exit fdisk

Finally, add your cache "device" to your data pool:

# zpool add mpool cache /dev/rdsk/cXtYdZp2

That's it.

Some notes:
 - You CAN remove the cache device from the pool.
 - You CAN import the pool with a missing cache device. (Remember, you CAN'T import a pool with a missing slog!)

Open questions:
 - What happens when one cache device fails in a stripe of several cache devices? Will this disable the entire L2ARC, or will it continue to function minus the faulty device?
 - The alignment mess. I know that the (Intel) SSDs are sensitive to misalignment. fdisk only allows you to enter cylinders, NOT LBA addresses. You can probably work around that with parted.
 - Does anybody know more about the recently announced "flash aware" sd driver? This was on storage-discuss a couple of days ago. Does anybody have any tips to squeeze the most out of the SSDs?

Thank you all for your time and interest.
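For illustration, a few hedged follow-up commands to verify the result (pool and device names as in the steps above):

# zpool status mpool
    the SSD partition should show up under a "cache" section
# zpool iostat -v mpool 5
    shows per-vdev activity, including reads served from the L2ARC device
# zpool remove mpool /dev/rdsk/cXtYdZp2
    removes the cache device again, as noted above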
On 30/03/2010 21:53, Miles Nordin wrote:
>>>>>> "et" == Erik Trimble <erik.trimble at oracle.com> writes:
>
>     et> Add this zvol as the cache device (L2arc) for your other pool
>
> Doesn't bug 6915521 mean this arrangement puts you at risk of deadlock?

Yes, that risk is there. I would highly recommend against using a ZVOL as a cache device for another pool because of this bug, and also because it may actually hurt performance instead of helping it. It is actually pretty hard to work out exactly how a ZVOL will act as an L2ARC cache device for another pool.

-- 
Darren J Moffat