Hi, I'm planning on setting up two RAIDZ2 volumes in different pools for added flexibility in removing/resizing (from what I understand, if they were in the same pool I can't remove them at all). I also have an SSD that I was going to use as cache (L2ARC). How do I set this up to have two L2ARCs off one SSD (one to service each pool)? Do I need to create two slices (each 50% of the SSD's space) and assign one to each pool? Also, I'm not expecting a lot of writes (it's primarily a file server), so I didn't think a ZIL would be a worthwhile investment. Any advice appreciated. -- This message posted from opensolaris.org
On Fri, March 26, 2010 17:26, Muhammed Syyid wrote:

> Hi
> I'm planning on setting up two RAIDZ2 volumes in different pools for added
> flexibility in removing/resizing (from what I understand, if they were in
> the same pool I can't remove them at all).

What do you mean by "remove"? You cannot remove a vdev from a pool. You can, however, destroy the entire pool, thus essentially removing the vdev.

You CAN replace the drives in a vdev, one at a time, with larger drives, and when you are done the extra space will be available to the pool. So for resizing purposes you can essentially replace a vdev, though not remove it or alter the number of drives or the type.

-- David Dyer-Bennet, dd-b at dd-b.net; dd-b.net Snapshots: dd-b.net/dd-b/SnapshotAlbum/data Photos: dd-b.net/photography/gallery Dragaera: dragaera.info
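(The replace-and-grow procedure David describes would look roughly like this; pool name "tank" and device names are made-up examples, and on recent builds the pool only grows once the autoexpand property is set or the pool is export/imported:)

    # Replace each drive in the raidz2 vdev with a larger one, one at a time.
    # Wait for "zpool status tank" to show the resilver is complete before
    # moving on to the next drive.
    zpool replace tank c1t0d0        # new, larger drive in the same slot
    zpool status tank                # ... wait for resilver to finish ...
    zpool replace tank c1t1d0        # then the next drive, and so on

    # On builds that have it, enable autoexpand so the new space shows up:
    zpool set autoexpand=on tank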
Which is why I was looking to set up 1x8 raidz2 as pool1 and 1x8 raidz2 as pool2, instead of as two vdevs under one pool. That way I have 'some' flexibility, in that I could take down pool1 or pool2 without affecting the other. The issue I had was how to set up an L2ARC for two pools (pool1/pool2) using one SSD.
Muhammed Syyid wrote:

> Which is why I was looking to setup
> 1x8 raidz2 as pool1
> and
> 1x8 raidz2 as pool2
>
> instead of as two vdevs under 1 pool. That way I can have 'some' flexibility where I could take down pool1 or pool2 without affecting the other.
>
> The issue I had was how do I set up an L2ARC for 2 pools (pool1/pool2) using 1 SSD drive

Your original idea was correct - simply create 2 slices on the SSD, and assign one slice to each pool as an L2ARC device. Obviously, they don't have to be the same size. You can't share a device (either as ZIL or L2ARC) between multiple pools.

-- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA
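(As a sketch of Erik's suggestion - the SSD device name c5t0d0 and the slice split are assumptions; the slices themselves would first be created with format(1M):)

    # SSD c5t0d0 sliced into s0 and s1 via format(1M), sized however you like.
    zpool add pool1 cache c5t0d0s0   # first slice becomes pool1's L2ARC
    zpool add pool2 cache c5t0d0s1   # second slice becomes pool2's L2ARC

    # Cache devices, unlike regular vdevs, can be removed again at any time:
    zpool remove pool1 c5t0d0s0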
On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote:

> You can't share a device (either as ZIL or L2ARC) between multiple pools.

Discussion here some weeks ago suggested that an L2ARC device was used for all ARC evictions, regardless of the pool. I'd very much like an authoritative statement (and corresponding documentation updates) on whether this was correct.

-- Dan.
On Sat, 27 Mar 2010, Daniel Carosone wrote:

> On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote:
>
>> You can't share a device (either as ZIL or L2ARC) between multiple pools.
>
> Discussion here some weeks ago suggested that an L2ARC device
> was used for all ARC evictions, regardless of the pool.

That is my recollection as well.

Bob
-- Bob Friesenhahn bfriesen at simple.dallas.tx.us, simplesystems.org/users/bfriesen GraphicsMagick Maintainer, GraphicsMagick.org
On Mar 27, 2010, at 2:41 AM, Daniel Carosone wrote:

> On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote:
>
>> You can't share a device (either as ZIL or L2ARC) between multiple pools.
>
> Discussion here some weeks ago suggested that an L2ARC device
> was used for all ARC evictions, regardless of the pool.
>
> I'd very much like an authoritative statement (and corresponding
> documentation updates) if this was correct.

I'm responsible for propagating this false rumor. It is the result of some testing in which my analysis was flawed. The authoritative response is, as usual, in the source: l2arc_write_eligible() checks the spa (read: zpool) to see whether the buffer belongs to it; if not, the buffer is not sent to the L2ARC. See the source on or near src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3729

I still think a shared cache would be a useful thing and believe it can be implemented relatively easily. I'm thinking along the lines of a pool property for sharing the cache. Besides handling the property itself, it looks like there are only a few (perhaps as few as 5) lines of code needed to change the eligibility check and handle flushing. However, this idea may collide with the persistent L2ARC project.

-- richard-who-would-prefer-to-eat-his-eggs-rather-than-wear-them ZFS storage and performance consulting at RichardElling.com ZFS training on deduplication, NexentaStor, and NAS performance Las Vegas, April 29-30, 2010 nexenta-vegas.eventbrite.com
>>> You can't share a device (either as ZIL or L2ARC) between multiple pools.
>>
>> Discussion here some weeks ago suggested that an L2ARC device
>> was used for all ARC evictions, regardless of the pool.
>>
>> I'd very much like an authoritative statement (and corresponding
>> documentation updates) if this was correct.
>
> I'm responsible for propagating this false rumor. It is the result of some
> testing in which my analysis was flawed. The authoritative response is,
> as usual, in the source.

Can't you slice the SSD in two, and then give each slice to the two zpools?
On Mar 28, 2010, at 6:57 AM, Edward Ned Harvey wrote:

>>>> You can't share a device (either as ZIL or L2ARC) between multiple pools.
>>>
>>> Discussion here some weeks ago suggested that an L2ARC device
>>> was used for all ARC evictions, regardless of the pool.
>>>
>>> I'd very much like an authoritative statement (and corresponding
>>> documentation updates) if this was correct.
>>
>> I'm responsible for propagating this false rumor. It is the result of some
>> testing in which my analysis was flawed. The authoritative response is,
>> as usual, in the source.
>
> Can't you slice the SSD in two, and then give each slice to the two zpools?

This is exactly what I do :-) For more background, low-cost SSDs intended for the boot market are perfect candidates. Take an X25-V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines, or machines with a max capacity of 8GB of RAM (a typical home system), this can make a pleasant improvement over an HDD-only implementation. This morning Amazon says the X25-V is only $125 -- almost pocket change.

-- richard ZFS storage and performance consulting at RichardElling.com ZFS training on deduplication, NexentaStor, and NAS performance Las Vegas, April 29-30, 2010 nexenta-vegas.eventbrite.com
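(A sketch of that root-plus-cache layout; slice sizes, pool name "tank", and device name c5t0d0 are all assumptions. The installer puts rpool on s0; the leftover slice is handed to the data pool afterwards:)

    # SSD c5t0d0 sliced with format(1M): s0 ~20 GB for the rpool root,
    # s1 the remainder. After install, give s1 to the data pool as cache:
    zpool add tank cache c5t0d0s1
    zpool status tank               # the slice shows up under a "cache" heading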
>> Can't you slice the SSD in two, and then give each slice to the two zpools?
>
> This is exactly what I do ... use 15-20 GB for root and the rest for an L2ARC.

I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC, so you're not limited by the hard partitioning?

Rob
> I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC
> so you're not limited by the hard partitioning?

It lives through a reboot:

    zpool create -f test c9t3d0s0 c9t4d0s0
    zfs create -V 3G rpool/cache
    zpool add test cache /dev/zvol/dsk/rpool/cache
    reboot
    zpool status

      pool: rpool
     state: ONLINE
     scrub: none requested
    config:

            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror      ONLINE       0     0     0
                c9t1d0s0  ONLINE       0     0     0
                c9t2d0s0  ONLINE       0     0     0

    errors: No known data errors

      pool: test
     state: ONLINE
     scrub: none requested
    config:

            NAME                         STATE     READ WRITE CKSUM
            test                         ONLINE       0     0     0
              c9t3d0s0                   ONLINE       0     0     0
              c9t4d0s0                   ONLINE       0     0     0
            cache
              /dev/zvol/dsk/rpool/cache  ONLINE       0     0     0

    errors: No known data errors