Gil Vidals
2010-Oct-19 10:29 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting up a 64 GB SSD into four partitions of 16 GB each, versus dedicating an entire SSD to each pool?

Scenario A:
  2 TB Mirror w/ 16 GB read cache partition
  2 TB Mirror w/ 16 GB read cache partition
  2 TB Mirror w/ 16 GB read cache partition
  2 TB Mirror w/ 16 GB read cache partition

versus Scenario B:
  2 TB Mirror w/ 64 GB read cache SSD
  2 TB Mirror w/ 64 GB read cache SSD
  2 TB Mirror w/ 64 GB read cache SSD
  2 TB Mirror w/ 64 GB read cache SSD
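(For illustration only - the pool and device names below are placeholders, not part of the actual setup - the two layouts would be attached roughly like this:)

  # Scenario A: one 64 GB SSD (c2t0d0) split into four slices, one slice per pool
  zpool add tank1 cache c2t0d0s0
  zpool add tank2 cache c2t0d0s1
  zpool add tank3 cache c2t0d0s2
  zpool add tank4 cache c2t0d0s3

  # Scenario B: four whole SSDs, one whole device per pool
  zpool add tank1 cache c2t0d0
  zpool add tank2 cache c2t1d0
  zpool add tank3 cache c2t2d0
  zpool add tank4 cache c2t3d0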
Edward Ned Harvey
2010-Oct-19 13:11 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Gil Vidals
>
> What would the performance impact be of splitting up a 64 GB SSD into
> four partitions of 16 GB each versus having the entire SSD dedicated to
> each pool?

This is a common question, because people think, "I have 64G, but I can't possibly use more than 4G, so it's wasted space." True, it's wasted space, but that's not what you should be thinking about. When you buy the SSD (or DDRdrive, etc.) as a log device, you're buying it *not* for the sake of storage capacity; you're buying it for the sake of a performance enhancement. The constrained resource is *not* usable capacity. The constrained resource is the bandwidth bottleneck to reach the device, and/or the IOPS that the device is able to sustain.

My advice is to forget about the wasted space and just use the whole device as a log device for one pool. Otherwise, you'll be reducing the performance, and if you're going to do that, you're defeating the purpose of having the device.

Depending on the size of the pool, you might even want more than one. Based on no particular numbers, I would say one dedicated log device for every 10 spindle disks might be reasonable.

Don't forget to mirror. Or at least think about it and make a conscious decision not to mirror. For the types of work that I do, an un-mirrored log device is an acceptable risk, but for some purposes it wouldn't be.
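(As a sketch, with hypothetical pool and device names, a whole-device log would be attached like this:)

  # attach a mirrored pair of SSDs as a dedicated log device for one pool
  zpool add tank log mirror c2t0d0 c2t1d0

  # or, accepting the un-mirrored risk mentioned above, a single whole SSD as the log
  zpool add tank log c2t0d0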
Bob Friesenhahn
2010-Oct-19 15:00 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
On Tue, 19 Oct 2010, Gil Vidals wrote:

> What would the performance impact be of splitting up a 64 GB SSD
> into four partitions of 16 GB each versus having the entire SSD
> dedicated to each pool?

Ignore Edward Ned Harvey's response, because he answered the wrong question.

For an L2ARC device, the fill rate (write rate) is deliberately limited to a relatively low value so that filling the cache does not constrict reads. If you partition a physical device into four partitions, you will have increased the actual maximum fill rate by a factor of four, which might hurt read performance under heavy load, since SSDs are not as good at writing as they are at reading.

It is difficult to imagine a performance advantage from partitioning an SSD used as L2ARC. There can only be more overhead associated with managing more logical devices, and if there is a hardware failure, it would make understanding the problem a bit more complex. A 1:1 mapping between zfs devices and actual hardware makes things much easier to manage.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
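(To watch this behaviour on a running system - pool name is a placeholder - the per-vdev bandwidth can be observed while the cache warms:)

  # show per-vdev I/O, including the cache device, every 5 seconds;
  # the cache device's write column reflects the throttled L2ARC fill rate
  zpool iostat -v tank 5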
Eff Norwood
2010-Oct-19 17:21 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
We tried this in our environment and found that it didn't work out: the more partitions we used, the slower it went. We decided just to use the entire SSD as a read cache, and it worked fine. It still has the TRIM issue, of course, until the next version.
Gil Vidals
2010-Oct-19 19:31 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
Based on the answers I received, I will stick with an SSD device fully dedicated to each pool. This means I will have four SSDs and four pools. That seems acceptable to me: it keeps things simpler, and if one SSD (L2ARC) fails, the others keep working.

Thank you.

Gil Vidals

On Tue, Oct 19, 2010 at 10:21 AM, Eff Norwood <smith at jsvp.com> wrote:

> We tried this in our environment and found that it didn't work out. The
> more partitions we used, the slower it went. We decided just to use the
> entire SSD as a read cache and it worked fine. Still has the TRIM issue of
> course until the next version.
Roy Sigurd Karlsbakk
2010-Oct-19 20:22 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
----- Original Message -----

> Based on the answers I received, I will stick with an SSD device fully
> dedicated to each pool. This means I will have four SSDs and four pools.
> This seems acceptable to me as it keeps things simpler, and if one SSD
> (L2ARC) fails, the others are still working correctly.

I just wonder why you would want a separate pool for each mirror - won't a striped mirror perform better?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
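(For comparison - disk and pool names here are made up for illustration - the striped-mirror alternative would be a single pool built from the same drives, sharing the whole SSD as one L2ARC device:)

  # one pool of four two-way mirrors, striped together,
  # with the entire 64 GB SSD as a single cache device
  zpool create tank \
      mirror c0t0d0 c0t1d0 \
      mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0 \
      mirror c0t6d0 c0t7d0 \
      cache c2t0d0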
Edward Ned Harvey
2010-Oct-20 03:26 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Bob Friesenhahn
>
> Ignore Edward Ned Harvey's response because he answered the wrong
> question.

Indeed. Although, now that I go back and actually read the question correctly, I wonder why nobody said: even if the fill-rate limit weren't an issue, and even if the bandwidth and IOPS bottlenecks weren't an issue, when it comes to L2ARC cache devices, bigger is better. (All else being equal, a bigger cache device is more likely to produce cache hits.)

Any way you look at it, having a separate whole device for cache in each pool will obviously be faster than having just a slice of a single device shared amongst multiple pools. Would that mean a 4x performance difference? I don't know, but that's a good first guess.
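(One rough way to check whether a bigger cache is actually producing more hits - a sketch, assuming the standard ARC kstats on a Solaris-derived system - is to watch the L2ARC counters:)

  # L2ARC hit/miss counters and current cache size, from the ARC statistics
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses zfs:0:arcstats:l2_size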
Erik Trimble
2010-Oct-21 03:10 UTC
[zfs-discuss] SSD partitioned into multiple L2ARC read cache
All this reminds me: there was some talk a while ago about allowing multiple pools per ZIL or L2ARC device. Any progress on that front?

[yadda, yadda, no forward-looking statements allowed, yadda yadda.]

--
Erik Trimble
Java System Support
Mailstop: usca22-317
Phone: x67195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)