Hi all,

I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of their exaggerated claims of sustained performance. I was thinking about getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it?

--- What is the (average/min/max) latency of my current ZIL device?
--- How much will improving ZIL latency improve the performance of my pool, which is used as an NFS share to ESXi hosts that force sync writes only (i.e. will it be noticeable in an end-to-end context)?

I've been looking around and haven't found a succinct way of measuring the latency of an individual device when it is used as a ZIL in a zpool. I am experienced with using iometer to measure individual devices, but as always it isn't easy to decompose that benchmark to determine where the bottlenecks occur. It is possible to run multiple tests with multiple hardware configurations and compare iometer results, but I'm trying to avoid having to buy the ZIL accelerator just to "see" what the impact would be. I'd hate to buy an expensive device only to find out that NFS was the main latency bottleneck all along and the ZIL is inconsequential.

The only thing I can think of is to create a zpool consisting of a single OCZ Vertex 4 SSD, share it over NFS, and run iometer in a VM to see how it performs in a "real world" use case... but that doesn't necessarily isolate how well the device performs as a ZIL.

Another thing I found is Brendan Gregg's latencytop.d DTrace script (mail-archive.com/dtrace-discuss at opensolaris.org/msg00934.html), but the examples I've seen don't seem to isolate an individual disk.

Are there any other useful scripts, commands or resources I should be aware of?

Thanks,
Matt
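Is something along these lines (an io-provider sketch that tracks per-I/O latency for a single device) the right direction? It is rough and untested, and the device name c2t1d0 is just a placeholder for however the log device shows up in iostat or format:

    dtrace -qn '
    io:::start
    /args[1]->dev_statname == "c2t1d0"/
    { ts[arg0] = timestamp; }

    io:::done
    /ts[arg0] != 0 && args[1]->dev_statname == "c2t1d0"/
    {
        /* power-of-two latency distribution per completed I/O */
        @["per-I/O latency (ns)"] = quantize(timestamp - ts[arg0]);
        ts[arg0] = 0;
    }'

A dedicated log device should see essentially nothing but writes, so there is no read/write filter; the distribution prints when the script is stopped with Ctrl-C.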
Dan Swartzendruber
2012-Oct-01 13:23 UTC
[zfs-discuss] Best way to measure performance of ZIL
Matt, how about running the same disk benchmark(s) with sync=disabled vs. sync=enabled and the ZIL accelerator in place?
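Roughly along these lines; the dataset name tank/nfs is just a placeholder, and keep in mind that sync=disabled means acknowledged writes can be lost on a crash, so only leave it set for the duration of the test:

    # Upper bound: what the pool can do with no ZIL wait at all
    zfs set sync=disabled tank/nfs
    # ... run the benchmark from the NFS client / VM, record the numbers ...

    # Honour sync again, with the current (or candidate) log device in place
    zfs set sync=standard tank/nfs
    # ... run the same benchmark, record the numbers ...

If the two results are close, a faster slog can't buy you much; if they are far apart, a lower-latency device has headroom to claim.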
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
2012-Oct-01 14:09 UTC
[zfs-discuss] Best way to measure performance of ZIL
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Matt Van Mater
>
> --- How much will improving ZIL latency improve the performance of my pool,
> which is used as an NFS share to ESXi hosts that force sync writes only
> (i.e. will it be noticeable in an end-to-end context)?

Just perform a bunch of writes, time it. Then set sync=disabled, perform the same set of writes, time it. Then enable sync, add a ZIL device, time it. The third option will be somewhere in between the first two. Ideally, with a dedicated ZIL device, you come very close to the performance with sync disabled. (But that's not realistic.)

The only question is how to create a bunch of sync I/O which emulates your actual usage. If you were running a DB, then you would hammer your DB. Since you're running an NFS server, hopefully you can make a bunch of NFS clients hammer the server.
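A rough sketch of the timed runs, assuming a Linux NFS client with the datastore mounted at /mnt/nfs, and tank/nfs and c2t1d0 as placeholder dataset and device names. dd's oflag=dsync forces every write to be synchronous, which approximates what ESXi does over NFS:

    # 1. Baseline: current config, sync honoured -- run on the client
    time dd if=/dev/zero of=/mnt/nfs/syncfile bs=8k count=20000 oflag=dsync

    # 2. Upper bound: disable sync on the server, repeat the dd on the client
    zfs set sync=disabled tank/nfs

    # 3. Candidate slog: re-enable sync, add the device, repeat the dd again
    zfs set sync=standard tank/nfs
    zpool add tank log c2t1d0

Comparing the three wall-clock times (or the MB/s dd reports) gives the end-to-end answer, NFS overhead included.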
On 10/01/2012 09:09 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:

> Just perform a bunch of writes, time it. Then set sync=disabled,
> perform the same set of writes, time it. Then enable sync, add a ZIL
> device, time it. The third option will be somewhere in between the
> first two.

To "perform a bunch of writes", vdbench is a very useful tool.

blogs.oracle.com/henk/entry/vdbench_a_disk_and_tape
sourceforge.net/projects/vdbench/files/vdbench503beta
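For the per-device half of the question, vdbench can also be pointed straight at a candidate log device. A minimal sketch, assuming the device appears as /dev/rdsk/c2t1d0s0 -- this is destructive, so only run it against a device that is not currently part of any pool, and note that the drive's own volatile cache may make it look faster here than it will as a slog, where ZFS also issues cache flushes:

    cat > slog-lat.vdb <<'EOF'
    * 100% random 8k writes, single outstanding I/O, 60 second run
    sd=sd1,lun=/dev/rdsk/c2t1d0s0,threads=1
    wd=wd1,sd=sd1,xfersize=8k,rdpct=0,seekpct=100
    rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=5
    EOF
    ./vdbench -f slog-lat.vdb

The response-time columns in vdbench's output give the average latency Matt was asking about; repeating the run with a larger threads= value shows how the device holds up under load.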