Peter Radig
2010-Feb-04 23:43 UTC
[zfs-discuss] Impact of an enterprise class SSD on ZIL performance
I was interested in the impact the type of SSD has on the performance of the ZIL. So I did some benchmarking and just want to share the results.

My test case is simply untarring the latest ON source (528 MB, 53k files) on a Linux system that has a ZFS file system mounted via NFS over gigabit Ethernet.

I got the following results:

- locally on the Solaris box: 30 sec
- remotely with no dedicated ZIL device: 36 min 37 sec (factor 73 compared to local)
- remotely with ZIL disabled: 1 min 54 sec (factor 3.8 compared to local)
- remotely with an OCZ VERTEX SATA II 120 GB as ZIL device: 14 min 40 sec (factor 29.3 compared to local)
- remotely with an Intel X25-E 32 GB as ZIL device: 3 min 11 sec (factor 6.4 compared to local)

So it really makes a difference what type of SSD you use for your ZIL device. I was expecting good performance from the X25-E, but was really surprised that it is that good (only 1.7 times slower than with the ZIL completely disabled). So I will use the X25-E as the ZIL device on my box and will not consider disabling the ZIL to improve NFS performance.

-- Peter
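For anyone wanting to reproduce the test, the setup was presumably along these lines. This is only a sketch: the pool, device, and path names here are hypothetical, and Peter's exact commands are not in the post.

    # Solaris box: attach the SSD as a dedicated log (slog) device and
    # share the filesystem over NFS. 'tank' and 'c1t1d0' are placeholders.
    zpool add tank log c1t1d0
    zfs set sharenfs=on tank/export

    # Linux client: mount the share and time the untar of the ON source.
    mount -t nfs solarisbox:/tank/export /mnt/test
    cd /mnt/test && time tar xf /var/tmp/on-src.tar   # ~528 MB, ~53k files

    # "ZIL disabled" case, 2010-era method: flip the global zil_disable
    # switch, then remount the filesystem for it to take effect (later
    # ZFS versions use 'zfs set sync=disabled' per dataset instead).
    echo zil_disable/W0t1 | mdb -kw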
Marc Nicholas
2010-Feb-05 00:01 UTC
[zfs-discuss] Impact of an enterprise class SSD on ZIL performance
Very interesting stats -- thanks for taking the time and trouble to share them!

One thing I found interesting is that the Gen 2 X25-M has higher write IOPS than the X25-E according to Intel's documentation (6,600 IOPS for 4K writes versus 3,300 IOPS for 4K writes on the "E"). I wonder if it'd perform better as a ZIL? (The write latency on both drives is the same.)

-marc

On Thu, Feb 4, 2010 at 6:43 PM, Peter Radig <peter at radig.de> wrote:

> I was interested in the impact the type of SSD has on the performance of
> the ZIL. So I did some benchmarking and just want to share the results.
>
> My test case is simply untarring the latest ON source (528 MB, 53k files)
> on a Linux system that has a ZFS file system mounted via NFS over gigabit
> Ethernet.
>
> I got the following results:
> - locally on the Solaris box: 30 sec
> - remotely with no dedicated ZIL device: 36 min 37 sec (factor 73
>   compared to local)
> - remotely with ZIL disabled: 1 min 54 sec (factor 3.8 compared to local)
> - remotely with an OCZ VERTEX SATA II 120 GB as ZIL device: 14 min 40 sec
>   (factor 29.3 compared to local)
> - remotely with an Intel X25-E 32 GB as ZIL device: 3 min 11 sec (factor
>   6.4 compared to local)
>
> So it really makes a difference what type of SSD you use for your ZIL
> device. I was expecting good performance from the X25-E, but was really
> surprised that it is that good (only 1.7 times slower than with the ZIL
> completely disabled). So I will use the X25-E as the ZIL device on my box
> and will not consider disabling the ZIL to improve NFS performance.
>
> -- Peter
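For rough context, the quoted IOPS figures translate into per-operation service times like this. Back-of-envelope arithmetic only; queueing and cache-flush behavior are ignored:

    # Per-write service time implied by the quoted 4K write IOPS figures:
    echo "scale=2; 1000/3300" | bc   # X25-E: ~0.30 ms per 4K write
    echo "scale=2; 1000/6600" | bc   # X25-M: ~0.15 ms per 4K write

The ZIL issues small synchronous writes followed by cache flushes, so the latency of a flushed write reaching stable media, rather than the sustained IOPS figure, tends to decide slog performance.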
Andrew Gabriel
2010-Feb-05 00:10 UTC
[zfs-discuss] Impact of an enterprise class SSD on ZIL performance
Peter Radig wrote:

> I was interested in the impact the type of SSD has on the performance of
> the ZIL. So I did some benchmarking and just want to share the results.
>
> My test case is simply untarring the latest ON source (528 MB, 53k files)
> on a Linux system that has a ZFS file system mounted via NFS over gigabit
> Ethernet.
>
> I got the following results:
>
> - remotely with no dedicated ZIL device: 36 min 37 sec (factor 73
>   compared to local)
>
> - remotely with an Intel X25-E 32 GB as ZIL device: 3 min 11 sec (factor
>   6.4 compared to local)

That's about the same ratio I get when I demonstrate this on the SSD/Flash/Turbocharge Discovery Days I run in the UK from time to time (the name changes over time ;-).

--
Andrew Gabriel
Bob Friesenhahn
2010-Feb-05 03:18 UTC
[zfs-discuss] Impact of an enterprise class SSD on ZIL performance
On Thu, 4 Feb 2010, Marc Nicholas wrote:

> Very interesting stats -- thanks for taking the time and trouble to share
> them!
>
> One thing I found interesting is that the Gen 2 X25-M has higher write
> IOPS than the X25-E according to Intel's documentation (6,600 IOPS for 4K
> writes versus 3,300 IOPS for 4K writes on the "E"). I wonder if it'd
> perform better as a ZIL? (The write latency on both drives is the same.)

The write IOPS between the X25-M and the X25-E are different since with the X25-M, much more of your data gets completely lost. Most of us prefer not to lose our data. The X25-M is about as valuable as a paperweight for use as a ZFS slog. Toilet paper would be a step up.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Marc Nicholas
2010-Feb-05 03:27 UTC
[zfs-discuss] Impact of an enterprise class SSD on ZIL performance
On Thu, Feb 4, 2010 at 10:18 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> On Thu, 4 Feb 2010, Marc Nicholas wrote:
>
>> One thing I found interesting is that the Gen 2 X25-M has higher write
>> IOPS than the X25-E according to Intel's documentation (6,600 IOPS for
>> 4K writes versus 3,300 IOPS for 4K writes on the "E"). I wonder if it'd
>> perform better as a ZIL? (The write latency on both drives is the same.)
>
> The write IOPS between the X25-M and the X25-E are different since with
> the X25-M, much more of your data gets completely lost. Most of us prefer
> not to lose our data.

Would you like to qualify your statement further? While I understand the difference between MLC and SLC parts, I'm pretty sure Intel didn't design the M version to make "data get completely lost". ;)

-marc
Bob Friesenhahn
2010-Feb-05 03:35 UTC
[zfs-discuss] Impact of an enterprise class SSD on ZIL performance
On Thu, 4 Feb 2010, Marc Nicholas wrote:

>> The write IOPS between the X25-M and the X25-E are different since with
>> the X25-M, much more of your data gets completely lost. Most of us
>> prefer not to lose our data.
>
> Would you like to qualify your statement further?

Google is your friend. And check earlier on this list/forum as well.

> While I understand the difference between MLC and SLC parts, I'm pretty
> sure Intel didn't design the M version to make "data get completely
> lost". ;)

It loses the most recently written data, even after a cache sync request. A number of people have verified this for themselves and posted results. Even the X25-E has been shown to lose some transactions.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
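The verification people did is roughly the following. This is a sketch under stated assumptions: the SSD under test holds a scratch filesystem mounted at /ssd, and the file names are made up.

    #!/bin/sh
    # Append numbered records and flush after each one, logging every
    # record the drive acknowledged to a second, trusted disk. After
    # cutting power mid-run and rebooting, compare the last record that
    # survived in /ssd/log.dat against the last "acked" entry: any gap
    # means the drive acknowledged a flush it had not made durable.
    i=0
    while :; do
        echo "$i" >> /ssd/log.dat
        sync    # crude barrier; a rigorous test would use O_DSYNC/fsync()
        echo "acked $i" >> /var/tmp/acked.dat
        i=$((i + 1))
    done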
Marc Nicholas
2010-Feb-05 03:44 UTC
[zfs-discuss] Impact of an enterprise class SSD on ZIL performance
On Thu, Feb 4, 2010 at 10:35 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> On Thu, 4 Feb 2010, Marc Nicholas wrote:
>
>> Would you like to qualify your statement further?
>
> Google is your friend. And check earlier on this list/forum as well.
>
>> While I understand the difference between MLC and SLC parts, I'm pretty
>> sure Intel didn't design the M version to make "data get completely
>> lost". ;)
>
> It loses the most recently written data, even after a cache sync request.
> A number of people have verified this for themselves and posted results.
> Even the X25-E has been shown to lose some transactions.

The devices have some DRAM (16 MB) that is used for write amplification levelling. The sudden loss of power means that this DRAM doesn't get flushed to flash. This is the very reason the STEC devices have a supercap.

-marc
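A common workaround at the time, at a real performance cost, was to disable the drive's volatile write cache so that acknowledged writes land on flash. A sketch of the Solaris procedure; whether the cache submenu appears at all depends on the device and driver:

    # format(1M) expert mode exposes a per-drive cache menu on supported
    # devices. The menu is interactive; navigation is shown as comments.
    format -e
    # -> select the SSD from the disk list
    # -> cache
    # -> write_cache
    # -> disable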