Roy Sigurd Karlsbakk
2010-Jun-11 15:09 UTC
[zfs-discuss] Crucial RealSSD C300 and cache flush?
Hi all,

The Crucial RealSSD C300 has been released and is showing good numbers for use as ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, as opposed to Intel units etc.?

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all educators to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
I'm interested in the answer to this as well.
Hi,

Roy Sigurd Karlsbakk wrote:
> The Crucial RealSSD C300 has been released and is showing good numbers for use as ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, as opposed to Intel units etc.?

I had a chance to get my hands on a Crucial RealSSD C300/128GB yesterday and did some quick testing. Here are the numbers first; some explanation follows below.

cache enabled, 32 buffers:
linear read, 64k blocks: 134 MB/s
random read, 64k blocks: 134 MB/s
linear read, 4k blocks: 87 MB/s
random read, 4k blocks: 87 MB/s
linear write, 64k blocks: 107 MB/s
random write, 64k blocks: 110 MB/s
linear write, 4k blocks: 76 MB/s
random write, 4k blocks: 32 MB/s

cache enabled, 1 buffer:
linear write, 4k blocks: 51 MB/s (12800 ops/s)
random write, 4k blocks: 7 MB/s (1750 ops/s)
linear write, 64k blocks: 106 MB/s (1610 ops/s)
random write, 64k blocks: 59 MB/s (920 ops/s)

cache disabled, 1 buffer:
linear write, 4k blocks: 4.2 MB/s (1050 ops/s)
random write, 4k blocks: 3.9 MB/s (980 ops/s)
linear write, 64k blocks: 40 MB/s (650 ops/s)
random write, 64k blocks: 40 MB/s (650 ops/s)

cache disabled, 32 buffers:
linear write, 4k blocks: 4.5 MB/s (1120 ops/s)
random write, 4k blocks: 4.2 MB/s (1050 ops/s)
linear write, 64k blocks: 43 MB/s (680 ops/s)
random write, 64k blocks: 44 MB/s (690 ops/s)

cache enabled, 1 buffer, with cache flushes:
linear write, 4k blocks, flush after every write: 1.5 MB/s (385 writes/s)
linear write, 4k blocks, flush after every 4th write: 4.2 MB/s (1120 writes/s)

The numbers are rough figures read quickly from iostat, so please don't multiply block size by ops and compare with the bandwidth given ;)

The test operates directly on top of LDI, just like ZFS.
- "nk blocks" means the size of each read/write handed to the device driver
- "n buffers" means the number of buffers kept in flight, to keep the command queue of the device busy
- "cache flush" means a synchronous ioctl DKIOCFLUSHWRITECACHE

These numbers contain a few surprises (at least for me). The biggest surprise is that with the cache disabled one cannot get good data rates with small blocks, even when the command queue is kept filled. This is completely different from what I've seen from hard drives. Also, the IOPS with cache flushes is quite low: 385 is not much better than a 15k hdd, and the latter scales better. On the other hand, from the large drop in performance when using flushes one could infer that the device does flush properly, but I haven't built a test setup for that yet.

Conclusion: from these measurements I'd infer the device makes a good L2ARC, but for a slog device the latency is too high and it doesn't scale well.

I'll do similar tests on an Intel X25 and an OCZ Vertex 2 Pro as soon as they arrive.

If there are numbers you are missing, please tell me; I'll measure them if possible. Also please ask if there are questions regarding the test setup.

--
Arne
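[Editor's note: the flush step in the test above maps to a single ioctl. The original harness runs inside the kernel on top of LDI, so the snippet below is only a rough user-level sketch of the same write-then-flush pattern against a Solaris raw device; the device path is a placeholder and real code would need proper error handling and alignment checks.]

/*
 * Rough sketch, not the original harness: write 4k blocks to a raw
 * device and issue a synchronous cache flush after every write.
 * Solaris/illumos specific (DKIOCFLUSHWRITECACHE); placeholder path.
 */
#include <sys/dkio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <stropts.h>
#include <unistd.h>

int
main(void)
{
	const char *dev = "/dev/rdsk/c0t1d0s0";	/* placeholder device */
	char buf[4096];
	int fd, i;

	if ((fd = open(dev, O_WRONLY)) < 0) {
		perror("open");
		return (1);
	}
	(void) memset(buf, 0xa5, sizeof (buf));

	for (i = 0; i < 1000; i++) {
		if (pwrite(fd, buf, sizeof (buf),
		    (off_t)i * sizeof (buf)) != sizeof (buf)) {
			perror("pwrite");
			break;
		}
		/* third argument NULL: the flush is synchronous */
		if (ioctl(fd, DKIOCFLUSHWRITECACHE, NULL) != 0) {
			perror("DKIOCFLUSHWRITECACHE");
			break;
		}
	}
	(void) close(fd);
	return (0);
}

[A slog device has to absorb exactly this pattern on every synchronous commit, which is why the 385 writes/s figure with flushes, rather than the cached write rates, is the one that matters for ZIL use.]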
Looking forward to seeing your test report for the Intel X25 and OCZ Vertex 2 Pro... Thanks.

Fred

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Arne Jansen
Sent: Thursday, June 24, 2010 16:15
To: Roy Sigurd Karlsbakk
Cc: OpenSolaris ZFS discuss
Subject: Re: [zfs-discuss] Crucial RealSSD C300 and cache flush?

> [quoted benchmark message snipped]
Arne Jansen wrote:
> Roy Sigurd Karlsbakk wrote:
>> The Crucial RealSSD C300 has been released and is showing good numbers for use as ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, as opposed to Intel units etc.?
>
> I had a chance to get my hands on a Crucial RealSSD C300/128GB yesterday and did some quick testing. Here are the numbers first; some explanation follows below.

After taemun alerted me that the linear read/write numbers are too low, I found a bottleneck: the controller decided to connect the SSD at only 1.5 Gbit/s. I have to check if we can jumper it to at least 3 Gbit/s. To connect it at 6 Gbit/s we need some new cables, so this might take some time.

The main purpose of this test was to evaluate the SSD for use as a slog device, and I think the connection speed doesn't affect that. Nevertheless, I'll repeat the tests as soon as we have solved these issues.

Sorry.

--Arne

> [rest of quoted benchmark message snipped]
Arne Jansen wrote:
> Also, the IOPS with cache flushes is quite low: 385 is not much better than a 15k hdd, and the latter scales better. On the other hand, from the large drop in performance when using flushes one could infer that the device does flush properly, but I haven't built a test setup for that yet.

Result from the cache flush test: while doing synchronous writes at full speed, we pulled the device from the system and compared the contents afterwards. Result: no writes lost. We repeated the test several times.

Cross check: we also pulled the device while writing with the cache enabled and no flushes, and it lost 8 writes.

So I'd say: yes, it flushes its cache on request.

--
Arne
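[Editor's note: the exact pull-test setup isn't described in the thread. The sketch below is only one possible way to score such a test, under the assumption that each 4k block is stamped with its own block number and only counted as acknowledged after the flush ioctl returns; device path and block count are placeholders.]

/*
 * Hedged sketch of a pull-the-plug check (not the setup actually used).
 * Writer mode stamps each 4k block with its block number, flushes, then
 * logs the block as acknowledged on stderr (ideally redirected to
 * another machine or disk).  Verify mode reads the stamps back after
 * power is restored; every acknowledged block must still verify.
 */
#include <sys/dkio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stropts.h>
#include <unistd.h>

#define	BLKSZ	4096
#define	NBLKS	100000				/* placeholder extent */
#define	DEV	"/dev/rdsk/c0t1d0s0"		/* placeholder device */

int
main(int argc, char **argv)
{
	char buf[BLKSZ];
	uint64_t blk, stamp;
	int fd;

	if (argc == 2 && strcmp(argv[1], "write") == 0) {
		if ((fd = open(DEV, O_WRONLY)) < 0)
			return (1);
		for (blk = 0; blk < NBLKS; blk++) {
			(void) memset(buf, 0, BLKSZ);
			(void) memcpy(buf, &blk, sizeof (blk));
			if (pwrite(fd, buf, BLKSZ,
			    (off_t)(blk * BLKSZ)) != BLKSZ)
				break;
			if (ioctl(fd, DKIOCFLUSHWRITECACHE, NULL) != 0)
				break;
			/* counted as acknowledged only after the flush */
			(void) fprintf(stderr, "acked %llu\n",
			    (unsigned long long)blk);
		}
	} else {
		if ((fd = open(DEV, O_RDONLY)) < 0)
			return (1);
		for (blk = 0; blk < NBLKS; blk++) {
			if (pread(fd, buf, BLKSZ,
			    (off_t)(blk * BLKSZ)) != BLKSZ)
				break;
			(void) memcpy(&stamp, buf, sizeof (stamp));
			if (stamp != blk)
				break;
		}
		(void) printf("blocks intact: %llu\n",
		    (unsigned long long)blk);
	}
	(void) close(fd);
	return (0);
}

[Comparing the last "acked" block logged before the pull with the "blocks intact" count from the verify pass shows whether any acknowledged writes were lost. In the runs reported above none were, while the no-flush cross check lost 8 writes.]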
David Dyer-Bennet
2010-Jun-24 14:30 UTC
[zfs-discuss] Crucial RealSSD C300 and cache flush?
On Thu, June 24, 2010 08:58, Arne Jansen wrote:

> Cross check: we also pulled the device while writing with the cache enabled and no flushes, and it lost 8 writes.

I'm SO pleased to see somebody paranoid enough to do that kind of cross-check while doing this benchmarking! "Benchmarking is hard!"

> So I'd say: yes, it flushes its cache on request.

Starting to sound pretty convincing, yes.

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
Thanks Jens,

I have a vdbench profile and script that will run the new SNIA Solid State Storage (SSS) Performance Test Suite (PTS). I'd be happy to share if anyone is interested.
 -- richard

On Jul 28, 2011, at 7:10 AM, Jens Elkner wrote:

> [quoted benchmark message snipped]