Trying again: Intel just released those drives. Any thoughts on how nicely they will play in a zfs/hardware raid setup?
On 2012-11-13 22:56, Mauricio Tavares wrote:
> Trying again:
>
> Intel just released those drives. Any thoughts on how nicely they will
> play in a zfs/hardware raid setup?

Seems interesting - fast, assumed reliable, and consistent in its IOPS
(according to the marketing talk), and it addresses power-loss
reliability (according to the datasheet):

* Endurance Rating - 10 drive writes/day over 5 years while running the
  JESD218 standard workload

* The Intel SSD DC S3700 supports testing of the power loss capacitor,
  which can be monitored using the following SMART attribute: (175, AFh).

Somewhat affordably priced (at least in the volume market for shops
that buy hardware in cubic meters ;)

http://newsroom.intel.com/community/intel_newsroom/blog/2012/11/05/intel-announces-intel-ssd-dc-s3700-series--next-generation-data-center-solid-state-drive-ssd

http://download.intel.com/newsroom/kits/ssd/pdfs/Intel_SSD_DC_S3700_Product_Specification.pdf

All in all, I can't quickly come up with anything to hold against it ;)
One possible nit: the ratings are geared towards 4KB blocks (which is
not unusual with SSDs), so performance may fall further from the
announced numbers at other block sizes - i.e. when caching ZFS metadata.

Thanks for bringing it into the spotlight, and I hope more savvy
posters will review it in depth.

//Jim
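[Editor's note: the "10 drive writes/day over 5 years" rating translates into a concrete total-bytes-written figure. A quick back-of-the-envelope sketch; the capacities listed are assumptions spanning the roughly 100-800 GB range the S3700 line shipped in:

```python
# Back-of-the-envelope check of the endurance rating quoted above:
# 10 full drive writes per day, every day, for 5 years.
def total_bytes_written(capacity_gb, dwpd=10, years=5):
    """Total bytes written over the warranty period at the rated DWPD."""
    return capacity_gb * 1e9 * dwpd * 365 * years

for cap in (100, 200, 400, 800):
    tbw = total_bytes_written(cap)
    print(f"{cap} GB drive: about {tbw / 1e15:.2f} PB written over 5 years")
```

Even the smallest drive is rated for well over a petabyte of writes, which is why the endurance claim stands out against consumer SSDs of the time.]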
Anandtech.com has a thorough review of it. Performance is consistent
(within 10-15% IOPS) across the lifetime of the drive, it has capacitors
to flush the RAM cache to disk, and it doesn't store user data in the
cache. It's also cheaper per GB than the 710 it replaces.

On 2012-11-13 3:32 PM, "Jim Klimov" <jimklimov at cos.ru> wrote:
> On 2012-11-13 22:56, Mauricio Tavares wrote:
>
>> Trying again:
>>
>> Intel just released those drives. Any thoughts on how nicely they will
>> play in a zfs/hardware raid setup?
>
> Seems interesting - fast, assumed reliable and consistent in its IOPS
> (according to marketing talk), addresses power loss reliability (acc.
> to datasheet):
>
> * Endurance Rating - 10 drive writes/day over 5 years while running
>   JESD218 standard
>
> * The Intel SSD DC S3700 supports testing of the power loss capacitor,
>   which can be monitored using the following SMART attribute: (175, AFh).
>
> Somewhat affordably priced (at least in the volume market for shops
> that buy hardware in cubic meters ;)
>
> http://newsroom.intel.com/community/intel_newsroom/blog/2012/11/05/intel-announces-intel-ssd-dc-s3700-series--next-generation-data-center-solid-state-drive-ssd
>
> http://download.intel.com/newsroom/kits/ssd/pdfs/Intel_SSD_DC_S3700_Product_Specification.pdf
>
> All in all, I can't come up with anything offensive against it quickly ;)
> One possible nit regards the ratings being geared towards 4KB block
> (which is not unusual with SSDs), so it may be further from announced
> performance with other block sizes - i.e. when caching ZFS metadata.
>
> Thanks for bringing it into attention spotlight, and I hope the more
> savvy posters would overview it better.
> //Jim
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Wed, Nov 14 at 0:28, Jim Klimov wrote:
> All in all, I can't come up with anything offensive against it
> quickly ;) One possible nit regards the ratings being geared towards
> 4KB block (which is not unusual with SSDs), so it may be further from
> announced performance with other block sizes - i.e. when caching ZFS
> metadata.

Would an ashift of 12 conceivably address that issue?

--
Eric D. Mudama
edmudama at bounceswoosh.org
On 2012-11-14 18:05, Eric D. Mudama wrote:
> On Wed, Nov 14 at 0:28, Jim Klimov wrote:
>> All in all, I can't come up with anything offensive against it quickly
>> ;) One possible nit regards the ratings being geared towards 4KB block
>> (which is not unusual with SSDs), so it may be further from announced
>> performance with other block sizes - i.e. when caching ZFS metadata.
>
> Would an ashift of 12 conceivably address that issue?

Performance-wise (and wear-wise) - probably. Gotta test how bad it is
at 512b IOs ;) Also, I am not sure whether ashift applies to (can be
set for) L2ARC cache devices...

Actually, if read performance does not happen to suck at smaller block
sizes, ashift is not needed - L2ARC writes seem to be streamed
sequentially (as onto an infinite tape), so smaller writes would still
coalesce into big hardware writes and not cause excessive wear by
banging many random flash cells. IMHO :)

//Jim
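[Editor's note: for readers unfamiliar with the knob being discussed - ashift is the log2 of the smallest block ZFS will issue to a vdev, so ashift=12 means 4096-byte minimum I/O. The 4 KiB flash page size below is an assumption typical of NAND of that era; a sub-page write forces the SSD controller into a read-modify-write of a whole page:

```python
# ashift is the log2 of the smallest I/O ZFS issues to a vdev:
# ashift=9 -> 512 B "sectors", ashift=12 -> 4096 B.
def min_block(ashift):
    return 2 ** ashift

# Assumed flash page size (typical for NAND of this era). A write
# smaller than a page makes the controller read, modify, and program
# the whole page, so eight separate 512 B writes can cost up to eight
# page programs where one aligned 4 KiB write costs a single program.
PAGE = 4096
sub_page_writes = PAGE // min_block(9)

print(min_block(12))      # 4096 - matches the page size, no RMW needed
print(sub_page_writes)    # 8 - worst-case amplification of 512 B I/O
```

This is the mechanism behind Eric's suggestion: ashift=12 guarantees ZFS never issues an I/O smaller than a flash page, trading some space efficiency for alignment.]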
On 11/14/12 12:28, Jim Klimov wrote:
> On 2012-11-13 22:56, Mauricio Tavares wrote:
>> Trying again:
>>
>> Intel just released those drives. Any thoughts on how nicely they will
>> play in a zfs/hardware raid setup?
>
> Seems interesting - fast, assumed reliable and consistent in its IOPS
> (according to marketing talk), addresses power loss reliability (acc.
> to datasheet):
>
> * Endurance Rating - 10 drive writes/day over 5 years while running
>   JESD218 standard
>
> * The Intel SSD DC S3700 supports testing of the power loss capacitor,
>   which can be monitored using the following SMART attribute: (175, AFh).
>
> <snip>
>
> All in all, I can't come up with anything offensive against it quickly
> ;) One possible nit regards the ratings being geared towards 4KB block
> (which is not unusual with SSDs), so it may be further from announced
> performance with other block sizes - i.e. when caching ZFS metadata.

I can't help thinking these drives would be overkill for an ARC device.
All of the expensive controller hardware is geared to boosting random
write IOPS, which is somewhat wasted on a write-slowly, read-often
device. The enhancements would be good for a ZIL, but the smallest
drive is at least an order of magnitude too big...

--
Ian.
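[Editor's note: Ian's "order of magnitude too big" follows from how little a slog actually needs to hold - only the synchronous writes accumulated between transaction-group commits. A rough sizing sketch; the 10 GbE ingest rate, ~5 s txg interval, and factor of 3 txgs in flight are all assumptions, not ZFS constants:

```python
# Rough slog sizing: the ZIL only buffers synchronous writes between
# transaction-group (txg) commits. Assumptions: ingest limited by a
# 10 GbE link, ~5 s txg interval (a common default of that era), and
# a safety factor of 3 txgs' worth of data in flight.
link_bytes_per_s = 10e9 / 8      # 10 Gb/s -> 1.25 GB/s
txg_interval_s = 5
txgs_in_flight = 3

slog_bytes = link_bytes_per_s * txg_interval_s * txgs_in_flight
print(f"~{slog_bytes / 1e9:.1f} GB of slog would suffice")
```

Under these assumptions under 20 GB of slog saturates a 10 GbE link, against a smallest S3700 of roughly 100 GB - and with a 1 GbE link the required size drops by another factor of ten.]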
On 2012-11-21 21:55, Ian Collins wrote:
> I can't help thinking these drives would be overkill for an ARC device.
> All of the expensive controller hardware is geared to boosting random
> write IOPS, which is somewhat wasted on a write-slowly, read-often
> device. The enhancements would be good for a ZIL, but the smallest
> drive is at least an order of magnitude too big...

I think, given the write endurance and power-loss protection, these
devices might make for good pool devices - whether for an SSD-only
pool, or for rpool+zil(s) mirrors, with the main pools (and likely the
L2ARCs, yes) being on different types of devices.

//Jim
On Wed, November 21, 2012 16:06, Jim Klimov wrote:
> On 2012-11-21 21:55, Ian Collins wrote:
>> I can't help thinking these drives would be overkill for an ARC device.
>> All of the expensive controller hardware is geared to boosting random
>> write IOPS, which is somewhat wasted on a write-slowly, read-often
>> device. The enhancements would be good for a ZIL, but the smallest
>> drive is at least an order of magnitude too big...
>
> I think, given the write endurance and power-loss protection, these
> devices might make for good pool devices - whether for an SSD-only
> pool, or for rpool+zil(s) mirrors, with the main pools (and likely
> the L2ARCs, yes) being on different types of devices.

Or partition them. While general best practices encourage using the
whole device for either L2ARC or ZIL, it doesn't always have to be the
case.
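[Editor's note: a hypothetical sketch of the partitioning idea above. Pool name, device names, and slice layout (`tank`, `c1t0d0s0`, etc.) are placeholders, not from the thread; adjust for your platform and mirror the slog so synchronous writes survive a device loss:

```shell
# Carve two S3700s into a small mirrored slog plus L2ARC remainders.
# Slices/partitions must be created beforehand (format(1M) on
# illumos, GPT partitioning elsewhere); sizes here are illustrative.
zpool add tank log mirror c1t0d0s0 c1t1d0s0   # ~20 GB slices as mirrored ZIL
zpool add tank cache c1t0d0s1 c1t1d0s1        # remaining space as L2ARC
```

Note that cache devices are never mirrored - L2ARC contents are disposable and are simply rebuilt from the pool after a device loss.]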