Hello,

Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?

For example, an STK2540 storage array with this configuration:

    * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
    * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.

I was thinking about using the disks from Tray 1 as L2ARC for Tray 2 and putting all of these disks in one (1) ZFS storage pool.

This pool would be used mainly as an astronomical image repository, shared via NFS from a Sun Fire X2200.

Is it worth doing?

Thanks in advance for any help.

Regards,
Roger

-- 
Roger Solano
Solutions Architect, ACC Region - Venezuela
Sun Microsystems, Inc.
Phone: +58-212-905-3800
Fax: +58-212-905-3811
Email: Roger.Solano at Sun.COM
Hello Roger,

Tuesday, May 5, 2009, 9:07:22 PM, you wrote:

> Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache
> for several TBs of SATA disks? For example, an STK2540 storage array
> with this configuration: Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
> Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs. I was thinking about using
> the disks from Tray 1 as L2ARC for Tray 2 and putting all of these
> disks in one (1) ZFS storage pool. This pool would be used mainly as an
> astronomical image repository, shared via NFS from a Sun Fire X2200.
> Is it worth doing?

I guess your files are astronomically big :)

But seriously - if your files are large and you expect to access them sequentially in large chunks, then the SATA drives can actually deliver the same performance as your 15K SAS disks, since you will be throughput bound rather than seek bound. If that is the case, then setting those 15K drives aside as L2ARC probably doesn't make sense.

If you expect lots of writes and small random reads, and a relatively large part of the working set will fit into the L2ARC, then it might make sense.

It also depends on how you configure your SATA drives - for example, you will usually get more benefit from L2ARC if your pool is raid-z[2] rather than raid-10, except in the case of a single-stream, large-I/O sequential read workload.

In summary - it all depends on your workload...

-- 
Best regards,
Robert Milkowski
http://milek.blogspot.com
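One quick way to tell which of these regimes applies is to watch the disks under the real workload with iostat - a sketch, with an arbitrary 5-second interval:

    iostat -xn 5
    # large kr/s relative to r/s (big average transfers) with high %b
    #   -> throughput bound: 15K SAS as L2ARC buys you little
    # many small reads with high service times (asvc_t)
    #   -> seek bound: L2ARC may help, if the working set fits on it

Run it while the real astronomical-image workload is active; an idle or synthetic test will not tell you much.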
> use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?

Perhaps... it depends on the workload, and on whether the working set can live on the L2ARC.

> used mainly as astronomical images repository

Hmm - perhaps two trays of 1 TB SATA drives, all mirrors, rather than raidz sets of one tray (see the sketch below). I.e., please don't discount how one arranges the vdevs in a given configuration.

Rob
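A rough sketch of that "all mirrors" layout, pairing one disk from each tray; the pool and device names are placeholders, not a real configuration:

    # 12 two-way mirror vdevs instead of one or two wide raidz stripes
    zpool create tank \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0
    # ...and so on for the remaining nine pairs

A pool of mirrors gives up capacity, but usually wins on random-read IOPS and resilver time compared to wide raidz vdevs.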
Roger Solano wrote:
> Does it make any sense to use a bunch of 15K SAS drives as L2ARC
> cache for several TBs of SATA disks?
>
> For example:
>
> A STK2540 storage array with this configuration:
>
>     * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.

Alternatively, you can purchase non-Sun 500 GByte read-optimized SSDs.

OTOH, if the disks are already in place, then you can experiment with L2ARC devices and, if you don't like the results, "zpool remove" them from the pool.
 -- richard
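That experiment-and-back-out cycle might look like this; the pool and device names are hypothetical:

    # add two of the 15K SAS disks as L2ARC (cache) devices
    zpool add tank cache c1t0d0 c1t1d0
    # watch the hit rate for a while; if it isn't helping:
    zpool remove tank c1t0d0 c1t1d0

Cache devices are one of the few vdev types that can be removed from a live pool, which is what makes this a low-risk experiment.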
Roger Solano wrote:
> Does it make any sense to use a bunch of 15K SAS drives as L2ARC
> cache for several TBs of SATA disks?
>
> For example:
>
> A STK2540 storage array with this configuration:
>
>     * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
>     * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.

Just thought I would point out that these are hardware-backed RAID arrays. You might be better off using the J4200 instead, so that ZFS can manage the disks completely as well. It will probably be cheaper too! The savings could be put towards some SSDs or more system RAM for the L1ARC.
On Thu, 7 May 2009, Scott Lawson wrote:
>> A STK2540 storage array with this configuration:
>>
>>     * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
>>     * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
>
> Just thought I would point out that these are hardware-backed RAID
> arrays. You might be better off using the J4200 instead, so that ZFS
> can manage the disks completely as well. It will probably be cheaper
> too! The savings could be put towards some SSDs or more system RAM
> for the L1ARC.

Something nice about the STK2540 solution is that if the server system dies, the STK2540s can quickly be swung over to another system via a quick 'zpool import'. If SSDs are embedded inside the server system, then it is necessary to physically move the log devices to the new system.

The issue of how to quickly recover after the server dies seems to rarely be discussed here. Embedded log devices tend to make these issues more complex.

A dumb SAS array is certainly much cheaper and will perform at least as well, but it does seem like these newfangled embedded log devices cause an issue when maximum availability is desired. With SAS it is necessary to physically swing the cables to the replacement server, and of course the replacement server needs to be very close by.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
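A minimal sketch of that swing-over, with a hypothetical pool name; the -f is needed because the dead server never got a chance to export the pool:

    # on the replacement server, after moving the FC connections over:
    zpool import          # lists the pools visible on the attached storage
    zpool import -f tank  # force-import, since the failed host didn't export it

Once the pool is imported, re-share the filesystems over NFS and the clients can reconnect.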
Bob Friesenhahn wrote:
> On Thu, 7 May 2009, Scott Lawson wrote:
>> Just thought I would point out that these are hardware-backed RAID
>> arrays. You might be better off using the J4200 instead, so that ZFS
>> can manage the disks completely as well. It will probably be cheaper
>> too! The savings could be put towards some SSDs or more system RAM
>> for the L1ARC.
>
> Something nice about the STK2540 solution is that if the server system
> dies, the STK2540s can quickly be swung over to another system via a
> quick 'zpool import'.

Sure, provided they have it attached to a fibre channel switch or have a nice long fibre lead. The difference is negligible other than cost. Roger replied off-list and mentioned the customer has the 2540 already, so my suggestion is moot for him anyway.

FYI, I have relocated zpools both ways: with SAN-attached 3510/11s and 6140s, and with SAS-attached J4500s. Both ways work just fine. One is cheaper. ;)

Since he mentioned astronomical data, which I know means large datasets, I just thought I would point out that the 2540 probably wouldn't offer the best bang for the buck for this NFS server. That's all.

> If SSDs are embedded inside the server system, then it is necessary to
> physically move the log devices to the new system.

It is possible to buy these J-series JBODs with bundled SSDs as well right now. The log device would be contained in that chassis, which would facilitate easy importing and exporting in the case of a system shift being required.

> The issue of how to quickly recover after the server dies seems to
> rarely be discussed here. Embedded log devices tend to make these
> issues more complex.
>
> A dumb SAS array is certainly much cheaper and will perform at least
> as well, but it does seem like these newfangled embedded log devices
> cause an issue when maximum availability is desired. With SAS it is
> necessary to physically swing the cables to the replacement server,
> and of course the replacement server needs to be very close by.
On Thu, 7 May 2009, Scott Lawson wrote:
>> Something nice about the STK2540 solution is that if the server system
>> dies, the STK2540s can quickly be swung over to another system via a
>> quick 'zpool import'.
> Sure, provided they have it attached to a fibre channel switch or
> have a nice long fibre lead. The difference is negligible

The 2540 has four fiber connections, so two hosts can have directly connected duplex fiber. Of course, only one host can be allowed to use the zfs pool at a time. Provided that the failed server is really down, it should take only seconds to import the pool on the backup server. Ideally a mobile IP address is pre-arranged to allow the NFS service to be moved at the same time.

> It is possible to buy these J-series JBODs with bundled SSDs as well
> right now. The log device would be contained in that chassis, which
> would facilitate easy importing and exporting in the case of a system
> shift being required.

Embedded SSDs in the JBOD array would surely help with this problem. The "DIMM" solution we heard about recently on this list would pose a problem, since the "DIMM" device would need to be moved and the secondary server would need to have a slot for it.

After all this discussion, I am not sure anyone has adequately answered the original poster's question as to whether a 2540 with 15K SAS drives would provide a substantial synchronous write throughput improvement when used as an L2ARC device.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
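A sketch of the mobile-IP half of that failover on Solaris, with a hypothetical interface name and address; the service address rides on a logical interface that the backup server plumbs up after importing the pool:

    # on the backup server, after 'zpool import -f tank':
    ifconfig e1000g0 addif 192.168.10.50 netmask 255.255.255.0 up
    zfs share -a    # re-share all filesystems with the sharenfs property set

Clients keep mounting the share at 192.168.10.50 and never need to learn the backup server's own address.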
On May 6, 2009, at 20:46, Bob Friesenhahn wrote:

> After all this discussion, I am not sure anyone has adequately
> answered the original poster's question as to whether a 2540 with
> 15K SAS drives would provide a substantial synchronous write
> throughput improvement when used as an L2ARC device.

I was under the impression that the L2ARC is there to speed up reads, as it allows things to be cached on something faster than disks (usually MLC SSDs). Offloading the ZIL is what handles synchronous writes, isn't it?

How would adding an L2ARC speed up writes?
>> After all this discussion, I am not sure anyone has adequately
>> answered the original poster's question as to whether a 2540 with
>> 15K SAS drives would provide a substantial synchronous write
>> throughput improvement when used as an L2ARC device.
>
> I was under the impression that the L2ARC is there to speed up reads,
> as it allows things to be cached on something faster than disks
> (usually MLC SSDs). Offloading the ZIL is what handles synchronous
> writes, isn't it?
>
> How would adding an L2ARC speed up writes?

You're absolutely right. The L2ARC is for accelerating reads only and will not affect write performance.

Adam

-- 
Adam Leventhal, Fishworks            http://blogs.sun.com/ahl
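To make the distinction concrete: the device that does help synchronous writes is a separate intent log (slog), which is attached as a "log" vdev rather than a "cache" vdev. A sketch, with hypothetical pool and device names:

    zpool add tank log c1t0d0      # slog: absorbs synchronous writes (ZIL)
    zpool add tank cache c1t1d0    # L2ARC: absorbs random reads

One caution: unlike cache devices, a log vdev could not be removed from a pool once added in the ZFS versions of this era, so the slog device was worth choosing carefully.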
On May 7, 2009, at 04:03, Adam Leventhal wrote:

>> How would adding an L2ARC speed up writes?
>
> You're absolutely right. The L2ARC is for accelerating reads only and
> will not affect write performance.

With the small caveat that if the bulk of your read traffic is being served by the L2ARC, there is much less contention for access to the slower physical disks, which frees them up for write activity. That is not a speed increase in the technical sense, over and above the capabilities of the disks, but it should have an impact on real-world I/O activity.

Cheers,
Erik
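One way to see whether that offload is actually happening is to watch the L2ARC counters in the arcstats kstat - a sketch, assuming the pool already has cache devices attached:

    kstat -p zfs:0:arcstats | grep ':l2_'
    # l2_hits vs. l2_misses shows how many reads the cache devices absorbed;
    # l2_size shows how much data currently lives on the L2ARC

If l2_hits climbs while the per-disk read load in 'iostat -xn' drops, the cache is doing exactly what Erik describes.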