The standard controller that has been recommended in the past is the AOC-SAT2-MV8 - an 8 port card with a Marvell chipset. There have been several mentions of LSI based controllers on the mailing lists and I'm wondering about them. One obvious difference is that the Marvell controller is PCI-X and the LSI controllers are PCI-E.

Supermicro have several LSI controllers: the AOC-USASLP-L8i with the LSI 1068E and the AOC-USASLP-H8iR with the LSI 1078. Are they stable? How does the performance compare to the Marvell? Does hot swap cause problems?

The LSI 1068E has 16MB SRAM onboard cache - I expect this helps performance, but does it cause issues with the ZIL?

The LSI 1078 has 512MB DDR2 onboard cache with a battery backup option. With OpenSolaris will this function as NVRAM when using the battery backup option, allowing "zil disable"? Has anyone tested this? Is it useful at all?

There is also an Intel IOP348 8 port controller with 256MB DDR2 and battery backup. Does this work with OpenSolaris? How about the battery/cache function?

Or is it better to consider SSD caches instead of the above for ZFS?

I'm building a new system and any thoughts on the above would be useful.

Nicholas
Forgot to include links. See below. Thanks.

On Sat, Apr 11, 2009 at 8:35 PM, Nicholas Lee <emptysands at gmail.com> wrote:

> Supermicro have several LSI controllers: the AOC-USASLP-L8i with the LSI 1068E
> and the AOC-USASLP-H8iR with the LSI 1078.

http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
http://www.supermicro.com/products/accessories/addon/AOC-USASLP-H8iR.cfm

> There is also an Intel IOP348 8 port controller with 256MB DDR2 and battery
> backup. Does this work with OpenSolaris? How about the battery/cache
> function?

http://www.supermicro.com/products/accessories/addon/AOC-USASLP-S8i_R.cfm
On Sat, Apr 11, 2009 at 08:35:08PM +1200, Nicholas Lee wrote:

> Supermicro have several LSI controllers: the AOC-USASLP-L8i with the LSI 1068E
> and the AOC-USASLP-H8iR with the LSI 1078.
>
> Are they stable? How does the performance compare to the Marvell? Does
> hot swap cause problems?

I've run both the 1068 and 1078 LSI based cards. They've both been nothing but stable and fast. I haven't tried hotswap yet since it's "not an option" - in both uses I would have had to open the case to get to the drives, which would have caused the machines to power off.

> The LSI 1068E has 16MB SRAM onboard cache - I expect this helps
> performance, but does it cause issues with the ZIL?

It never caused me any issues.

> The LSI 1078 has 512MB DDR2 onboard cache with a battery backup option.
> With OpenSolaris will this function as NVRAM when using the battery
> backup option, allowing "zil disable"? Has anyone tested this? Is it useful
> at all?

I'm running 4 73G 15K RPM SAS disks in two hardware RAID0 arrays that are mirrored by ZFS. I won't ever do 'zil disable' so I can't speak to that, but without any sort of tweaking, I can pull a solid 200MB/sec off of them. I don't know if the cache will kick in if you were to do something like a single disk in a RAID0, or if that's even possible, but one day I hope to get a chance to try that.

> Or is it better to consider SSD caches instead of the above for ZFS?

One day I'll have the money for an SSD or two and I'll let you know. Don't hold your breath. ;)

-brian
--
"Coding in C is like sending a 3 year old to do groceries. You gotta tell them exactly what you want or you'll end up with a cupboard full of pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
Alan Batie
2009-Apr-11 18:28 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
Nicholas Lee wrote:

> The standard controller that has been recommended in the past is the
> AOC-SAT2-MV8 - an 8 port card with a Marvell chipset. There have been several
> mentions of LSI based controllers on the mailing lists and I'm wondering
> about them.

We tried the Marvell controller, and it does not handle hot swapping well; the LSI 1068 does seem to work well (I'm using it on an Asus P5BV/SAS mb), though there is some concern about a higher error rate I've seen elsewhere (and iostat seems to be corroborating, though the question is, is it just reporting what everything else is seeing too? Everything seems to be working normally...).

Watch out for UIO controllers --- I got one and it turns out they are reversed and don't fit in normal chassis.

A friend of mine is using the LSI SAS3801E (http://www.newegg.com/Product/Product.aspx?Item=N82E16816118076) and seems to like it...
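For anyone wanting to look at those error counters themselves, the Solaris iostat error reporting looks roughly like this (a minimal sketch; the flags are standard, but the exact output format depends on the release):

    # Cumulative soft/hard/transport error counts per device since boot
    iostat -En
    # Extended device statistics with error columns, refreshed every 5 seconds
    iostat -xne 5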
Tim
2009-Apr-11 22:23 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Sat, Apr 11, 2009 at 1:28 PM, Alan Batie <alan at batie.org> wrote:

> Nicholas Lee wrote:
> >
> > The standard controller that has been recommended in the past is the
> > AOC-SAT2-MV8 - an 8 port card with a Marvell chipset. There have been several
> > mentions of LSI based controllers on the mailing lists and I'm wondering
> > about them.
>
> We tried the Marvell controller, and it does not handle hot swapping
> well; the LSI 1068 does seem to work well (I'm using it on an Asus
> P5BV/SAS mb), though there is some concern about a higher error rate
> I've seen elsewhere (and iostat seems to be corroborating, though the
> question is, is it just reporting what everything else is seeing too?
> Everything seems to be working normally...). Watch out for UIO
> controllers --- I got one and it turns out they are reversed and don't
> fit in normal chassis. A friend of mine is using the LSI SAS3801E
> (http://www.newegg.com/Product/Product.aspx?Item=N82E16816118076) and
> seems to like it...

UIO fits just fine in a normal chassis, you just have to remove the bracket. It's a standard PCIe card with a backwards bracket. Now if it's got external facing ports you'll be in trouble, but for an internal SAS card, it's really not a big deal.

--Tim
Nicholas Lee
2009-Apr-13 00:32 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Sun, Apr 12, 2009 at 7:24 PM, Miles Nordin <carton at ivy.net> wrote:

> nl> Supermicro have several LSI controllers: the AOC-USASLP-L8i with
> nl> the LSI 1068E
>
> That's what I'm using. It uses the proprietary mpt driver.
>
> nl> and the AOC-USASLP-H8iR with the LSI 1078.
>
> I'm not using this.
>
> nl> How does the performance compare to the Marvell?
>
> don't know, but the proprietary Marvell driver uses the SATA
> framework, and the LSI proprietary driver attaches like a SCSI
> controller without using the SATA framework. If you are trying to burn
> CDs or play DVDs or use 'smartctl' on your hard disks, I'm not sure
> if it will work with LSI.

Disk storage only. I usually use USB cdroms for servers if I need them.

> nl> The LSI 1068E has 16MB SRAM onboard cache - I expect this helps
> nl> performance, but does it cause issues with the ZIL?
>
> no, it is just sillyness. It's just part of the controller/driver,
> not something to worry about.

I guess when you think about it, it is actually smaller (now) than the cache on many HDDs. Probably a waste of space.

> nl> The LSI 1078 has 512MB DDR2 onboard cache with a battery backup
> nl> option.
>
> yeah, without the battery the onboard cache may be a liability rather
> than an asset. You will have to worry if the card is unsafely
> offering to store things in this volatile cache. I'm not sure how it
> works out in practice.

I guess this is my main point of worry about this card.

1. Is the cache only used for RAID modes and not in JBOD mode?
2. If it is used by the controller, is it driver dependent? That is, does it only work if the driver can handle the cache?
3. If the cache does work, what happens if there is a power reset?
   - In the first case, if it is driver independent and simply flushes the cached IO commands to disk on power restart, would that cause corruption with ZFS?
   - In the second case, similar to the first but now dependent on the driver: how stable is the driver? Is corruption a more likely event?
4. In either case the option to turn off the cache might be important.
5. Furthermore, without a battery you might also desire to turn off the cache.

> I think the battery-backed caches are much cheaper than a SSD slog,
> and the bandwidth to the cache is much higher than bandwidth to a
> single SATA port too. I don't like it, though, because data collects
> inside the cache which I can't get out. OTOH, slog plus data disks I
> can easily move from one machine to another while diagnosing a
> problem, if i suspect a motherboard or the LSI card itself is bad, for
> example.

I agree with your points. Even though an iRAM device seems like a hack, without good information about the stability of controller based cache they seem like the more portable solution.

> nl> using the battery backup option, allowing "zil disable"?
>
> please reread the best practices. I think you're confusing two
> different options and planning to do something unsafe.

Sorry, I meant zfs_nocacheflush - which should only be used when NVRAM or a secure power supply is available.

> t> UIO fits just fine in a normal chassis, you just have to
> t> remove the bracket. [...] it's really not a big deal.
>
> +1, that supermicro card, Nicholas, is UIO rather than PCIe, and it
> does work for me in a plain PCIe slot with the bracket removed. so long
> as you are not moving around the machine too much, I agree it's not a
> big deal.

I plan to use supermicro chassis in a rack - so it will be both a m/b designed for UIO and in a stable location. Should be fine.
Nicholas
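For reference, the options being distinguished in this exchange look roughly like this on OpenSolaris of that era (a minimal sketch; the pool and device names are placeholders):

    # A dedicated slog takes the sync-write (ZIL) traffic off the data disks:
    zpool add tank log c3t0d0

    # zfs_nocacheflush stops ZFS issuing cache-flush commands to the devices.
    # Only reasonable when every device has non-volatile (battery-backed) cache:
    echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system    # applies after reboot

    # zil_disable (the "zil disable" mentioned earlier) turns the ZIL off
    # entirely and is unsafe for anything relying on synchronous write semantics.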
Nicholas Lee
2009-Apr-13 05:02 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Mon, Apr 13, 2009 at 3:27 PM, Miles Nordin <carton at ivy.net> wrote:

> >>>>> "nl" == Nicholas Lee <emptysands at gmail.com> writes:
>
> nl> 1. Is the cache only used for RAID modes and not in JBOD
> nl> mode?
>
> well, there are different LSI cards and firmwares and drivers, but:
>
>   The X4150 SAS RAID controllers will use the on-board battery backed cache
>   even when disks are presented as individual LUNs.
>     -- "Aaron Blew" <aaronblew at gmail.com>
>        Wed, 3 Sep 2008 15:29:29 -0700
>
>   We're using an Infortrend SATA/SCSI disk array with individual LUNs, but
>   it still uses the disk cache.
>     -- Tomas Ögren <stric at acc.umu.se>
>        Thu, 4 Sep 2008 10:20:30 +0200
>
> nl> 2. If it is used by the controller, is it driver
> nl> dependent? That is, does it only work if the driver can handle the cache?
>
> driver is proprietary. :) no way to know.
>
> nl> 3. If the cache does work, what happens if there is a power
> nl> reset?
>
> Obviously it is supposed to handle this. But, yeah, as you said,
> _when_ is the battery-backed cache flushed? At boot during the BIOS
> probe? What if you're using SPARC and don't do a BIOS probe? by the
> driver? When the ``card's firmware boots?'' How can you tell if the
> cache has got stuff in it or not? What if you're doing maintenance
> like replacing disks---something not unlikely to coincide with unclean
> shutdowns. Will this confuse it?

I didn't think about this scenario. ZFS handles so much of what once would have been done in hardware and by drivers. While this is good, it leaves a huge grey area where it is hard for those of us on the front line to make decisions about the best choices.

> The driver and the ``firmware'' is all proprietary, so there's no way
> to look into the matter yourself other than exhaustive testing, and
> there's no vendor standing squarely behind the overall system like
> there is with an external array.
>
> but...it's so extremely cheap and fast that I think there's a huge

That's the big point. 10,000 USD for a 2U 12 disk 10TB raw NAS or 100,000 USD for the equivalent appliance.

> segment of market, the segment which cares about being extremely cheap
> and fast, that uses this stuff as a matter of course. I guess these
> are the guys who were supposed to start using ZFS but for now I guess
> the hardware cache is still faster for ``hardware'' raid-on-a-card.
>
> I think the ideal device would have a fully open-source driver stack,
> and a light on the SSD slog, or battery+RAM, or supercap+RAM+CF, to
> indicate if it's empty or not. If it's missing and not empty then the
> pool will always refuse to auto-import but always import if
> ``forced'', and if it's missing and empty then the pool will sometimes
> auto-import (ex., always if there was a clean shutdown and sometimes
> if there wasn't), and if forced to import when the light's out the
> pool will be fsync-consistent. Currently we're short of the ideal
> even using the ZFS-style slog, but AIUI you can get closer if you make
> a backup of your empty slog right after you attach it and stash the
> .dd.gz file somewhere outside the pool---you can force the import of a
> pool with a dirty, missing slog by substituting an old empty slog with
> the right label on it. However, still closed driver, still nothing
> with fancy lights on it. :)

The only issue I have with slog-type devices at the moment is that they are not removable and thus not easily replaceable. Seems if you want a production system using slogs then you must mirror them - otherwise if the slog is corrupted you can only revert to a backup.

> nl> iRAM device seems like a hack,
>
> There's also the ACARD device:
>
>   acard ANS-9010B                 $250
>    plus 8GB RAM                    $86
>    plus 16GB CF                    $44
>
> It's also got a battery but can dump/restore the RAM to a CF card.
> It's physically larger and not cheaper nor faster than the Intel X25-E but
> at least it doesn't have the fragmentation problems to worry about.
> I've not tested it myself. Someone on the list tested it, but IIRC he
> did not use it as a slog, nor comment on how the CF dumping feature
> works (it sounds kind of sketchy. ``buttons'' are involved, which to
> me sounds very bad).

I've seen these before, but dismissed them as they are 5.25" units, which is tricky in rack systems that generally only cater for 3.5". I wonder if it is possible to pull these apart and put them in a smaller case.

Has anyone done any specific testing with SSD devices and Solaris other than the FISHWORKS stuff? Which is better for what - SLC and MLC?

Nicholas
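The empty-slog backup trick Miles describes might look something like this (a rough sketch only; the device name, slice, and paths are placeholders, and it assumes the log device was just added and is still empty):

    # Right after attaching the (still empty) log device...
    zpool add tank log c3t0d0
    # ...save an image of it somewhere outside the pool:
    dd if=/dev/rdsk/c3t0d0s0 bs=1024k | gzip > /net/backup-host/empty-slog.dd.gz
    # If the slog later dies and the pool refuses to import, writing this image
    # onto a same-sized replacement device provides a labelled, empty slog that
    # can stand in for the missing one during a forced import.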
Miles Nordin
2009-Apr-13 17:05 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
>>>>> "nl" == Nicholas Lee <emptysands at gmail.com> writes:nl> zfs handles so much of what once would have been done in nl> hardware and by drivers. While this is good, it is leaving nl> this huge grey area where it is hard for those of us on the nl> front line well that''s not what I meant though. The battery RAM cache''s behavior can''t be determined by RTFS whether you use ZFS or not, and the behavior matters to both ZFS users and non ZFS users. The advantage I saw to ZFS slogs, is that you can inspect the source (and blogs about the source) to determine lots of details about how slogs behave including answers to the questions above. Better yet, with some skill you can change the answers to suit yourself. This process leads to some mix of good behavior and well-understood errata, while the other process leads to frustration, war stories, and cargo-cult maintenance ``procedures''''. >> it''s so extremely cheap nl> That;s the big point. 10,000 USD for a 2U 12 disk 10TB raw nl> NAS or 100,000 USD for the equalivent appliance. here I was talking again about battery-RAM RAID-on-a-card vs. slog. The battery-RAM is maybe a little bit less extra cost than X25E or ACARD, like $250 instead of $500, plus it doesn''t consume a drive slot. This amount of cost edge matters to people with large clusters, or to the rented-hardware hosting business. If both firmware and driver for the LSI 1078 cards were open source, I bet we could turn the RAM on the LSI cards into a slog. This would be better than a SATA slog because it''d have full PCIe bandwidth and low-latency to the RAM instead of just SATA. but it''s all proprietary, so I''m going with the slog. and keeping an eye on FreeBSD, since maybe if they manage to get ZFS working well in 8.0 I can finally get ZFS on top of proper drivers for the disk controller card and SCSI mid-layer. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090413/b78efd25/attachment.bin>
Will Murnane
2009-Apr-13 17:57 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Mon, Apr 13, 2009 at 01:02, Nicholas Lee <emptysands at gmail.com> wrote:

> > There's also the ACARD device:
> >
> >   acard ANS-9010B                 $250
> >    plus 8GB RAM                    $86
> >    plus 16GB CF                    $44
> >
> > It's also got a battery but can dump/restore the RAM to a CF card.
> > It's physically larger and not cheaper nor faster than the Intel X25-E but
> > at least it doesn't have the fragmentation problems to worry about.
> > I've not tested it myself. Someone on the list tested it, but IIRC he
> > did not use it as a slog, nor comment on how the CF dumping feature
> > works (it sounds kind of sketchy. ``buttons'' are involved, which to
> > me sounds very bad).
>
> I've seen these before, but dismissed them as they are 5.25" units, which is
> tricky in rack systems that generally only cater for 3.5". I wonder if
> it is possible to pull these apart and put them in a smaller case.

Speaking for the ACARD unit only, no. The circuit board occupies the whole area of the 5.25" bay, and the memory is standing straight up, which makes the thing necessarily taller than a 3.5" drive. Perhaps with the help of an EE you could work around these limitations... but at that point you might as well design your own device and avoid the limitations of the ACARD design. It only does 200 MB/s or so on one SATA port, and to my mind that's inexcusable for a memory-based product.

> Has anyone done any specific testing with SSD devices and Solaris other than
> the FISHWORKS stuff? Which is better for what - SLC and MLC?

My impression is that the flash controllers make a much bigger difference than the type of flash inside. You should take a look at AnandTech's review of the new OCZ Vertex drives [1], which has a fairly comprehensive set of benchmarks. I don't think any of the products they review are really optimal choices, though; the Intel X25-E drives look good until you see the price tag, and even they only do 30-odd MB/s random writes.

Will

[1]: http://anandtech.com/storage/showdoc.aspx?i=3531&p=21
Nicholas Lee
2009-Apr-15 12:20 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
2009/4/14 Miles Nordin <carton at ivy.net>:

> well that's not what I meant though. The battery RAM cache's behavior
> can't be determined by RTFS whether you use ZFS or not, and the
> behavior matters to both ZFS users and non ZFS users. The advantage I
> saw to ZFS slogs is that you can inspect the source (and blogs about
> the source) to determine lots of details about how slogs behave,
> including answers to the questions above. Better yet, with some skill
> you can change the answers to suit yourself. This process leads to
> some mix of good behavior and well-understood errata, while the other
> process leads to frustration, war stories, and cargo-cult maintenance
> ``procedures''.

This is a good point. Performance vs reliability is the constant struggle, but if you don't have the right information you just have to go with what you know.

>     >> it's so extremely cheap
>
>     nl> That's the big point. 10,000 USD for a 2U 12 disk 10TB raw
>     nl> NAS or 100,000 USD for the equivalent appliance.
>
> here I was talking again about battery-RAM RAID-on-a-card vs. slog.
> The battery-RAM is maybe a little bit less extra cost than X25-E or
> ACARD, like $250 instead of $500, plus it doesn't consume a drive
> slot. This amount of cost edge matters to people with large clusters,
> or to the rented-hardware hosting business.

Problem is the ACARD units, while maybe cheap, are not useful - the 5.25" form factor is no good in any rack based storage system.

> If both firmware and driver for the LSI 1078 cards were open source, I
> bet we could turn the RAM on the LSI cards into a slog. This would be
> better than a SATA slog because it'd have full PCIe bandwidth and
> low latency to the RAM instead of just SATA. but it's all
> proprietary, so I'm going with the slog. and keeping an eye on
> FreeBSD, since maybe if they manage to get ZFS working well in 8.0 I
> can finally get ZFS on top of proper drivers for the disk controller
> card and SCSI mid-layer.

As you say, without detail a good slog is really the only option.

Does anyone know if the Fusion-io cards have Solaris drivers yet?

Nicholas
Nicholas Lee
2009-Apr-15 12:28 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane <will.murnane at gmail.com> wrote:

> > Has anyone done any specific testing with SSD devices and Solaris other than
> > the FISHWORKS stuff? Which is better for what - SLC and MLC?
>
> My impression is that the flash controllers make a much bigger
> difference than the type of flash inside. You should take a look at
> AnandTech's review of the new OCZ Vertex drives [1], which has a
> fairly comprehensive set of benchmarks. I don't think any of the
> products they review are really optimal choices, though; the Intel
> X25-E drives look good until you see the price tag, and even they only
> do 30-odd MB/s random writes.

Couple of excellent articles about SSDs from AnandTech last month:
http://www.anandtech.com/showdoc.aspx?i=3532 - SSD versus Enterprise SAS and SATA disks (20/3/09)
http://www.anandtech.com/storage/showdoc.aspx?i=3531 - The SSD Anthology: Understanding SSDs and New Drives from OCZ (18/3/09)

And it looks like the Intel fragmentation issue is fixed as well: http://techreport.com/discussions.x/16739

It's a shame the Sun Writezilla devices are almost 10k USD - they seem to be the only units "on the market" that (apart from cost) work "well all around" as slog devices - form factor, interface/drivers and performance.

How much of an issue is the random write bandwidth limit for a slog device? What about latency? I would have thought the write traffic pattern for slog io was more sequential and bursty.

Nicholas
Blake Irvin
2009-Apr-15 13:29 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Apr 15, 2009, at 8:28 AM, Nicholas Lee <emptysands at gmail.com> wrote:

> On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane <will.murnane at gmail.com> wrote:
>
> > > Has anyone done any specific testing with SSD devices and Solaris
> > > other than the FISHWORKS stuff? Which is better for what - SLC and MLC?
> >
> > My impression is that the flash controllers make a much bigger
> > difference than the type of flash inside. You should take a look at
> > AnandTech's review of the new OCZ Vertex drives [1], which has a
> > fairly comprehensive set of benchmarks. I don't think any of the
> > products they review are really optimal choices, though; the Intel
> > X25-E drives look good until you see the price tag, and even they only
> > do 30-odd MB/s random writes.
>
> Couple of excellent articles about SSDs from AnandTech last month:
> http://www.anandtech.com/showdoc.aspx?i=3532 - SSD versus Enterprise SAS and SATA disks (20/3/09)
> http://www.anandtech.com/storage/showdoc.aspx?i=3531 - The SSD Anthology: Understanding SSDs and New Drives from OCZ (18/3/09)
>
> And it looks like the Intel fragmentation issue is fixed as well: http://techreport.com/discussions.x/16739
>
> It's a shame the Sun Writezilla devices are almost 10k USD - they seem to
> be the only units "on the market" that (apart from cost) work "well all
> around" as slog devices - form factor, interface/drivers and performance.

What about the new flash drives Andy was showing off in Vegas? Those looked small (capacity) - perhaps "cheap" too?

> How much of an issue is the random write bandwidth limit for a slog
> device? What about latency? I would have thought the write traffic
> pattern for slog io was more sequential and bursty.
>
> Nicholas
Richard Elling
2009-Apr-15 15:15 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
Nicholas Lee wrote:

> On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane <will.murnane at gmail.com> wrote:
>
> > > Has anyone done any specific testing with SSD devices and Solaris
> > > other than the FISHWORKS stuff? Which is better for what - SLC and MLC?
> >
> > My impression is that the flash controllers make a much bigger
> > difference than the type of flash inside. You should take a look at
> > AnandTech's review of the new OCZ Vertex drives [1], which has a
> > fairly comprehensive set of benchmarks. I don't think any of the
> > products they review are really optimal choices, though; the Intel
> > X25-E drives look good until you see the price tag, and even they only
> > do 30-odd MB/s random writes.
>
> Couple of excellent articles about SSDs from AnandTech last month:
> http://www.anandtech.com/showdoc.aspx?i=3532 - SSD versus Enterprise SAS and SATA disks (20/3/09)
> http://www.anandtech.com/storage/showdoc.aspx?i=3531 - The SSD Anthology: Understanding SSDs and New Drives from OCZ (18/3/09)
>
> And it looks like the Intel fragmentation issue is fixed as well: http://techreport.com/discussions.x/16739

FYI, Intel recently had a new firmware release. IMHO, odds are that this will be as common as HDD firmware releases, at least for the next few years.
http://news.cnet.com/8301-13924_3-10218245-64.html?tag=mncol

> It's a shame the Sun Writezilla devices are almost 10k USD - they seem to
> be the only units "on the market" that (apart from cost) work "well all
> around" as slog devices - form factor, interface/drivers and performance.

Sun OEMs all disks, including SSDs.

> How much of an issue is the random write bandwidth limit for a slog
> device? What about latency? I would have thought the write traffic
> pattern for slog io was more sequential and bursty.

Bandwidth? Almost none. It is a latency play and pretty much any modern system has enough bandwidth. As the old saying goes, money can buy bandwidth, latency requires bribing god.
-- richard
Greg Mason
2009-Apr-15 15:32 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
> > And it looks like the Intel fragmentation issue is fixed as well:
> > http://techreport.com/discussions.x/16739
>
> FYI, Intel recently had a new firmware release. IMHO, odds are that
> this will be as common as HDD firmware releases, at least for the
> next few years.
> http://news.cnet.com/8301-13924_3-10218245-64.html?tag=mncol

It should also be noted that the Intel X25-M != the Intel X25-E. The X25-E hasn't had any of the performance and fragmentation issues.

The X25-E is an SLC SSD, the X25-M is an MLC SSD, hence the more complex firmware.

-Greg
Nicholas Lee
2009-Apr-15 22:39 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Thu, Apr 16, 2009 at 3:32 AM, Greg Mason <gmason at msu.edu> wrote:

> > > And it looks like the Intel fragmentation issue is fixed as well:
> > > http://techreport.com/discussions.x/16739
> >
> > FYI, Intel recently had a new firmware release. IMHO, odds are that
> > this will be as common as HDD firmware releases, at least for the
> > next few years.
> > http://news.cnet.com/8301-13924_3-10218245-64.html?tag=mncol
>
> It should also be noted that the Intel X25-M != the Intel X25-E. The X25-E
> hasn't had any of the performance and fragmentation issues.
>
> The X25-E is an SLC SSD, the X25-M is an MLC SSD, hence the more complex
> firmware.

Yeah, that's what I understood. I assume, based on what I've read and the prices on the Sun site (11k NZD for the 100GB Readzilla, 17.5k NZD for the 18GB Logzilla), that the Readzilla is an MLC device (needs more space) and the Logzilla is an SLC device (latency and write speed requirements).

Given latency is the biggest requirement, I'm wondering whether it would matter between the X25-E and X25-M for a slog. From the reviews I've read the latency seems to be pretty similar between the units. Of course size is not as important for a slog, and given the price for the 80GB -M and the 30GB -E is very similar it probably would be better to get the "enterprise" -E.

Another question I'm considering is the reliability of SSD units and the slog. Should a pair of X25-E be mirrored, or, given that the pool can be booted without a slog device, is it better to stripe the slog? Which leads into a second question about slogs - according to [1], as the stripe size increases for a pool the effectiveness of an SSD slog reduces. Is this still the case?

[1] http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on

Nicholas
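For reference, the two slog layouts being weighed look roughly like this (a sketch only; the pool and device names are placeholders):

    # Mirrored slog: survives the loss of one log device
    zpool add tank log mirror c3t0d0 c3t1d0

    # Striped (unmirrored) slogs: more log bandwidth, but a failed log device
    # risks any sync writes not yet committed to the main pool
    zpool add tank log c3t0d0 c3t1d0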
Richard Elling
2009-Apr-15 23:28 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
Nicholas Lee wrote:

> On Thu, Apr 16, 2009 at 3:32 AM, Greg Mason <gmason at msu.edu> wrote:
>
> > > > And it looks like the Intel fragmentation issue is fixed as well:
> > > > http://techreport.com/discussions.x/16739
> > >
> > > FYI, Intel recently had a new firmware release. IMHO, odds are that
> > > this will be as common as HDD firmware releases, at least for the
> > > next few years.
> > > http://news.cnet.com/8301-13924_3-10218245-64.html?tag=mncol
> >
> > It should also be noted that the Intel X25-M != the Intel X25-E.
> > The X25-E hasn't had any of the performance and fragmentation issues.
> >
> > The X25-E is an SLC SSD, the X25-M is an MLC SSD, hence the more
> > complex firmware.
>
> Yeah, that's what I understood. I assume, based on what I've read and
> the prices on the Sun site (11k NZD for the 100GB Readzilla, 17.5k NZD
> for the 18GB Logzilla), that the Readzilla is an MLC device (needs more
> space) and the Logzilla is an SLC device (latency and write speed
> requirements).
>
> Given latency is the biggest requirement, I'm wondering whether it would
> matter between the X25-E and X25-M for a slog. From the reviews I've
> read the latency seems to be pretty similar between the units. Of
> course size is not as important for a slog, and given the price for
> the 80GB -M and the 30GB -E is very similar it probably would be
> better to get the "enterprise" -E.

As for space, 18GBytes is much, much larger than 99.9+% of workloads require for slog space. Most measurements I've seen indicate that 100 MBytes will be quite satisfactory for most folks. Unfortunately, there is no market for small disk drives -- HDD or SSD.

> Another question I'm considering is the reliability of SSD units and the
> slog. Should a pair of X25-E be mirrored, or, given that the pool can
> be booted without a slog device, is it better to stripe the slog?
> Which leads into a second question about slogs - according to [1], as
> the stripe size increases for a pool the effectiveness of an SSD slog
> reduces. Is this still the case?

If you are paranoid, then mirror. For most failure modes, it will be ok to not mirror.
-- richard
Nicholas Lee
2009-Apr-16 00:11 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Thu, Apr 16, 2009 at 11:28 AM, Richard Elling <richard.elling at gmail.com> wrote:

> As for space, 18GBytes is much, much larger than 99.9+% of workloads
> require for slog space. Most measurements I've seen indicate that 100 MBytes
> will be quite satisfactory for most folks. Unfortunately, there is no market
> for small disk drives -- HDD or SSD.

Let me see if I understand this: an SSD slog can handle, say, 5000 (4k) transactions in a sec (20M/s) vs maybe 300 (4k) iops for a single HDD. The slog can then batch and dump, say, 30s worth of transactions - 600M as one sequential write - in about 15s (at 40M/s) for a single HDD. So maybe at most 2GB is needed for 60s worth of traffic.

And this is the big win for sync NFS - 0.2ms latency vs 3+ms latency. For 4k NFS blocks at a 3ms latency difference, that is 10,000/4 * 3/1000 = 7.5 sec of extra latency for a 10M file, ignoring all other latency issues.

Not sure what the iops for an SSD is at 32k. But assuming 3000 iops at 32k, that means 96M/s, or 5GB for 60s. So for a 10MB file the latency difference at 32k is only 1 sec.

Nicholas
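A back-of-the-envelope version of that sizing estimate, as a sketch (the figures are simply the assumptions in the post above, not measurements):

    # Rough slog sizing using the numbers assumed above
    iops=5000          # sync 4k writes the slog absorbs per second
    blk=4096           # bytes per write
    window=60          # seconds of traffic held before the pool disks catch up
    bytes=$((iops * blk * window))
    echo "worst case: $((bytes / 1024 / 1024)) MB of slog"   # ~1200 MB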
Nicholas Lee
2009-Apr-16 00:21 UTC
[zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?
On Thu, Apr 16, 2009 at 12:11 PM, Nicholas Lee <emptysands at gmail.com> wrote:

> Let me see if I understand this: an SSD slog can handle, say, 5000 (4k)
> transactions in a sec (20M/s) vs maybe 300 (4k) iops for a single HDD. The
> slog can then batch and dump, say, 30s worth of transactions - 600M as
> one sequential write - in about 15s (at 40M/s) for a single HDD. So maybe
> at most 2GB is needed for 60s worth of traffic.
>
> And this is the big win for sync NFS - 0.2ms latency vs 3+ms latency. For 4k
> NFS blocks at a 3ms latency difference, that is 10,000/4 * 3/1000 = 7.5 sec
> of extra latency for a 10M file, ignoring all other latency issues.
>
> Not sure what the iops for an SSD is at 32k. But assuming 3000 iops at 32k,
> that means 96M/s, or 5GB for 60s. So for a 10MB file the latency difference
> at 32k is only 1 sec.

Actually my numbers might be a little out:
http://www.anandtech.com/cpuchipsets/Intel/showdoc.aspx?i=3403&p=8

For the X25-M (the -E is probably better - the X25-M is internally rate-limited to 80M/s), the random iops were:

 - At 4k: .089ms, or 11.2k iops = 45M/s
 - At 16k: .23ms, or 4.3k iops = 69M/s
 - At 32k: .44ms, or 2.3k iops = 74M/s
 - At 64k: .84ms, or 1.2k iops = 77M/s
 - At 128k: 1.73ms, or .6k iops = 77M/s

They use a Momentus 7200.2 at 100 iops (regardless of block size); a better drive should get 300 iops.

Nicholas