Ellis, Mike
2007-May-30 04:31 UTC
[zfs-discuss] Re: ZFS - Use h/w raid or not?Thoughts.Considerations.
Hey Richard, thanks for sparking the conversation... This is a very interesting topic.... (especially if you take it out of the HPC "we need 1000 servers to have this minimal boot image" space into general purpose/enterprise computing)

Based on your earlier note, it appears you're not planning to use cheapo "free after rebate" CF cards :-) (The cheap ones would probably be perfect for ZFS a-la cheap-o-JBOD).

Having boot disks mirrored across controllers has helped sys-admins sleep better over the years (especially in FC-loop cases with both drives on the same loop... Sigh....). If the USB bus one might hang these fancy CF-cards on is robust enough, then perhaps a single "battle hardened" CF-card will suffice... (although zfs ditto-blocks or some form of protection might still be considered a good thing?)....

Having 2 cards would certainly make the "unlikely replacement" of a card a LOT more straight-forward than a single-card failure... Much of this would depend on the quality of these CF-cards and how they hold up under load/stress/time....

--

If we're going down this CF-boot path, many of us are going to have to re-think our boot environment quite a bit. We've been "spoiled" with 36+ GB mirrored boot drives for some time now.... (if you do a lot of PATCHING, you'll find that even those can get tight.... But that's a discussion for a different day)

I don't think most enterprise "boot disk layouts" are going to fit (even unmirrored) onto a single 4GB CF-card. So we'll have to play some games where we start splitting off /opt, /var (which is fairly read-write intensive when you have process accounting etc. running) onto some "other" non-CF filesystem.... (likely a SAN of some variety). At some point the hackery a 4GB CF-card forces us into becomes more complex than just biting the bullet, doing a full multipathed SAN-boot & calling it a day. (or perhaps some future iSCSI/NFS boot for the SAN-averse)

Seriously though... If (say in some HPC/grid space?) you can stick your ENTIRE boot environment onto a 4GB CF-card, why not just do the SAN or NFS/iSCSI boot thing instead? (whatever happened to: http://blogs.sun.com/dweibel/entry/sprint_snw_2006#comments )

--

But let's explore the CF thing some more... There is something there, although I think Sun might have to provide some best-practices/suggestions as to how customers that don't run a minimum-config-no-local-apps, pacct, monitoring, etc. Solaris environment are best to use something like this. Use it as a pivot boot onto the real root image? That would relegate the CF-card to little more than a "rescue/utility" image.... Kinda cool, but not earth-shattering I would think.... (especially for those already utilizing wanboot for such purposes)

--

Splitting off /var and friends from the boot environment (and still packing the boot env, say, onto a ditto-block 4GB CF card) is still going to leave a pretty tight boot env. Obviously you want to be able to do some fancy live-upgrade stuff in this space too, and all of a sudden a single 4GB flash card "don't look so big" anymore....

2 of them, with some ZFS (and compression?) or even SDS mirroring between them, would possibly go a long way to make replacement easier, give you redundancy (zfs/sds mirrors), some wiggle-room for live-upgrade scenarios, and who knows what else. Still tight though....
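(To make that concrete -- device and pool names here are purely made up for illustration -- the ZFS flavour of the two-card mirror, or of doubling up blocks on a single card, would be something like:

    # two cards: mirror them into one small pool
    zpool create cfboot mirror c2t0d0 c3t0d0
    zpool status cfboot

    # one card: keep two copies of every data block (the ZFS "copies"/ditto-block option)
    zpool create cfboot c2t0d0
    zfs set copies=2 cfboot

with SDS/SVM being the usual metadb/metainit/metattach dance instead.)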
--

If it's a choice between 1-CF or NONE, we'll take 1-CF I guess.... Fear of the unknown (and field data showing how these guys hold up over time) would really determine uptake I guess. (( as you said, real data regarding these specialized CF-cards will be required... Is it going to vary greatly from vendor to vendor? Usecase to usecase? I'm not looking forward to blazing the trail here.... Something doesn't seem right, especially without the safety net of a mirrored environment... But maybe that's just old-school sys-admin superstition.... Let's get some data, set me straight...))

--

Right now we can stick 4x 4GB memory sticks into an x4200 (creating a cactus-looking device :-) A single built-in CF is obviously cleaner/safer, but also somewhat limiting in terms of redundancy or even just capacity.

Has anyone considered taking say 2x 4G CF cards and sticking them inside one of the little sas-drive-enclosures? Customers could purchase up to 4 of those for certain servers (t2000/x4200 etc.) and treat these as if they were really fast, lower-power/heat, (never fails, no need to mirror?) ~9GB drives. In the long run, is that "easier" and more flexible?

--

It would be really interesting to hear how others out there might try to use a CF-boot option in their environment. Good thread, let's bat this around some more.

 -- MikeE

-----Original Message-----
From: Richard.Elling at Sun.COM [mailto:Richard.Elling at Sun.COM]
Sent: Tuesday, May 29, 2007 9:48 PM
To: Ellis, Mike
Cc: Carson Gaspar; zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] Re: ZFS - Use h/w raid or not?Thoughts.Considerations.

Ellis, Mike wrote:
> Also the "unmirrored memory" for the rest of the system has ECC and
> ChipKill, which provides at least SOME protection against random
> bit-flips.

CF devices, at least the ones we'd be interested in, do have ECC as well as spare sectors and write verification. Note: flash memories do not suffer from the same radiation-based bit-flip mechanisms as DRAMs or SRAMs. The main failure mode we worry about is write endurance.

> Question: It appears that CF and friends would make a decent live-boot
> (but don't run on me like I'm a disk) type of boot-media due to the
> limited write/re-write limitations of flash-media. (at least the
> non-exotic type of flash-media)

Where we see current use is for boot devices, which have the expectation of read-mostly workloads. The devices also implement wear leveling.

> Would something like future zfs-booting on a pair of CF-devices
> reduce/lift that limitation? (does the COW nature of ZFS automatically
> spread WRITES across the entire CF device?) [[ is tmp-fs/swap going to
> remain a problem till zfs-swap adds some COW leveling to the swap-area? ]]

The belief is that COW file systems which implement checksums and data redundancy (eg, ZFS and the ZFS copies option) will be redundant over CF's ECC and wear leveling *at the block level.* We believe ZFS will excel in this area, but has limited bootability today. This will become more interesting over time, especially when ZFS boot is ubiquitous.

As for swap, it is a good idea if you are sized such that you don't need to physically use swap. Most servers today are in this category. Actually, most servers today have much more memory than would fit in a reasonably priced CF, so it might be a good idea to swap elsewhere.

In other words, it is more difficult to build the (technical) case for redundant CFs for boot than it is for disk drives. Real data would be greatly appreciated.
 -- richard
Richard Elling
2007-May-31 20:59 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
Hi Mike, more thoughts below...

Ellis, Mike wrote:
> Hey Richard, thanks for sparking the conversation... This is a very
> interesting topic.... (especially if you take it out of the HPC "we need
> 1000 servers to have this minimal boot image" space into general
> purpose/enterprise computing)

CF cards aren't generally very fast, so the solid state disk vendors are putting them into hard disk form factors with SAS/SATA interfaces. These will be more interesting because they are really fast and can employ more sophisticated data protection methods -- like magnetic disk drives :-)

> Based on your earlier note, it appears you're not planning to use cheapo
> "free after rebate" CF cards :-) (The cheap ones would probably be
> perfect for ZFS a-la cheap-o-JBOD).

The price of flash memory has dropped by 50% this year. Expect this trend to follow Moore's law.

> Having boot disks mirrored across controllers has helped sys-admins sleep
> better over the years (especially in FC-loop cases with both drives on
> the same loop... Sigh....). If the USB bus one might hang these fancy
> CF-cards on is robust enough, then perhaps a single "battle hardened"
> CF-card will suffice... (although zfs ditto-blocks or some form of
> protection might still be considered a good thing?)....
>
> Having 2 cards would certainly make the "unlikely replacement" of a card
> a LOT more straight-forward than a single-card failure... Much of this
> would depend on the quality of these CF-cards and how they hold up under
> load/stress/time....

Disagree. With two cards, you have to implement software mirroring of some sort. While ZFS is a step in the right direction (simplifying the process) it is unproven for long term system administration. The costs of implementing software mirroring occur in the complexity of managing the software environment over time as upgrades and patches occur. Reliability tends to trump availability for this reason.

> --
>
> If we're going down this CF-boot path, many of us are going to have to
> re-think our boot environment quite a bit. We've been "spoiled" with 36+
> GB mirrored boot drives for some time now.... (if you do a lot of
> PATCHING, you'll find that even those can get tight.... But that's a
> discussion for a different day)
>
> I don't think most enterprise "boot disk layouts" are going to fit (even
> unmirrored) onto a single 4GB CF-card. So we'll have to play some games
> where we start splitting off /opt, /var (which is fairly read-write
> intensive when you have process accounting etc. running) onto some
> "other" non-CF filesystem.... (likely a SAN of some variety). At some
> point the hackery a 4GB CF-card forces us into becomes more complex than
> just biting the bullet, doing a full multipathed SAN-boot & calling it a
> day. (or perhaps some future iSCSI/NFS boot for the SAN-averse)

4 GBytes is possible, but 8 GBytes (< $100 today) will be more common. 16 GByte CFs are still above $100... wait a few months. These are often used for high-end digital cameras, where there is no redundancy, so the photography sites might be a good source of quality evaluations.

> Seriously though... If (say in some HPC/grid space?) you can stick your
> ENTIRE boot environment onto a 4GB CF-card, why not just do the SAN or
> NFS/iSCSI boot thing instead? (whatever happened to:
> http://blogs.sun.com/dweibel/entry/sprint_snw_2006#comments )

Good question. You can build an NFS service which is much more reliable than a disk, quite easily in fact.
Some people get all upset about that, though. N.B. a client only needs the NFS service to be available when an I/O operation is started. Once you boot and have been running for a while, most stuff should be cached in main memory and your reliance on the NFS boot server is reduced. This makes analysis of the reliability of such systems difficult.

> --
>
> But let's explore the CF thing some more... There is something there,
> although I think Sun might have to provide some
> best-practices/suggestions as to how customers that don't run a
> minimum-config-no-local-apps, pacct, monitoring, etc. Solaris
> environment are best to use something like this. Use it as a pivot boot
> onto the real root image? That would relegate the CF-card to little more
> than a "rescue/utility" image.... Kinda cool, but not earth-shattering I
> would think.... (especially for those already utilizing wanboot for such
> purposes)

On my list of things to do is measure the actual block reuse patterns. For ZFS, this isn't really interesting because of the COW. For UFS, we do expect some hot spots. But even then, there is some debate over whether the problems will hit in metadata first (file appends do not rewrite original data, so logs aren't interesting). Since UFS metadata is not redundant (unlike ZFS) the issues may get tricky. Somewhere on my list of things to do... and it isn't a trivial data collection exercise.

> --
>
> Splitting off /var and friends from the boot environment (and still
> packing the boot env, say, onto a ditto-block 4GB CF card) is still going
> to leave a pretty tight boot env. Obviously you want to be able to do
> some fancy live-upgrade stuff in this space too, and all of a sudden a
> single 4GB flash card "don't look so big" anymore....

Files in /var don't overwrite data very much. The worry is metadata and application-specific uses such as database logs.

> 2 of them, with some ZFS (and compression?) or even SDS mirroring
> between them, would possibly go a long way to make replacement easier,
> give you redundancy (zfs/sds mirrors), some wiggle-room for live-upgrade
> scenarios, and who knows what else. Still tight though....
>
> --
>
> If it's a choice between 1-CF or NONE, we'll take 1-CF I guess.... Fear
> of the unknown (and field data showing how these guys hold up over time)
> would really determine uptake I guess. (( as you said, real data
> regarding these specialized CF-cards will be required... Is it going to
> vary greatly from vendor to vendor? Usecase to usecase? I'm not looking
> forward to blazing the trail here.... Something doesn't seem right,
> especially without the safety net of a mirrored environment... But maybe
> that's just old-school sys-admin superstition.... Let's get some data,
> set me straight...))

You should be skeptical. USB flash drives have a poor reputation, and the file systems used on them are especially sensitive to unplanned unplugging. Time will tell, but I'll bet that the newborns of today won't remember what a magnetic disk drive is.

> --
>
> Right now we can stick 4x 4GB memory sticks into an x4200 (creating a
> cactus-looking device :-) A single built-in CF is obviously
> cleaner/safer, but also somewhat limiting in terms of redundancy or even
> just capacity.
>
> Has anyone considered taking say 2x 4G CF cards and sticking them
> inside one of the little sas-drive-enclosures? Customers could purchase
> up to 4 of those for certain servers (t2000/x4200 etc.)
> and treat these as if they were really fast, lower-power/heat, (never
> fails, no need to mirror?) ~9GB drives. In the long run, is that "easier"
> and more flexible?

2.5" 8GByte solid state SAS drives are running around $250... go for it!

> --
>
> It would be really interesting to hear how others out there might try to
> use a CF-boot option in their environment.

me too :-)

> Good thread, let's bat this around some more.

 -- richard
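As a rough sketch of the block-reuse measurement mentioned above (the probe and output handling are only an illustration, and interpreting the results takes some care), a DTrace one-liner against the io provider will tally I/Os by device, direction and block number; block numbers that keep reappearing in the write column are the rewrite hot spots that matter for flash endurance:

    # count I/Os by device, read/write, and block number; Ctrl-C prints the tally
    dtrace -n 'io:::start { @[args[1]->dev_statname, args[0]->b_flags & B_READ ? "R" : "W", args[0]->b_blkno] = count(); }'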
Richard Elling
2007-May-31 21:29 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
Richard Elling wrote:
> CF cards aren't generally very fast, so the solid state disk vendors are
> putting them into hard disk form factors with SAS/SATA interfaces.

Timing is everything... a new standard might help... let's call it "miCard"
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=199703805
 -- richard
Frank Cusack
2007-Jun-01 02:42 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
On May 31, 2007 1:59:04 PM -0700 Richard Elling <Richard.Elling at Sun.COM> wrote:
> CF cards aren't generally very fast, so the solid state disk vendors are
> putting them into hard disk form factors with SAS/SATA interfaces. These

If CF cards aren't fast, how will putting them into a different form factor make them faster?

-frank
Bart Smaalders
2007-Jun-01 04:37 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
Frank Cusack wrote:
> On May 31, 2007 1:59:04 PM -0700 Richard Elling <Richard.Elling at Sun.COM>
> wrote:
>> CF cards aren't generally very fast, so the solid state disk vendors are
>> putting them into hard disk form factors with SAS/SATA interfaces. These
>
> If CF cards aren't fast, how will putting them into a different form
> factor make them faster?

Well, if I were doing that I'd use DRAM and provide enough on-board capacitance and a small processor to copy the contents of the DRAM to flash on power failure.

- Bart

--
Bart Smaalders                  Solaris Kernel Performance
barts at cyber.eng.sun.com         http://blogs.sun.com/barts
Robert Milkowski
2007-Jun-01 11:09 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
Hello Richard,

Thursday, May 31, 2007, 10:59:04 PM, you wrote:

>> Having 2 cards would certainly make the "unlikely replacement" of a card
>> a LOT more straight-forward than a single-card failure... Much of this
>> would depend on the quality of these CF-cards and how they hold up under
>> load/stress/time....

RE> Disagree. With two cards, you have to implement software mirroring of
RE> some sort. While ZFS is a step in the right direction (simplifying the
RE> process) it is unproven for long term system administration. The costs
RE> of implementing software mirroring occur in the complexity of managing
RE> the software environment over time as upgrades and patches occur.
RE> Reliability tends to trump availability for this reason.

I don't know -- I've been using SVM to mirror boot disks for years on several servers, and I believe management is better than dealing with different PCI RAID cards, different BIOSes (on-board RAID), different tools, different failure scenarios, etc.

Or maybe you were thinking no-raid-at-all vs. mirror...

--
Best regards,
 Robert                       mailto:rmilkowski at task.gda.pl
                              http://milek.blogspot.com
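For anyone who hasn't been through it, the SVM boot-disk mirroring being described is roughly the following (disk, slice and metadevice names are examples only, and on SPARC you still need to installboot the second disk and sanity-check /etc/vfstab afterwards):

    # state database replicas on both disks
    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
    # one submirror per disk, then a one-way mirror for root
    metainit -f d11 1 1 c0t0d0s0
    metainit d12 1 1 c0t1d0s0
    metainit d10 -m d11
    metaroot d10        # updates /etc/vfstab and /etc/system
    # reboot, then attach the second half:
    metattach d10 d12

It is more steps than a hardware RAID BIOS, but it behaves the same way on every box, which is much of the appeal.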
Richard Elling
2007-Jun-01 16:44 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
Frank Cusack wrote:
> On May 31, 2007 1:59:04 PM -0700 Richard Elling <Richard.Elling at Sun.COM>
> wrote:
>> CF cards aren't generally very fast, so the solid state disk vendors are
>> putting them into hard disk form factors with SAS/SATA interfaces. These
>
> If CF cards aren't fast, how will putting them into a different form
> factor make them faster?

Semiconductor memories are accessed in parallel. Spinning disks are accessed serially. Let's take a look at a few examples and see what this looks like...

Disk                          iops    bw     atime  MTBF       UER     endurance
---------------------------------------------------------------------------------
SanDisk 32 GByte 2.5" SATA    7,450   67     0.11   2,000,000  10^-20  ?
SiliconSystems 8 GByte CF     500     8      2      4,000,000  10^-14  >2,000,000
SanDisk 8 GByte CF            ?       40     ?      ?          ?       ?
Seagate 146 GByte 2.5" SATA   141     41-63  4.1+   1,400,000  10^-15  -
Hitachi 500 GByte 3.5" SATA   79      31-65  8.5+   1,000,000  10^-14  -

iops      = small, random read iops (I/O operations per second) [higher is better]
bw        = sustained media read bandwidth (MBytes/s) [higher is better]
atime     = access time (milliseconds) [lower is better]
MTBF      = mean time between failures (hours) [higher is better]
UER       = unrecoverable read error rate (errors/bits read) [lower is better]
endurance = single block rewrite count [higher is better]

http://www.sandisk.com/Assets/File/pdf/oem/SanDisk_SSD_SATA_5000_2.5_DS_P03_DS.pdf
http://www.storagesearch.com/ssd-16.html
http://www.hitachigst.com/portal/site/en/menuitem.eb9838d4792cb564c0f85074eac4f0a0
http://www.seagate.com/docs/pdf/datasheet/disc/ds_savvio_10k_2.pdf

It is a little bit frustrating that the vendors each make a different amount of data publicly available on their products :-(
 -- richard
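A quick sanity check on those numbers: for a single outstanding request, iops is roughly 1 / (average service time). The SiliconSystems CF at 2 ms access time gives 1/0.002 s = 500 iops, exactly the figure in the table. For the rotating disks the listed access time is essentially seek time, so rotational latency has to be added -- assuming a 10k rpm spindle (~3.0 ms average) for the Seagate and 7200 rpm (~4.2 ms) for the Hitachi, that works out to 1/(4.1 + 3.0) ms ≈ 141 iops and 1/(8.5 + 4.2) ms ≈ 79 iops, again matching the table. The solid state parts win on small random reads simply because there is no mechanical latency to add.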
Frank Cusack
2007-Jun-02 03:02 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
On June 1, 2007 9:44:23 AM -0700 Richard Elling <Richard.Elling at Sun.COM> wrote:
> Frank Cusack wrote:
>> On May 31, 2007 1:59:04 PM -0700 Richard Elling <Richard.Elling at Sun.COM>
>> wrote:
>>> CF cards aren't generally very fast, so the solid state disk vendors are
>>> putting them into hard disk form factors with SAS/SATA interfaces.
>>> These
>>
>> If CF cards aren't fast, how will putting them into a different form
>> factor make them faster?
>
> Semiconductor memories are accessed in parallel. Spinning disks are
> accessed serially. Let's take a look at a few examples and see what this
> looks like...
>
> Disk                          iops    bw   atime  MTBF       UER     endurance
> -------------------------------------------------------------------------------
> SanDisk 32 GByte 2.5" SATA    7,450   67   0.11   2,000,000  10^-20  ?
> SiliconSystems 8 GByte CF     500     8    2      4,000,000  10^-14  >2,000,000
...

these are probably different technologies though? if cf cards aren't generally fast, then the sata device isn't a cf card just with a different form factor. or is the CF interface the limiting factor?

also, isn't CF write very slow (relative to read)? if so, you should really show read vs write iops.

-frank
Chris Csanady
2007-Jun-02 04:00 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
On 6/1/07, Frank Cusack <fcusack at fcusack.com> wrote:
> On June 1, 2007 9:44:23 AM -0700 Richard Elling <Richard.Elling at Sun.COM>
> wrote:
> [...]
> > Semiconductor memories are accessed in parallel. Spinning disks are
> > accessed serially. Let's take a look at a few examples and see what this
> > looks like...
> >
> > Disk                          iops    bw   atime  MTBF       UER     endurance
> > -------------------------------------------------------------------------------
> > SanDisk 32 GByte 2.5" SATA    7,450   67   0.11   2,000,000  10^-20  ?
> > SiliconSystems 8 GByte CF     500     8    2      4,000,000  10^-14  >2,000,000
> ...
>
> these are probably different technologies though? if cf cards aren't
> generally fast, then the sata device isn't a cf card just with a
> different form factor. or is the CF interface the limiting factor?
>
> also, isn't CF write very slow (relative to read)? if so, you should
> really show read vs write iops.

Most vendors don't list this, for obvious reasons. SanDisk is honest enough to do so though, and the number is spectacularly bad: 15.

Chris
Richard Elling
2007-Jun-02 14:42 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
Chris Csanady wrote:
> On 6/1/07, Frank Cusack <fcusack at fcusack.com> wrote:
>> On June 1, 2007 9:44:23 AM -0700 Richard Elling <Richard.Elling at Sun.COM>
>> wrote:
>> [...]
>> > Semiconductor memories are accessed in parallel. Spinning disks are
>> > accessed serially. Let's take a look at a few examples and see what
>> > this looks like...
>> >
>> > Disk                          iops    bw   atime  MTBF       UER     endurance
>> > -------------------------------------------------------------------------------
>> > SanDisk 32 GByte 2.5" SATA    7,450   67   0.11   2,000,000  10^-20  ?
>> > SiliconSystems 8 GByte CF     500     8    2      4,000,000  10^-14  >2,000,000
>> ...
>>
>> these are probably different technologies though? if cf cards aren't
>> generally fast, then the sata device isn't a cf card just with a
>> different form factor. or is the CF interface the limiting factor?
>>
>> also, isn't CF write very slow (relative to read)? if so, you should
>> really show read vs write iops.
>
> Most vendors don't list this, for obvious reasons. SanDisk is honest
> enough to do so though, and the number is spectacularly bad: 15.

For the SanDisk 32 GByte 2.5" SATA, write bandwidth is 47 MBytes/s -- quite respectable.
For the SiliconSystems 8 GByte CF, write bandwidth is 6 MBytes/s -- not so good.
 -- richard
Chris Csanady
2007-Jun-02 15:34 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
On 6/2/07, Richard Elling <Richard.Elling at sun.com> wrote:
> Chris Csanady wrote:
> > On 6/1/07, Frank Cusack <fcusack at fcusack.com> wrote:
> >> On June 1, 2007 9:44:23 AM -0700 Richard Elling <Richard.Elling at Sun.COM>
> >> wrote:
> >> [...]
> >> > Semiconductor memories are accessed in parallel. Spinning disks are
> >> > accessed serially. Let's take a look at a few examples and see what
> >> > this looks like...
> >> >
> >> > Disk                          iops    bw   atime  MTBF       UER     endurance
> >> > -------------------------------------------------------------------------------
> >> > SanDisk 32 GByte 2.5" SATA    7,450   67   0.11   2,000,000  10^-20  ?
> >> > SiliconSystems 8 GByte CF     500     8    2      4,000,000  10^-14  >2,000,000
> >> ...
> >>
> >> these are probably different technologies though? if cf cards aren't
> >> generally fast, then the sata device isn't a cf card just with a
> >> different form factor. or is the CF interface the limiting factor?
> >>
> >> also, isn't CF write very slow (relative to read)? if so, you should
> >> really show read vs write iops.
> >
> > Most vendors don't list this, for obvious reasons. SanDisk is honest
> > enough to do so though, and the number is spectacularly bad: 15.
>
> For the SanDisk 32 GByte 2.5" SATA, write bandwidth is 47 MBytes/s -- quite
> respectable.

I was quoting the random write IOPS number at 4kB. The theoretical sequential write bandwidth is fine, but I don't think that 15 IOPS can be considered respectable.

They also list the number at 512kB, and it is still only 16 IOPS. This is probably an artifact of striping across a large number of flash chips, each of which has a large page size. It is unknown how large a transfer is required to actually reach that respectable sequential write performance, though it probably won't happen often, if at all.

Chris
Richard Elling
2007-Jun-03 14:58 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
Chris Csanady wrote:
> I was quoting the random write IOPS number at 4kB. The theoretical
> sequential write bandwidth is fine, but I don't think that 15 IOPS can
> be considered respectable.

This is where ZFS could be a good thing. ZFS doesn't generally do small, random writes. If we get 15 random write iops at 128kBytes, then it is more reasonable.
 -- richard

> They also list the number at 512kB, and it is still only 16 IOPS.
> This is probably an artifact of striping across a large number of
> flash chips, each of which has a large page size. It is unknown how
> large a transfer is required to actually reach that respectable
> sequential write performance, though it probably won't happen often,
> if at all.

They should send me one so that I can experiment :-)
 -- richard
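Putting numbers on that: 15 writes/s at 4 kBytes is only about 60 kBytes/s of random-write throughput, while the same 15 writes/s at a ZFS-sized 128 kByte record is roughly 1.9 MBytes/s, and the 16 writes/s quoted at 512 kBytes is about 8 MBytes/s -- still well short of the 47 MBytes/s sequential figure, but a very different proposition, especially since ZFS batches writes into large, mostly sequential transaction groups rather than rewriting small blocks in place.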
Al Hopper
2007-Jun-03 15:13 UTC
[zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]
On Sun, 3 Jun 2007, Richard Elling wrote:
> Chris Csanady wrote:
>> I was quoting the random write IOPS number at 4kB. The theoretical
>> sequential write bandwidth is fine, but I don't think that 15 IOPS can
>> be considered respectable.
>
> This is where ZFS could be a good thing. ZFS doesn't generally do small,
> random writes. If we get 15 random write iops at 128kBytes, then it
> is more reasonable.

Has anyone been able to get a sample SanDisk to evaluate?

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133  Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/