Hi there,

Is it fair to compare the two solutions (ZFS on a JBOD vs. a hardware RAID array) using Solaris 10 U2 and a commercial database (SAP SD scenario)?

The cache on the HW RAID helps, and the CPU load is lower... but the solution costs more, and you _might_ not need the performance of the HW RAID.

Has anybody with access to these units done a benchmark comparing the performance and, with the price list in hand, come to a conclusion?

It's not so much about maximum performance from either, but about price/performance. If a JBOD with ZFS does 500 IOPS @ $10K vs. a HW RAID doing 700 IOPS @ $20K ... then the JBOD would be a good investment when $ is a factor. (example)

Thank you.

This message posted from opensolaris.org
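To put numbers on that price/performance framing, here is a minimal sketch using only the hypothetical figures from the question (500 IOPS at $10K vs. 700 IOPS at $20K); nothing below is a measurement:

    # Price per IOPS using the hypothetical figures from the question above.
    configs = {
        "JBOD + ZFS": {"iops": 500, "price": 10000},
        "HW RAID":    {"iops": 700, "price": 20000},
    }
    for name, c in configs.items():
        print("%-10s %3d IOPS at $%5d -> $%5.2f per IOPS"
              % (name, c["iops"], c["price"], c["price"] / float(c["iops"])))
    # JBOD + ZFS: $20.00/IOPS; HW RAID: $28.57/IOPS. On these numbers the JBOD
    # wins on cost per IOPS even though the RAID box delivers 40% more
    # absolute throughput.

Whether the extra 200 IOPS is worth the extra $10K is exactly the question the rest of the thread argues about.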
On Fri, 28 Jul 2006, Louwtjie Burger wrote:
.... reformatted ....

> Is it fair to compare the two solutions using Solaris 10 U2 and a
> commercial database (SAP SD scenario)?
> [...]
> It's not so much about maximum performance from either, but about
> price/performance. If a JBOD with ZFS does 500 IOPS @ $10K vs. a HW RAID
> doing 700 IOPS @ $20K ... then the JBOD would be a good investment when
> $ is a factor. (example)

Or, stated another way: is it more beneficial to spend the $10k on
increased memory for the DB server, rather than on RAID hardware?

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133  Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
On July 28, 2006 3:31:51 AM -0700 Louwtjie Burger <zabermeister at gmail.com> wrote:

> Is it fair to compare the two solutions using Solaris 10 U2 and a
> commercial database (SAP SD scenario)?
> [...]
> It's not so much about maximum performance from either, but about
> price/performance. If a JBOD with ZFS does 500 IOPS @ $10K vs. a HW RAID
> doing 700 IOPS @ $20K ... then the JBOD would be a good investment when
> $ is a factor. (example)

ISTM the cheapest array is the best for zfs.  If not, isn't any benchmark
going to be specific to your application?

-frank
Frank Cusack wrote:
> On July 28, 2006 3:31:51 AM -0700 Louwtjie Burger
> <zabermeister at gmail.com> wrote:
>> [...]
>> It's not so much about maximum performance from either, but about
>> price/performance. If a JBOD with ZFS does 500 IOPS @ $10K vs. a HW
>> RAID doing 700 IOPS @ $20K ... then the JBOD would be a good investment
>> when $ is a factor. (example)
>
> ISTM the cheapest array is the best for zfs.  If not, isn't any
> benchmark going to be specific to your application?

Specific to the app, the amount of data, how many other hosts might be in
play, etc. etc. etc.

That said a 3510 with a raid controller is going to blow the door, drive
brackets, and skin off a JBOD in raw performance.
Torrey,

On 7/28/06 10:11 AM, "Torrey McMahon" <Torrey.McMahon at Sun.COM> wrote:

> That said a 3510 with a raid controller is going to blow the door, drive
> brackets, and skin off a JBOD in raw performance.

I'm pretty certain this is not the case.

If you need sequential bandwidth, each 3510 only brings 200MB/s x two Fibre
Channel attachments = 400MB/s total.  Cheap internal disks in the X4500
reach 2,000MB/s and 2,500 random seeks/second using ZFS.

General purpose CPUs have reached high enough speeds that they blow cheap
RAID CPUs away with good software RAID.

- Luke
Luke Lonergan wrote:
> On 7/28/06 10:11 AM, "Torrey McMahon" <Torrey.McMahon at Sun.COM> wrote:
>> That said a 3510 with a raid controller is going to blow the door,
>> drive brackets, and skin off a JBOD in raw performance.
>
> I'm pretty certain this is not the case.
>
> If you need sequential bandwidth, each 3510 only brings 200MB/s x two
> Fibre Channel attachments = 400MB/s total.

You might want to check the specs of the 3510. In some configs you only
get 2 ports. However, in others you can get 8.

In any case, though throughput is important, the number of IOPS you can
drive through the controller or drives is often more important. Yes, in a
highly sequential workload you're going to blow right through the cache
and hit the drives - if the array is smart, which most are these days -
but those highly sequential workloads are not found as often as others.

> Cheap internal disks in the X4500 reach 2,000MB/s and 2,500 random
> seeks/second using ZFS.

You're comparing apples to a crate of apples. A more useful comparison
would be something along the lines of a single R0 LUN on a 3510 with the
controller vs. a single 3510 JBOD with ZFS across all the drives.
Torrey,

> -----Original Message-----
> From: Torrey.McMahon at Sun.COM [mailto:Torrey.McMahon at Sun.COM]
> Sent: Monday, July 31, 2006 8:32 PM
>
> You might want to check the specs of the 3510. In some configs you
> only get 2 ports. However, in others you can get 8.

Really? 8 active Fibre Channel ports? Can you post the link?

> In any case, though throughput is important, the number of IOPS you can
> drive through the controller or drives is often more important.

Thus my posted benchmark of 2,500 seeks per second. Got any results on a
3510 with HW RAID? Please post Bonnie++ version 1.03 results here.

> Yes, in a highly sequential workload you're going to blow right through
> the cache and hit the drives - if the array is smart, which most are
> these days - but those highly sequential workloads are not found as
> often as others.

Business Intelligence is one. See the 2,500 seeks per second above.

> You're comparing apples to a crate of apples. A more useful comparison
> would be something along the lines of a single R0 LUN on a 3510 with the
> controller vs. a single 3510 JBOD with ZFS across all the drives.

My point is that the 3510 with two active Fibre Channel connections is
going to be channel limited on sequential access by nearly 3 to 1 (14
drives x 80MB/s per drive compared to 2 x 200MB/s Fibre Channel), which
sux no matter how you slice it.

WRT the random access performance, the 3510 with and without HW RAID might
be an interesting test. I would expect the HW RAID to outperform by a bit
there because of the closer proximity of the CPU to the I/O channels.

If you think of the X4500 as a whomping RAID controller, though, it's a
fairer comparison to take 14 drives in an X4500 and compare them to 14
drives in a 3510. I think you'd be surprised there.

- Luke
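The channel-limit arithmetic is easy to sanity-check. This is a rough
sketch using only the per-drive and per-link rates quoted above (80MB/s per
spindle, roughly 200MB/s per 2Gb FC link), not measured numbers:

    # Back-of-the-envelope check of the channel-limit argument, using the
    # per-drive and per-link rates quoted in the thread.
    drives_per_tray = 14
    drive_mb_s      = 80     # quoted sequential rate per spindle
    fc_links        = 2
    fc_link_mb_s    = 200    # rough usable bandwidth of one 2Gb/s FC link

    aggregate_disk = drives_per_tray * drive_mb_s   # 1120 MB/s off the platters
    aggregate_fc   = fc_links * fc_link_mb_s        #  400 MB/s to the host
    print("disks %d MB/s vs. host links %d MB/s -> %.1f:1 bottleneck"
          % (aggregate_disk, aggregate_fc, float(aggregate_disk) / aggregate_fc))
    # Prints 2.8:1, i.e. on sequential work the FC attach, not the spindles,
    # is the limit for a single tray.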
On July 31, 2006 11:32:15 PM -0400 Torrey McMahon <Torrey.McMahon at Sun.COM> wrote:

> You're comparing apples to a crate of apples. A more useful comparison
> would be something along the lines of a single R0 LUN on a 3510 with the
> controller vs. a single 3510 JBOD with ZFS across all the drives.

I think the correct comparison is price/performance.  Isn't thumper about
the same cost as a 3510-raid?

-frank
Luke Lonergan wrote:
>> -----Original Message-----
>> From: Torrey.McMahon at Sun.COM [mailto:Torrey.McMahon at Sun.COM]
>> Sent: Monday, July 31, 2006 8:32 PM
>>
>> You might want to check the specs of the 3510. In some configs you
>> only get 2 ports. However, in others you can get 8.
>
> Really? 8 active Fibre Channel ports? Can you post the link?

http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml

Look at the specs page.
Frank Cusack wrote:
> On July 31, 2006 11:32:15 PM -0400 Torrey McMahon
> <Torrey.McMahon at Sun.COM> wrote:
>> You're comparing apples to a crate of apples. A more useful comparison
>> would be something along the lines of a single R0 LUN on a 3510 with
>> the controller vs. a single 3510 JBOD with ZFS across all the drives.
>
> I think the correct comparison is price/performance. Isn't thumper about
> the same cost as a 3510-raid?

The correct comparison is done when all the factors are taken into
account. Making blanket statements like "ZFS & JBODs are always ideal",
"ZFS on top of a RAID controller is a bad idea", or "SATA drives are good
enough" without taking into account the amount of data, access patterns,
numbers of hosts, price, performance, data retention policies, audit
requirements ... is where I take issue.

That said, I'm afraid I'm not a sales person, so I can't compare pricing
for you. A 3511 with some expansion trays might be a better comparison.
You could look at the sun.com prices but ... ehh ... those are list prices
and we all know how that works, right? ;)
On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
>
> The correct comparison is done when all the factors are taken into
> account. Making blanket statements like "ZFS & JBODs are always ideal",
> "ZFS on top of a RAID controller is a bad idea", or "SATA drives are
> good enough" without taking into account the amount of data, access
> patterns, numbers of hosts, price, performance, data retention policies,
> audit requirements ... is where I take issue.

Then how are blanket statements like:

    "That said a 3510 with a raid controller is going to blow the
    door, drive brackets, and skin off a JBOD in raw performance."

not offensive as well?

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Eric Schrock wrote:
> On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
>> The correct comparison is done when all the factors are taken into
>> account. [...]
>
> Then how are blanket statements like:
>
>     "That said a 3510 with a raid controller is going to blow the
>     door, drive brackets, and skin off a JBOD in raw performance."
>
> not offensive as well?

OK. I'll track down some gear... if I can. I figured that one was just
kind of obvious.
(I hate when I hit the Send button when trying to change windows....)

Eric Schrock wrote:
> On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
>> The correct comparison is done when all the factors are taken into
>> account. [...]
>
> Then how are blanket statements like:
>
>     "That said a 3510 with a raid controller is going to blow the
>     door, drive brackets, and skin off a JBOD in raw performance."
>
> not offensive as well?

Who said anything about offensive? I just said I take issue with such
statements in the general sense of trying to compare boxes to boxes, or
when making blanket statements such as "X always works better on Y".

The specific question was around a 3510 JBOD having better performance
than a 3510 with a RAID controller. That's where I said the RAID
controller performance was going to be better:

> ISTM the cheapest array is the best for zfs. If not, isn't any
> benchmark going to be specific to your application?

    Specific to the app, the amount of data, how many other hosts might be
    in play, etc. etc. etc.

    That said a 3510 with a raid controller is going to blow the door,
    drive brackets, and skin off a JBOD in raw performance.

But OK. I'll track down some gear... if I can. I figured that one was just
kind of obvious.
On Aug 1, 2006, at 14:18, Torrey McMahon wrote:

> Who said anything about offensive? I just said I take issue with such
> statements in the general sense of trying to compare boxes to boxes, or
> when making blanket statements such as "X always works better on Y".
>
> The specific question was around a 3510 JBOD having better performance
> than a 3510 with a RAID controller. That's where I said the RAID
> controller performance was going to be better.

Just to be clear: we're talking about a 3510 JBOD with ZFS (I guess you
could run pass-through on the controller or just fail the batteries on the
cache) vs. a 3510 with the RAID controller turned on. I'd tend to agree
with Torrey, mainly since well designed RAID controllers will generally do
a better job with their own back-end on aligning I/O for efficient
full-stripe commits. Without battery-backed memory on the host, CoW is
still going to need synchronous I/O somewhere for guaranteed writes - and
there's a fraction of your gain.

Don't get me wrong: CoW is key for a lot of the cool features and amazing
functionality in ZFS and I like it. It's just not generally considered a
high performance I/O technique for many cases when we're talking about
committing bits to spinning rust. And while it may be great for
asynchronous behaviour, unless we want to reveal some amazing discovery
that reverses years of I/O development, it seems to me that when we fall
to synchronous behaviour the invalidation of the filesystem's page cache
will always play a factor in the overall reduction of throughput.

OK, I can see that we can eliminate the read/modify/write penalty and the
write-hole problem at the storage layer. But so does battery-backed array
cache, with the real limiting factor ultimately being the latency between
the cache through the back-end loops to the spinning disk. (I would argue
that low cache latency and under-saturated drive channels matter more than
the sheer amount of coherent cache.)

Speaking in high generalities, the problem almost always works its way
down to how well an array solution balances properly aligned I/O with the
response time between cache, across the back-end loops, to the spindles,
and any inherent latency there or in between. OK, I can see that ZFS is a
nice arbitrator and is working its way into some of the drive mechanics,
but there is still some reliance on the driver stack for determining the
proper transport saturation and back-off. And great - we're making more
inroads with transaction groups and an intent log, that's wonderful, and
we've done a lot of cool things along the way. Maybe by the time we're
done we can move the code to a minimized Solaris build on dedicated
hardware ..
.. and build an array solution (with a built-in filesystem) .. that's big
.. and round .. and rolls fast .. and then we can call it .. (thump thump
thump) .. the zwheel :)

---
.je
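The synchronous-write point can be probed directly on whatever storage is
under test. This is only a rough sketch: the pool path is hypothetical and
it assumes a platform where Python's os.O_DSYNC is available:

    # Time small O_DSYNC writes to show the per-write latency that a
    # battery-backed write cache absorbs. Path and sizes are arbitrary.
    import os, time

    PATH  = "/testpool/syncprobe"   # hypothetical file on the pool under test
    BLOCK = b"\0" * 8192            # 8 KB, roughly a small database page
    COUNT = 1000

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, BLOCK)         # must reach stable storage before returning
    os.close(fd)
    elapsed = time.time() - start
    print("%d sync writes in %.2fs -> %.0f writes/s, %.2f ms average latency"
          % (COUNT, elapsed, COUNT / elapsed, 1000.0 * elapsed / COUNT))

On a plain JBOD each of those writes waits on a platter; with NVRAM in the
path the array can acknowledge from cache, which is the gap being described
above.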
Torrey,

On 8/1/06 10:30 AM, "Torrey McMahon" <Torrey.McMahon at Sun.COM> wrote:

> http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
>
> Look at the specs page.

I did.

This is 8 trays, each with 14 disks, and two active Fibre Channel
attachments. That means that 14 disks, each with a platter rate of 80MB/s,
will be driven over a 400MB/s pair of Fibre Channel connections, a
slowdown of almost 3 to 1.

This is probably the most expensive, least efficient way to get disk
bandwidth available to customers.

WRT the discussion about "blow the doors", etc., how about we see some
bonnie++ numbers to back it up.

- Luke
Luke Lonergan wrote:
> On 8/1/06 10:30 AM, "Torrey McMahon" <Torrey.McMahon at Sun.COM> wrote:
>> http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
>>
>> Look at the specs page.
>
> I did.
>
> This is 8 trays, each with 14 disks, and two active Fibre Channel
> attachments. That means that 14 disks, each with a platter rate of
> 80MB/s, will be driven over a 400MB/s pair of Fibre Channel connections,
> a slowdown of almost 3 to 1.
>
> This is probably the most expensive, least efficient way to get disk
> bandwidth available to customers.

Luke - I think you have latched on to a comparison of Thumper to a 3510.
With the exception of my note concerning blanket statements and
assumptions, I've been referring to the original question and subject of
comparing the performance of a 3510 JBOD to a 3510 RAID.
On Aug 1, 2006, at 22:23, Luke Lonergan wrote:

> This is 8 trays, each with 14 disks, and two active Fibre Channel
> attachments. That means that 14 disks, each with a platter rate of
> 80MB/s, will be driven over a 400MB/s pair of Fibre Channel connections,
> a slowdown of almost 3 to 1.
>
> This is probably the most expensive, least efficient way to get disk
> bandwidth available to customers.
>
> WRT the discussion about "blow the doors", etc., how about we see some
> bonnie++ numbers to back it up.

Actually, there are SPC-2 vdbench numbers out at:

http://www.storageperformance.org/results

See the full disclosure report here:

http://www.storageperformance.org/results/b00005_Sun_SPC2_full-disclosure_r1.pdf

Of course, that's a 36GB 15K FC system with 2 expansion trays, 4 HBAs and
3 years of maintenance in the quote, spec'd at $72K list (or $56/GB).
(I'll use list numbers for comparison since they're the easiest.)

If you've got a copy of the vdbench tool you might want to try the
profiles in the appendix on a thumper - I believe the bonnie/bonnie++
numbers tend to skew more on single-threaded, low-blocksize memory
transfer issues.

Now, to bring the thread full circle to the original question of
price/performance, and increasing the scope to include the X4500: for
single-attached, low cost systems, thumper is *very* compelling,
particularly when you factor in the density. For example, using list
prices from http://store.sun.com/

  X4500 (thumper) w/ 48 x 250GB SATA drives = $32995 = $2.68/GB
  X4500 (thumper) w/ 48 x 500GB SATA drives = $69995 = $2.84/GB
  SE3511 (dual controller) w/ 12 x 500GB SATA drives = $36995 = $6.17/GB
  SE3510 (dual controller) w/ 12 x 300GB FC drives = $48995 = $13.61/GB

So a 250GB SATA drive configured thumper (server attached, with 16GB of
cache .. err .. RAM) is 5x less in cost/GB than a 300GB FC drive
configured 3510 (dual controllers w/ 2 x 1GB typically mirrored cache),
and a 500GB SATA drive configured thumper (server attached) is 2.3x less
in cost/GB than a 500GB SATA drive configured 3511 (again dual controllers
w/ 2 x 1GB typically mirrored cache).

For a single attached system - you're right - 400MB/s is your effective
throttle (controller speeds, actually) on the 3510, and your realistic
throughput on the 3511 is probably going to be less than half that number
if we factor in the back pressure we'll get on the cache against the back
loop. Your bonnie++ block transfer numbers on a 36-drive thumper were
showing about 424MB/s on 100% write and about 1435MB/s on 100% read. It'd
be good to see the vdbench numbers as well (but I've had a hard time
getting my hands on one since most appear to be out at customer sites).

Now, with thumper you are SPoF'd on the motherboard and operating system,
so you're not really getting the availability aspect from dual
controllers. But given the value, you could easily buy 2 and still come
out ahead; you'd have to work out some sort of timely replication of
transactions between the 2 units and deal with failure cases with
something like a cluster framework. Then for multi-initiator cross-system
access we're back to either some sort of NFS or CIFS layer, or we could
always explore target mode drivers and virtualization .. so once again,
there could be a compelling argument coming in that arena as well.
Now, if you already have a big shared FC infrastructure, throwing dense
servers in the middle of it all may not make the most sense yet - but on
the flip side, we could be seeing a shrinking market for single attach,
low cost arrays.

Lastly (for this discussion anyhow) there are the reliability and quality
issues with SATA vs FC drives (bearings, platter materials, tolerances,
head skew, etc) .. couple that with the fact that dense systems aren't so
great when they fail .. so I guess we're right back to choosing the right
systems for the right purposes (ZFS does some great things around failure
detection and workaround) .. but I think we've beaten that point to
death ..

---
.je
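The $/GB figures are straightforward to reproduce from the list prices
above. The sketch below uses naive raw capacity (drive count x nominal
size), so it lands close to, but not exactly on, the quoted numbers, which
may assume slightly different usable capacities:

    # Naive $/GB from the list prices quoted above, using raw capacity only.
    configs = [
        ("X4500, 48 x 250GB SATA",  32995, 48 * 250),
        ("X4500, 48 x 500GB SATA",  69995, 48 * 500),
        ("SE3511, 12 x 500GB SATA", 36995, 12 * 500),
        ("SE3510, 12 x 300GB FC",   48995, 12 * 300),
    ]
    for name, price, gb in configs:
        print("%-26s $%6d / %5d GB = $%5.2f per GB"
              % (name, price, gb, price / float(gb)))
    # Roughly $2.75, $2.92, $6.17 and $13.61 per GB respectively - the same
    # 2x to 5x gap between thumper and the arrays that the post describes.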
Jonathan Edwards wrote:
> Now, with thumper you are SPoF'd on the motherboard and operating
> system, so you're not really getting the availability aspect from dual
> controllers. But given the value, you could easily buy 2 and still come
> out ahead; you'd have to work out some sort of timely replication of
> transactions between the 2 units and deal with failure cases with
> something like a cluster framework.

No. Shared data clusters require that both nodes have access to the
storage. This is not the case for a thumper, where the disks are not
dual-ported and there is no direct access to the disks from an external
port. Thumper is not a conventional highly-redundant RAID array.
Comparing thumper to a SE3510 on a feature-by-feature basis is truly like
comparing apples and oranges.

As far as SPOFs go, all systems which provide a single view of data have
at least one SPOF. Claiming a RAID array does not have a SPOF is denying
truth.

> Then for multi-initiator cross-system access we're back to either some
> sort of NFS or CIFS layer, or we could always explore target mode
> drivers and virtualization .. so once again, there could be a compelling
> argument coming in that arena as well. Now, if you already have a big
> shared FC infrastructure, throwing dense servers in the middle of it all
> may not make the most sense yet - but on the flip side, we could be
> seeing a shrinking market for single attach, low cost arrays.

From a space perspective, I can put a TByte on my desktop today. Death of
the low-end array is assured by bigger drives.

> Lastly (for this discussion anyhow) there are the reliability and
> quality issues with SATA vs FC drives (bearings, platter materials,
> tolerances, head skew, etc) .. couple that with the fact that dense
> systems aren't so great when they fail .. so I guess we're right back to
> choosing the right systems for the right purposes (ZFS does some great
> things around failure detection and workaround) .. but I think we've
> beaten that point to death ..

Agree, in principle. However, the protocol used to connect to the host is
immaterial to the quality of the device. The market segments determine the
quality of the device, and the drive vendors find it in their best
interest to keep consumer devices inexpensive at all costs, and achieve
higher margins on enterprise class devices.

What we've done for thumper is to use a top-of-the-line quality SATA
drive. AFAIK today, the vendor is Hitachi, though we like to have multiple
sources, if they can meet the specifications. Often the vendor and part
information is available on the SunSolve Systems Handbook,
http://sunsolve.sun.com/handbook_pub/Systems
under the Full Components List selection for the specific system. Today,
the Sun Fire X4500 is not listed as it has not reached general
availability yet. Look for it soon.

So, what is thumper good for? Clearly, it can store a lot of data in a
redundant manner (e.g. good for retention). GreenPlum,
http://www.greenplum.com is building data warehouses with them. Various
people are interested in them for streaming media. We don't really know
what else it will be used for; there isn't much to compare against in the
market. What we do know is that it won't be appropriate for replacing your
SE9985 on your ERP system.
 -- richard
Richard,

On 8/2/06 11:37 AM, "Richard Elling" <Richard.Elling at Sun.COM> wrote:
>> Now, with thumper you are SPoF'd on the motherboard and operating
>> system, so you're not really getting the availability aspect from dual
>> controllers. But given the value, you could easily buy 2 and still come
>> out ahead; you'd have to work out some sort of timely replication of
>> transactions between the 2 units and deal with failure cases with
>> something like a cluster framework.
>
> No. Shared data clusters require that both nodes have access to the
> storage. This is not the case for a thumper, where the disks are not
> dual-ported and there is no direct access to the disks from an external
> port. Thumper is not a conventional highly-redundant RAID array.
> Comparing thumper to a SE3510 on a feature-by-feature basis is truly
> like comparing apples and oranges.

That's why Thumper DW is a shared-nothing, fully redundant data warehouse.
We replicate the data among systems so that we can lose up to half of the
total server count while processing.

Basket of apples >>> one big apple with a worm in it.

- Luke
Richard Elling wrote:
> Jonathan Edwards wrote:
>> Now, with thumper you are SPoF'd on the motherboard and operating
>> system, so you're not really getting the availability aspect from dual
>> controllers. [...]
>
> No. Shared data clusters require that both nodes have access to the
> storage. This is not the case for a thumper, where the disks are not
> dual-ported and there is no direct access to the disks from an external
> port. Thumper is not a conventional highly-redundant RAID array.
> Comparing thumper to a SE3510 on a feature-by-feature basis is truly
> like comparing apples and oranges.

Apples and pomegranates perhaps? You could drop the iSCSI target on it and
share the drives a la zvols. The "what is an array, what is a server, what
is both" discussion gets interesting based on the qualities of the thing
that holds the disks.

> As far as SPOFs go, all systems which provide a single view of data have
> at least one SPOF. Claiming a RAID array does not have a SPOF is denying
> truth.

It's the number of SPOFs and the overall reliability that I think Jonathan
was referring to. Of course, we're all systems folks, so component failure
is always in the back of our minds, right? ;)

> From a space perspective, I can put a TByte on my desktop today. Death
> of the low-end array is assured by bigger drives.

It's a sliding window. What was midrange ten years ago is low-end or
desktop today in the capacity and, in many cases, performance context.
Reliability and availability, not so much.
On Wed, 2 Aug 2006, Richard Elling wrote:

> From a space perspective, I can put a TByte on my desktop today. Death
> of the low-end array is assured by bigger drives.

I respectfully disagree. I think there will always be a need for low-end
arrays, regardless of the size of the individual disks. I like to keep my
OS and data/apps separate (on separate drives, preferably) -- and I doubt
I'm alone. Many of today's smaller servers come with only two disks, which
is fine for mirroring root and swap, but the only place to put one's data
is on an external array.

There are many situations where low-end storage (in terms of numbers of
spindles) would be very useful, hence my blog entry a while ago wishing
that Sun would produce a 1U, 8-drive SAS array at an affordable price (at
least one company has such a product, but I want to buy only Sun HW).

--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich