Digg linked to an article about the Apple port of ZFS (http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhss). I don't have a Mac, but I was interested in ZFS.

The article says that ZFS eliminates the need for a RAID card and is faster because the striping runs on the main CPU rather than on an old chipset on a card. My question is: is this true? Can I install OpenSolaris with ZFS and stripe and mirror a bunch of SATA disks for a home NAS server? I would sure like to do that, but the cost of good RAID cards has put me off; maybe this is the solution.

This message posted from opensolaris.org
kevin williams writes:
> The article says that ZFS eliminates the need for a RAID card and is
> faster because the striping runs on the main CPU rather than on an old
> chipset on a card. My question is: is this true? Can I install
> OpenSolaris with ZFS and stripe and mirror a bunch of SATA disks for a
> home NAS server?

The cache may give RAID cards an edge, but ZFS gives near-platter speeds in its various configurations. The Thumper is a perfect example of a ZFS appliance.

So yes, you can use OpenSolaris for a home NAS server.

Ian
kevin williams wrote:
> Can I install OpenSolaris with ZFS and stripe and mirror a bunch of
> SATA disks for a home NAS server? I would sure like to do that, but
> the cost of good RAID cards has put me off; maybe this is the solution.

Hi Kevin,
Personally, I'd argue that if you've got a RAID card or array to use, you should take advantage of it _in conjunction_ with using ZFS. If you don't have a RAID card or array, then still use ZFS so you get the speed and data integrity benefits.

There are several threads on the zfs-discuss mailing list which talk about the configs that people have used to set up home NAS servers. The searchable pages are at http://www.opensolaris.org/jive/forum.jspa?forumID=80

You might want to have a look at these two wiki docs:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide

cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp    http://www.jmcp.homeunix.com/blog
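For a concrete sketch of the kind of layout those guides describe, here is a minimal striped pair of mirrors built from four SATA disks. The device names (c1t0d0 and friends) are placeholders; substitute whatever format(1M) reports on your machine:

  # zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  # zpool status tank
  # zfs create tank/media

ZFS stripes across the two mirror vdevs on its own; there is no separate "RAID-10" step and no card involved.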
It is indeed true, and you can.

On 6/22/08, kevin williams <kevin at netkev.com> wrote:
> Can I install OpenSolaris with ZFS and stripe and mirror a bunch of
> SATA disks for a home NAS server?
On Mon, Jun 23, 2008 at 11:13:49AM +1200, ian at ianshome.com wrote:
> The cache may give RAID cards an edge, but ZFS gives near-platter
> speeds in its various configurations. The Thumper is a perfect example
> of a ZFS appliance.

I get very acceptable performance out of my Sun Ultra-80 with 4x 450MHz US-II CPUs and 4GB RAM. I can't wait to upgrade to something a tad faster. :)

> So yes, you can use OpenSolaris for a home NAS server.

Absolutely, yes. And you don't need the newest, shiniest hardware to do it either. If you are building some super media-streaming monster box, then, well, sure, you do. If you are building your average home NAS box, though, it really isn't necessary to get the latest and greatest hardware.

That being said, the best thing you can do for a machine running ZFS is to give it as much RAM as it is able to hold.

-brian
--
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
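If you want to see how much of that RAM ZFS is actually holding in its cache (the ARC), one quick check is the arcstats kstat, which reports the current ARC size in bytes:

  # kstat -p zfs:0:arcstats:size

On a box with free memory the ARC will grow to consume most of it over time; that is normal and by design.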
On Sun, 22 Jun 2008, kevin williams wrote:
> The article says that ZFS eliminates the need for a RAID card and is
> faster because the striping runs on the main CPU rather than on an old
> chipset on a card. My question is: is this true?

Ditto what the other guys said. Since ZFS may generate more I/O traffic from the CPU, you will want an adaptor with lots of I/O ports. SATA/SAS with a port per drive is ideal. It is useful to have an NVRAM cache on the card if you will be serving NFS or running a database, although some vendors sell this NVRAM cache as a card which plugs into the backplane and uses a special driver.

ZFS is memory-hungry, so 4GB of RAM is a good starting point for a server. Make sure that your CPU and OS are able to run a 64-bit kernel.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
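Two quick sanity checks along those lines, both stock Solaris commands:

  # isainfo -kv            (confirms whether a 64-bit kernel is running)
  # prtconf | grep Memory  (reports installed RAM)

A 32-bit kernel severely constrains the ZFS cache, so this is worth verifying before blaming the disks.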
I agree with the other comments. From day one, ZFS has been fine-tuned for JBODs. While RAID cards are welcome, ZFS will perform better with JBODs. Most RAID cards have limited power and bandwidth to support the platter speeds of the newer drives, and the ZFS code seems to be more intelligent about caching.

A few days ago a customer tested a Sun Fire X4500 connected to a network with 4 x 1 Gbit ethernets. The X4500 has modest CPU power and does not use any RAID card. The unit easily performed at 400 MB/sec on writes in LAN tests, clearly limited by the ethernet ports.

Mertol

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at Sun.COM

-----Original Message-----
From: Bob Friesenhahn
Subject: Re: [zfs-discuss] raid card vs zfs

> Since ZFS may generate more I/O traffic from the CPU, you will want an
> adaptor with lots of I/O ports. SATA/SAS with a port per drive is
> ideal.
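For context, the arithmetic backs that up:

  4 ports x 1 Gbit/sec = 4 Gbit/sec = 500 MB/sec raw

so roughly 400 MB/sec of payload after TCP/IP and protocol overhead is about as much as those four ports can carry; the disks and CPU were not the limit.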
On 6/23/08 6:22 AM, "Mertol Ozyoney" <Mertol.Ozyoney at Sun.COM> wrote:
> A few days ago a customer tested a Sun Fire X4500 connected to a
> network with 4 x 1 Gbit ethernets. The X4500 has modest CPU power and
> does not use any RAID card. The unit easily performed at 400 MB/sec on
> writes in LAN tests, clearly limited by the ethernet ports.

This is what we are seeing with our X4500. Clearly, the four Ethernet channels are our limiting factor. We put 10Gbps Ethernet on the unit, but as this is currently the only 10-gig host on our network (waiting for VMware drivers to support the X6250 cards we bought), I can't really test that fully.

We're using this as an NFS/Samba server, so JBOD with ZFS is "fast enough." I'm waiting for COMSTAR and ADM to really take advantage of the Thumper platform. The "complete storage stack" that Sun and the OpenSolaris project have envisioned will make such "commodity" hardware a useful piece of our solution. I love our EMC/Brocade/HP SAN gear, but it's just too expensive to scale (particularly when it comes to total data management).

Charles
On Mon, 23 Jun 2008, Mertol Ozyoney wrote:
> The unit easily performed at 400 MB/sec on writes in LAN tests, clearly
> limited by the ethernet ports.

Are there any NFS/CIFS write performance numbers for this system using 10Gbit ethernet or Infiniband? How does single client/application write performance compare with multiple-client write performance? I am wondering what percentage of available filesystem bandwidth can be effectively made available to "power user" type clients.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Attached.

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com

-----Original Message-----
From: Bob Friesenhahn
Subject: Re: [zfs-discuss] raid card vs zfs

> Are there any NFS/CIFS write performance numbers for this system using
> 10Gbit ethernet or Infiniband?

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 523428_0994b0be80e7f14e.pdf
Type: application/pdf
Size: 167209 bytes
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20080625/584d7605/attachment.pdf>
I see that the configuration tested in this X4500 writeup only uses the four built-in gigabit ethernet interfaces. This places a natural limit on the amount of data which can stream from the system. For local host access, I am achieving this level of read performance using one StorageTek 2540 (6 mirror pairs) and a single reading process. The X4500 with 48 drives should be capable of far more.

The X4500 has two expansion bus slots, but they are only 64-bit 133MHz PCI-X, so it seems that the ability to add bandwidth via more interfaces is limited. A logical improvement to the design is to offer PCI-E slots which can support 10Gbit ethernet, Infiniband, or Fiber Channel cards so that more of the internal disk bandwidth is available to "power user" type clients.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> The X4500 has two expansion bus slots, but they are only 64-bit 133MHz
> PCI-X, so it seems that the ability to add bandwidth via more
> interfaces is limited.

Uhhh... 64bit/133mhz is 17Gbit/sec. I *HIGHLY* doubt that bus will be a limit. Without some serious offloading, you aren't pushing that amount of bandwidth out the card. Most systems I've seen top out around 6bit/sec with current drivers.
On Wed, 25 Jun 2008, Tim wrote:
> Uhhh... 64bit/133mhz is 17Gbit/sec. I *HIGHLY* doubt that bus will be a
> limit. Without some serious offloading, you aren't pushing that amount
> of bandwidth out the card. Most systems I've seen top out around
> 6bit/sec with current drivers.

In that case, perhaps someone has installed a faster interface card and can post some performance numbers.

There are useful applications which require 4Gbit streaming performance (300MB/second continuous) to a client via a single interface.

Gigabit ethernet is now a typical client desktop interface, and clients are able to saturate it.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Wed, Jun 25, 2008 at 1:19 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> There are useful applications which require 4Gbit streaming performance
> (300MB/second continuous) to a client via a single interface.

The issue is cost. It's still cheaper for someone to buy two quad-port gig-e cards and trunk all the interfaces than it is for them to buy a single 10Gb card.
Tim wrote:
> Uhhh... 64bit/133mhz is 17Gbit/sec. I *HIGHLY* doubt that bus will be a
> limit. Without some serious offloading, you aren't pushing that amount
> of bandwidth out the card.

Ummm, 133MHz is just slightly above 1/8 GHz, and 64 bits is 8 x 8 bits. Multiplying yields 8Gbits/sec, or 1GByte/sec. So even if you have two PCI-X (64-bit/133MHz) slots that are independent, that would yield at best 2GB/sec. The Sun Fire X4500 is capable of doing >3GB/sec I/O to the disks, so you would still be network band limited. Of course, if you are using ZFS and/or mirroring, that >3GB/sec from the disks goes down dramatically, so for practical purposes the 2GB/sec limit may well be enough.
On Wed, Jun 25, 2008 at 3:13 PM, Lida Horn <Lida.Horn at sun.com> wrote:
> Ummm, 133MHz is just slightly above 1/8 GHz, and 64 bits is 8 x 8 bits.
> Multiplying yields 8Gbits/sec, or 1GByte/sec.

Actually 8.5Gbit, I was looking at the DDR line of my chart :)
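For the record, the corrected numbers work out to:

  64 bits x 133 MHz = 8.5 Gbit/sec, or about 1.06 GByte/sec per PCI-X bus

which is roughly 2 GB/sec across the X4500's two independent slots. The 17 Gbit figure corresponds to the PCI-X 2.0 (266 MHz DDR) row of the same table.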
On 6/25/08 12:57 PM, "Tim" <tim at tcsac.net> wrote:
> Most systems I've seen top out around 6bit/sec with current drivers.

Wow, 6bps! You need a new acoustic coupler ;)

I think the X4500 designers appreciate the bandwidth ceiling, as the 10Gig card we put in ours is single-port, while the cards we have for our X6250s are dual-port (PCIe).

Charles
On 6/25/08 2:50 PM, "Tim" <tim at tcsac.net> wrote:
> The issue is cost. It's still cheaper for someone to buy two quad-port
> gig-e cards and trunk all the interfaces than it is for them to buy a
> single 10Gb card.

At the moment, this is quite true. Costs per port are going down (even 10Gig), but you get quite good performance with a 4-link aggregate on the X4500. You could go 8-way if you add another 4-port PCI-X card. IIRC, Solaris 10 supports up to 16-way at this speed (but at some point you're probably hitting a plateau).

Charles
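For anyone setting that up, here is a minimal sketch of a 4-way aggregate on Solaris 10. Interface names and the address are placeholders, and the switch ports must be configured to match (802.3ad):

  # dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1
  # ifconfig aggr1 plumb 192.168.10.5 netmask 255.255.255.0 up
  # dladm show-aggr

Recent OpenSolaris builds use link names instead (dladm create-aggr -l <link> ... <name>). Keep in mind that a single TCP flow still rides one physical link, so aggregation raises total throughput across clients rather than the speed of any one stream.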
On Wed, Jun 25, 2008 at 02:50:40PM -0500, Tim wrote:
> The issue is cost. It's still cheaper for someone to buy two quad-port
> gig-e cards and trunk all the interfaces than it is for them to buy a
> single 10Gb card.

Huh? You can get a two-port 10Gb card from Sun (without bugging your sales rep for discounts) for $1200. Even with our super-duper EDU discount with Dell, we are still paying >$400 for quad-port 1Gb cards.

So for $1200 you can get:

2x10Gb = 20Gb

or:

3x4x1Gb = 12Gb

So it's not very true that 10Gb cards cost more than quad-port cards. Switch blades on the other hand................ (Yes, we're a Cisco shop.)

-brian
--
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
Good analysis. Generally the X4500 can stream 2.5 GB/sec disk to RAM and around 1.2-1.4 GB/sec RAM to disk. Network to disk or disk to network most of the time depends on the network interface.

With Infiniband we are seeing 1 GB/sec transfer speeds [2 OS disks, 6 spares]. With dual IB, initial results show a good sustained 1.2 GB/sec throughput.

When you compare this performance with anything offered today, it's very cost effective. Changing to PCI-E and offering more independent busses would increase the performance slightly, but today the X4500 already offers more than enough performance for most things.

Mertol

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com

-----Original Message-----
From: Bob Friesenhahn
Subject: Re: [zfs-discuss] raid card vs zfs

> The X4500 has two expansion bus slots, but they are only 64-bit 133MHz
> PCI-X, so it seems that the ability to add bandwidth via more
> interfaces is limited.
Please note that I/O speeds exceeding 1 GB/sec can be limited by several components, including the OS or device drivers. Current maximum performance I have seen is around 1.25 GB/sec through dual IB. Anyway, that's great real-world performance considering the price and size of the unit.

Mertol

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com

-----Original Message-----
From: Lida Horn
Subject: Re: [zfs-discuss] raid card vs zfs

> So even if you have two PCI-X (64-bit/133MHz) slots that are
> independent, that would yield at best 2GB/sec.