Ian P. Christian
2006-Sep-30 18:38 UTC
[Xen-users] Network based storage - NBD/AoE/iSCSI other?
I'm currently investigating setting up a few Xen servers. I've noticed that live migration of Xen domains currently requires you to use network-based storage. So, I've looked at a few of the options, and they all strike me as reasonably 'new' to Linux, so I'm not sure how stable they are.

iSCSI looked like a good bet, but the fact that it uses TCP/IP seems like crazy overhead when I'm most likely going to be hosting the storage on the same layer 2 network as the servers.

AoE solves that concern, but I'm confused as to what would happen should I need to move from SATA storage drives to SCSI for performance reasons.

Someone also advised me against NBD, saying that his notes suggest it will not start servicing a second write until the first has completed.

Having googled, I've not found a huge amount of documentation on any of these options, and I'm unsure which option to go with. I was hoping people might take some time to write to the list with their experiences with these technologies.

Many thanks,

-- 
Ian P. Christian ~ http://pookey.co.uk
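For context, the "shared storage" requirement really just means every dom0 must see the guest's disk under the same device path. A minimal sketch of what that looks like, assuming Xen 3.x syntax; the device path, domain name and hostname here are made-up examples:

    # /etc/xen/domU.cfg on both hosts: disk must point at shared storage
    # (e.g. an AoE or iSCSI block device), not a host-local file
    disk = [ 'phy:/dev/etherd/e0.0,sda1,w' ]

    # /etc/xen/xend-config.sxp on both hosts: enable the relocation server
    (xend-relocation-server yes)
    (xend-relocation-port 8002)

    # then, from the source host:
    xm migrate --live domU otherhost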
Javier Guerra
2006-Sep-30 19:25 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Saturday 30 September 2006 1:38 pm, Ian P. Christian wrote:
> AoE solves that concern, but I'm confused as to what would happen
> should I need to move from SATA storage drives to SCSI for performance
> reasons.

I don't think there are any "performance reasons" favouring SCSI over SATA drives; but note that the AoE 'server' (vblade) is open source, so you can build your own storage with any kind of hardware, RAID and/or volume management, and export it via AoE (see the sketch below).

Coraid's 15-drive box is just a P4 SuperMicro mainboard with two SuperMicro 8-port SATA cards in a 3U SuperMicro case, and an IDE flash drive with the Plan 9 OS and their own software. Nothing hard to replicate...

-- 
Javier
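To make that concrete, a minimal sketch of exporting a local logical volume with vblade; the interface and volume names are hypothetical:

    # on the storage box: export an LVM volume as AoE shelf 0, slot 0, on eth1
    vblade 0 0 eth1 /dev/vg0/xen-guest1

    # vbladed does the same but detaches and logs via syslog

On the Xen hosts the export then shows up as a normal block device once the aoe driver is loaded (see further down the thread).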
Tom Mornini
2006-Sep-30 19:39 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sep 30, 2006, at 11:38 AM, Ian P. Christian wrote:
> I'm currently investigating setting up a few Xen servers. I've noticed
> that live migration of Xen domains currently requires you to use
> network-based storage.
>
> So, I've looked at a few of the options, and they all strike me as
> reasonably 'new' to Linux, so I'm not sure how stable these options are.
>
> iSCSI looked like a good bet, but the fact it's using TCP/IP seems like
> a crazy overhead to use when I'm most likely going to be hosting the
> storage on the same layer 2 network as the servers.

Agreed.

> AoE solves that concern but I'm confused as to what would happen
> should I need to move from SATA storage drives to SCSI for performance
> reasons.

Are you sure that's the right solution? There are many ways to improve performance, and adding more drives might be a better solution than SCSI drives. SATA supports NCQ, though I understand that Coraid does not yet take advantage of it, and high rotation rate SATA drives are beginning to appear.

You can have a LOT more drives with AoE than with just about any other solution out there. Additionally, they're about to introduce 10GE aggregators, and this will improve latency and throughput to or above Fibre Channel standards.

Put me down in the AoE column :-)

-- 
-- Tom Mornini
Harald Kubota
2006-Oct-01 00:11 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
Ian P. Christian wrote:
> I'm currently investigating setting up a few Xen servers. I've noticed
> that live migration of Xen domains currently requires you to use
> network-based storage.

There is no such requirement. FC is certainly OK, as is anything which can be shared and directly connected (e.g. FireWire drives with a bridge from Oxford), and of course the old way of using (parallel) SCSI drives (which have their own problems). Using the network is likely the cheapest and most convenient option unless you happen to have FC already.

> iSCSI looked like a good bet, but the fact it's using TCP/IP seems like
> a crazy overhead to use when I'm most likely going to be hosting the
> storage on the same layer 2 network as the servers.

The overhead is not that large, and it's nice to be using a standard which will continue to exist for a while. Should you need performance, you can replace the iSCSI target (the server with the storage) with a faster box without changing anything on the iSCSI initiator side (the machine running Xen).

> AoE solves that concern but I'm confused as to what would happen
> should I need to move from SATA storage drives to SCSI for performance
> reasons.

The A in "AoE" only says that it uses the ATA protocol to move data. It does not matter what storage device you actually use. It can be a USB stick if you have to. Or any LVM storage. Or 15k rpm FC disks.

Both iSCSI and AoE like to have their own storage network, just like any SAN has: otherwise you have contention on the one network you use. That of course does not matter much if the data you handle is in the sub-MB range.

> Someone also advised me against NBD, saying that his notes suggest that
> it will not start servicing a second write until the first has completed.

That's how good mirroring is supposed to work. But if you mirror over iSCSI/AoE, you have the same penalty.

Then there's good old NFS you can use.

Harald
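Since NBD keeps coming up but nobody has shown it: a minimal sketch of the classic nbd-tools setup, using the old port-based syntax and made-up host/volume names:

    # on the storage box: export a block device (or file) on TCP port 2000
    nbd-server 2000 /dev/vg0/xen-guest1

    # on the Xen host: attach the export as /dev/nbd0
    modprobe nbd
    nbd-client storagebox 2000 /dev/nbd0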
Luke Crawford
2006-Oct-01 04:38 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sun, 1 Oct 2006, Harald Kubota wrote:
> Ian P. Christian wrote:
>> I'm currently investigating setting up a few Xen servers. I've noticed
>> that live migration of Xen domains currently requires you to use
>> network-based storage.
>
> Using the network is likely the cheapest and most convenient option
> unless you happen to have FC already.

If you are the boss, used 1G FC is quite a bit cheaper (and faster) than used 1G Ethernet. However, most bosses refuse to use used stuff; and some people think that commodity Ethernet will scale faster than commodity FC, so it's better to just run Ethernet everywhere. (These people may be right; my point still stands that 1G Fibre Channel, bought used, gives you better storage performance per dollar than 1G Ethernet.)
Tom Mornini
2006-Oct-01 06:30 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sep 30, 2006, at 9:38 PM, Luke Crawford wrote:
> On Sun, 1 Oct 2006, Harald Kubota wrote:
>> Ian P. Christian wrote:
>>> I'm currently investigating setting up a few Xen servers. I've noticed
>>> that live migration of Xen domains currently requires you to use
>>> network-based storage.
>>
>> Using the network is likely the cheapest and most convenient option
>> unless you happen to have FC already.
>
> If you are the boss, used 1G FC is quite a bit cheaper (and faster)
> than used 1G Ethernet.

Faster, probably (I'm certainly not arguing), but the big storage vendors have recently said that 4Gb fibre will be the top speed for years to come.

Cheaper? Are you talking about buying used FC disks as well? Because FC disks -vs- SATA disks is no comparison in terms of $/GB. It's my understanding that *most* FC solutions require FC disks...

> However, most bosses refuse to use used stuff; and some people think
> that commodity Ethernet will scale faster than commodity FC, so it's
> better to just run Ethernet everywhere. (These people may be right; my
> point still stands that 1G Fibre Channel, bought used, gives you better
> storage performance per dollar than 1G Ethernet.)

Performance, yes, but how about capacity? And just how much faster is it?

Here's an article suggesting 4Gb will dominate the FC market until 2010:

http://www.internetnews.com/storage/article.php/3627306

Something tells me commodity 10GE will be cheaper by then.

-- 
-- Tom Mornini
Luke Crawford
2006-Oct-01 08:06 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sat, 30 Sep 2006, Tom Mornini wrote:
> On Sep 30, 2006, at 9:38 PM, Luke Crawford wrote:
>> If you are the boss, used 1G FC is quite a bit cheaper (and faster) than
>> used 1G Ethernet.
>
> Faster, probably (I'm certainly not arguing), but the big storage vendors
> have recently said that 4Gb fibre will be the top speed for years to come.

Yes, I think I mentioned that this equation may change when 10G Ethernet becomes affordable. I would be surprised if FC were still the best choice 5 years from now; however, even if I go with 1G Ethernet now, I'll still have to buy all new equipment when the 10G stuff comes out, so I might as well get the most performance for my dollar now.

(Actually, I predict that SAS, and not 10G Ethernet, will be the best solution 5 years from now. I would be using SAS now if I could buy affordable components from different vendors and reasonably expect them to work together, as I can with FC. Of course, this is just my prediction, and it is worth exactly what you paid for it.)

> Cheaper? Are you talking about buying used FC disks as well? Because FC
> disks -vs- SATA disks is no comparison in terms of $/GB. It's my
> understanding that *most* FC solutions require FC disks...

You can get 12-bay SATA -> FC arrays for around $1K. If you know where to get cheaper SATA -> gigabit Ethernet arrays, I'd like to know about it.

http://cgi.ebay.com/EMC-AX100-Fibre-Channel-SATA-Drive-Array_W0QQitemZ270035121613QQihZ017QQcategoryZ80219QQssPageNameZWDVWQQrdZ1QQcmdZViewItem?hash=item270035121613

I think 7.2K rpm SATA drives are not up to snuff for virtual hosting (at least not on my systems - these disks are quite shared and heavily used; when I was using SATA, I had disk I/O latency issues with only 10 dns/mail/internal infrastructure servers). I imagine a write-back cache of some sort (which most high-end redundant NAS units have - or simply mounting all your disks async) would solve this problem, but it is rather expensive to do that properly. Most SATA NAS units are just a single PC, so if they enable write-back caching and the box panics or your new admin pulls the power plug, you have issues. But really, if you can get away with IDE disk, you can probably get away with NFS over 100Mbps, which is cheaper and easier than FC.

>> However, most bosses refuse to use used stuff; and some people think
>> that commodity Ethernet will scale faster than commodity FC, so it's
>> better to just run Ethernet everywhere. (These people may be right; my
>> point still stands that 1G Fibre Channel, bought used, gives you better
>> storage performance per dollar than 1G Ethernet.)
>
> Performance, yes, but how about capacity? And just how much faster is it?

For me, capacity is a minor issue compared to latency under heavy concurrent access. I think IOPS is where SCSI (and SCSI over FC) disks really show their worth.

(And yes, I usually use used disks; I mirror them and run a SMART monitor on them, so the reduced reliability isn't a huge deal. I would *not* recommend using RAID5 with used disks - well, I don't recommend RAID5 in general, except as a substitute for a stripe that is less of a pain in the ass to rebuild, simply because RAID5 performance drops precipitously during a rebuild; your array is essentially down for a day if you are running it near capacity.)

Like I said, if you are just going for capacity, use IDE over NFS on a 10/100 network. A gigabit network might be worth it if most of your stuff is sequential (as IDE comes pretty darn close and sometimes beats SCSI for sequential access), but in my environment there really is no such thing as sequential access.

My main point was that compared to a 1Gb Ethernet 'dedicated to storage' network, a 1Gb FC network is cheaper and faster; I believe this to be true even if your end disks are SATA. (But like I said, I might be wrong on that part; I'm not really familiar with pricing for network-attached IDE arrays; I can't afford gigabit Ethernet equipment of a quality I'd like to maintain, and I use 10 or 15K SCSI/FC for everything that matters anyhow.)
Pasi Kärkkäinen
2006-Oct-01 13:41 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sat, Sep 30, 2006 at 07:38:41PM +0100, Ian P. Christian wrote:
> I'm currently investigating setting up a few Xen servers. I've noticed
> that live migration of Xen domains currently requires you to use
> network-based storage.
>
> So, I've looked at a few of the options, and they all strike me as
> reasonably 'new' to Linux, so I'm not sure how stable these options are.
>
> iSCSI looked like a good bet, but the fact it's using TCP/IP seems like
> a crazy overhead to use when I'm most likely going to be hosting the
> storage on the same layer 2 network as the servers.

iSCSI is nice; you can play and test with your own custom iSCSI server (= target), maybe built with the Linux iSCSI Enterprise Target software plus software RAID and LVM (a minimal target setup is sketched below).

If you need more speed, reliability and manageability, you can buy one of the "hardware" targets, for example the Equallogic PS-series:

with 750G 7.2K SATA2 drives: http://www.equallogic.com/products/view.aspx?id=1791
with 146G 15K SAS drives: http://www.equallogic.com/products/view.aspx?id=1989

Those will give you 60 000 IOPS per box, and 300 MB/sec (3 x GE per box). They also include "group management", so whether you have 1 or 20 boxes you still manage (and use) them like 1 box, with automatic load balancing between boxes and ports.

- Pasi
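A minimal sketch of such a do-it-yourself target, assuming iSCSI Enterprise Target on the storage box and open-iscsi on the Xen host; the IQN, volume path and IP address are made up, and the iscsiadm syntax shown is the common open-iscsi form:

    # /etc/ietd.conf on the storage box: export an LVM volume as LUN 0
    Target iqn.2006-10.example.storage:xen.guest1
        Lun 0 Path=/dev/vg0/xen-guest1,Type=fileio

    # on the Xen host: discover the target and log in
    iscsiadm -m discovery -t sendtargets -p 192.168.2.10
    iscsiadm -m node -T iqn.2006-10.example.storage:xen.guest1 -p 192.168.2.10 --login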
John Madden
2006-Oct-02 14:08 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
> Coraid's 15-drive box is just a P4 SuperMicro mainboard with two
> SuperMicro 8-port SATA cards in a 3U SuperMicro case, and an IDE flash
> drive with the Plan 9 OS and their own software. Nothing hard to
> replicate...

And their performance numbers are relatively terrible, so you should be able to do much better by building it on your own. (Unless the bad performance numbers are due to AoE itself, which I somewhat doubt, but regardless, I was able to do about 30% better with iSCSI.)

John

-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
Tom Mornini
2006-Oct-02 14:39 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Oct 2, 2006, at 7:08 AM, John Madden wrote:
>> Coraid's 15-drive box is just a P4 SuperMicro mainboard with two
>> SuperMicro 8-port SATA cards in a 3U SuperMicro case, and an IDE flash
>> drive with the Plan 9 OS and their own software. Nothing hard to
>> replicate...
>
> And their performance numbers are relatively terrible, so you should be
> able to do much better by building it on your own. (Unless the bad
> performance numbers are due to AoE itself, which I somewhat doubt, but
> regardless, I was able to do about 30% better with iSCSI.)

In terms of what? We're getting a very high IOPS rate, and nearly saturating gigabit ethernet.

Not arguing, just curious!

-- 
-- Tom Mornini
John Madden
2006-Oct-02 14:49 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
> > And their performance numbers are relatively terrible, so you should be
> > able to do much better by building it on your own. (Unless the bad
> > performance numbers are due to AoE itself, which I somewhat doubt, but
> > regardless, I was able to do about 30% better with iSCSI.)
>
> In terms of what? We're getting a very high IOPS rate, and nearly
> saturating gigabit ethernet.

(Saturating the link is fine as long as your performance is good as well.) Anyway, the numbers I've read from them show something in the neighborhood of really terrible: http://www.linuxjournal.com/article/8149 -- 23.58MB/s reads. I'd expect that performance out of a single ATA disk. IOPS is obviously something different, but I don't believe that article mentions it.

John

-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
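For reference, the kind of number being quoted there is typically a single-stream sequential read, measured with something like the following (device name hypothetical); it says nothing about concurrent IOPS, which is the caveat being raised:

    # crude sequential-read test: pull 1 GB off the AoE device and discard it
    dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=1000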
Tom Mornini
2006-Oct-02 15:08 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Oct 2, 2006, at 7:49 AM, John Madden wrote:
>>> And their performance numbers are relatively terrible, so you should be
>>> able to do much better by building it on your own. (Unless the bad
>>> performance numbers are due to AoE itself, which I somewhat doubt, but
>>> regardless, I was able to do about 30% better with iSCSI.)
>>
>> In terms of what? We're getting a very high IOPS rate, and nearly
>> saturating gigabit ethernet.
>
> (Saturating the link is fine as long as your performance is good as
> well.) Anyway, the numbers I've read from them show something in the
> neighborhood of really terrible:
> http://www.linuxjournal.com/article/8149 -- 23.58MB/s reads. I'd expect
> that performance out of a single ATA disk. IOPS is obviously something
> different, but I don't believe that article mentions it.

That article is 18 months old.

That is their older box, which had 10 drives, 10 100Mb ethernet cards and no built-in RAID functionality. It is completely and totally irrelevant in this discussion.

iSCSI is pointless unless you need to traverse a router, and will be slower than AoE.

-- 
-- Tom Mornini
John Madden
2006-Oct-02 15:32 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
> That article is 18 months old.
>
> That is their older box, which had 10 drives, 10 100Mb ethernet cards
> and no built-in RAID functionality.

Ten 100Mb ethernet cards should get you far more than 24MB/s reads. Even if your ATA disks are so slow as to only push 5 or 6 MB/s, you should thus be able to pull down 50 or 60MB/s.

> It is completely and totally irrelevant in this discussion.

Easy, killer. Anyway, my point was not to slam Coraid's boxes, but to point out that you can build your own AoE targets and likely do better than their hardware can -- 100mbit hardware or not.

John

-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
Luke Crawford
2006-Oct-02 17:39 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Mon, 2 Oct 2006, Tom Mornini wrote:
>>> Coraid's 15-drive box is just a P4 SuperMicro mainboard with two
>>> SuperMicro 8-port SATA cards in a 3U SuperMicro case, and an IDE flash
>>> drive with the Plan 9 OS and their own software. Nothing hard to
>>> replicate...
>>
>> And their performance numbers are relatively terrible, so you should be
>> able to do much better by building it on your own. (Unless the bad
>> performance numbers are due to AoE itself, which I somewhat doubt, but
>> regardless, I was able to do about 30% better with iSCSI.)
>
> In terms of what? We're getting a very high IOPS rate, and nearly
> saturating gigabit ethernet.

Write-through or write-back cache? Lots of systems default to write-back, as the performance will be worlds better with write-back, at the expense of safety.

(Just saying, 'cause it would explain the different performance numbers.)
Luke Crawford
2006-Oct-02 17:59 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Mon, 2 Oct 2006, John Madden wrote:
> (Saturating the link is fine as long as your performance is good as
> well.) Anyway, the numbers I've read from them show something in the
> neighborhood of really terrible:
> http://www.linuxjournal.com/article/8149 -- 23.58MB/s reads. I'd expect
> that performance out of a single ATA disk. IOPS is obviously something
> different, but I don't believe that article mentions it.

I find disk benchmarks with dd amusing. I suppose it works if all you are using the storage for is backup and other sequential tasks - but for my load, you'd want to benchmark 20 parallel dd processes, and then latency is more important than throughput. The funny thing is, IDE usually does win the throughput race if you stripe/RAID over a couple of buses - 23MB/sec is a lot less than I would expect in a sequential transfer from 10+ IDE disks.

Still, we saw the same thing on our (horribly overpriced) EMC NAS at one of the last places I worked. I think the partition we were testing had 4 10K RPM SCSI drives, it had 4GB of cache (in a redundant configuration, running in write-back mode), and it had something like 6 gigE connections. From one box we got mediocre sequential speed; but we found that we could do the same transfer from 10 separate client boxes and still get the same speed from each client - making it not so mediocre after all. (I would be quite surprised if the Coraid box could stack up to the $100K+ EMC, even given the AoE advantage - the EMC was NFS - but my point is that being able to adequately service multiple clients is usually more important than sequential transfer to just one.)

Oh, also: jumbo frames make more difference than the choice of NFS server for large transfers over NFS. I do not know how true that is of AoE, but if I were troubleshooting slow AoE access, jumbo frames would be one of the first things I'd try.
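For anyone who wants to try both suggestions, a rough sketch; the interface and device names are hypothetical, and the switch (and both ends of the link) must also be configured for jumbo frames:

    # bump the storage NIC to a 9000-byte MTU
    ifconfig eth1 mtu 9000

    # crude concurrency test: 10 parallel readers at different offsets on the
    # same AoE device, instead of a single sequential dd
    for i in 1 2 3 4 5 6 7 8 9 10; do
        dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=500 skip=$((i * 1000)) &
    done
    wait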
Luke Crawford
2006-Oct-02 18:07 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sat, 30 Sep 2006, Ian P. Christian wrote:
> I'm currently investigating setting up a few Xen servers. I've noticed
> that live migration of Xen domains currently requires you to use
> network-based storage.
>
> So, I've looked at a few of the options, and they all strike me as
> reasonably 'new' to Linux, so I'm not sure how stable these options are.

If "stable and tested" are the first priorities, I believe NFS has been in use longer than I have been alive. It's not fast, and you may hit locking issues, but it is extremely simple, and if you need help, any UNIX guy that has been around for more than a couple of years should be able to help you out.

As I said in a previous post, jumbo frames make a huge difference when it comes to NFS performance. It's still not fast, but it is stable and tested.
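A minimal sketch of the NFS route for Xen guests, with hypothetical hostnames and paths (file-backed images on an NFS mount; NFS-root guests are also possible but not shown):

    # /etc/exports on the NFS server
    /srv/xen  192.168.2.0/24(rw,no_root_squash,sync)

    # on each dom0: mount the share...
    mount -o rw,hard,intr storagebox:/srv/xen /srv/xen

    # ...and point the guest at a file-backed image (/etc/xen/guest1.cfg)
    disk = [ 'file:/srv/xen/guest1.img,sda1,w' ]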
Ian P. Christian
2006-Oct-02 18:14 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
Ian P. Christian wrote:
> iSCSI looked like a good bet, but the fact it's using TCP/IP seems like
> a crazy overhead to use when I'm most likely going to be hosting the
> storage on the same layer 2 network as the servers.
>
> AoE solves that concern but I'm confused as to what would happen
> should I need to move from SATA storage drives to SCSI for performance
> reasons.

I'll be looking at these two technologies in more detail (so far I've played with AoE - dead easy to set up, see the sketch below). I'll do my best to document my findings with regard to speed; it will be interesting to see the difference.

They do solve different problems, though: the fact that iSCSI is routable means it's deployable in situations where AoE isn't (without layer 2 tunnelling magic).

So, I'll mail back with my findings; hopefully it will help some.

-- 
Ian P. Christian ~ http://pookey.co.uk
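For anyone else trying it, the client side really is just a couple of commands (aoetools assumed; shelf/slot numbers depend on how the vblade or Coraid export was configured):

    # on the Xen host: load the AoE driver and scan the LAN for exports
    modprobe aoe
    aoe-discover
    aoe-stat                          # lists devices such as e0.0

    # the export appears as an ordinary block device...
    fdisk -l /dev/etherd/e0.0

    # ...which can be handed straight to a guest in its Xen config
    disk = [ 'phy:/dev/etherd/e0.0,sda1,w' ]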
Tom Mornini
2006-Oct-02 19:13 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Oct 2, 2006, at 10:39 AM, Luke Crawford wrote:
> On Mon, 2 Oct 2006, Tom Mornini wrote:
>>>> Coraid's 15-drive box is just a P4 SuperMicro mainboard with two
>>>> SuperMicro 8-port SATA cards in a 3U SuperMicro case, and an IDE flash
>>>> drive with the Plan 9 OS and their own software. Nothing hard to
>>>> replicate...
>>>
>>> And their performance numbers are relatively terrible, so you should be
>>> able to do much better by building it on your own. (Unless the bad
>>> performance numbers are due to AoE itself, which I somewhat doubt, but
>>> regardless, I was able to do about 30% better with iSCSI.)
>>
>> In terms of what? We're getting a very high IOPS rate, and nearly
>> saturating gigabit ethernet.
>
> Write-through or write-back cache? Lots of systems default to write-back,
> as the performance will be worlds better with write-back, at the expense
> of safety.

The Coraids are write-through, i.e. the write returns when it's committed to disk.

-- 
-- Tom Mornini
Thomas Harold
2006-Oct-09 00:35 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
Luke Crawford wrote:
> Like I said, if you are just going for capacity, use IDE over NFS on a
> 10/100 network. A gigabit network might be worth it if most of your stuff
> is sequential (as IDE comes pretty darn close and sometimes beats SCSI
> for sequential access), but in my environment there really is no such
> thing as sequential access.
>
> My main point was that compared to a 1Gb Ethernet 'dedicated to storage'
> network, a 1Gb FC network is cheaper and faster; I believe this to be
> true even if your end disks are SATA. (But like I said, I might be
> wrong on that part; I'm not really familiar with pricing for
> network-attached IDE arrays; I can't afford gigabit Ethernet equipment
> of a quality I'd like to maintain, and I use 10 or 15K SCSI/FC for
> everything that matters anyhow.)

For a small business, you can probably set up a small SAN for not that much. 16/24-port "smart" gigabit switches that support VLANs, trunking and jumbo frames can be had for under $500. Put two of those together and you have a fault-tolerant SAN fabric. Dual-port server gigabit NICs cost about $180 each to connect to the SAN switch fabric. Do some bonding of multiple dual/quad-port NICs for bandwidth and fault tolerance on the SAN unit (a sample bonding setup is sketched below).

We're slowly rolling out a SAN at our small company. We reckon that even if we move from gigabit iSCSI or AoE to FC down the road, we can still reuse all of the Ethernet equipment for other projects. (Such as upgrading from inexpensive "smart" switches to fully managed switches, or upgrading from Intel server NICs to iSCSI HBAs, or buying pre-built iSCSI target devices.)

That's probably the biggest argument for iSCSI/AoE vs FC. You can get started for under $10k, prove that it works, then decide where you want to spend more money on additional performance while reusing the old equipment to spruce up other sections of your network.
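A rough sketch of the bonding side of that for a Linux-based storage box; module options and interface names are examples only, and the right mode depends on what your switches support:

    # /etc/modprobe.conf: define bond0 with link monitoring every 100 ms;
    # balance-xor spreads traffic across slaves without requiring 802.3ad
    alias bond0 bonding
    options bonding mode=balance-xor miimon=100

    # bring up the bond and enslave two physical ports
    modprobe bonding
    ifconfig bond0 192.168.2.10 netmask 255.255.255.0 up
    ifenslave bond0 eth1 eth2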
Luke Crawford
2006-Oct-09 00:53 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sun, 8 Oct 2006, Thomas Harold wrote:
>> My main point was that compared to a 1Gb Ethernet 'dedicated to storage'
>> network, a 1Gb FC network is cheaper and faster; I believe this to be
>> true even if your end disks are SATA. (But like I said, I might be wrong
>> on that part; I'm not really familiar with pricing for network-attached
>> IDE arrays; I can't afford gigabit Ethernet equipment of a quality I'd
>> like to maintain, and I use 10 or 15K SCSI/FC for everything that
>> matters anyhow.)
>
> For a small business, you can probably set up a small SAN for not that
> much. 16/24-port "smart" gigabit switches that support VLANs, trunking
> and jumbo frames can be had for under $500. Put two of those together
> and you have a fault-tolerant SAN fabric. Dual-port server gigabit NICs
> cost about $180 each to connect to the SAN switch fabric. Do some bonding
> of multiple dual/quad-port NICs for bandwidth and fault tolerance on the
> SAN unit.

16-port 1G Fibre Channel Brocades go for around $100. How much are the 16-port Cisco gig switches?

> That's probably the biggest argument for iSCSI/AoE vs FC. You can get
> started for under $10k, prove that it works, then decide where you want
> to spend more money on additional performance while reusing the old
> equipment to spruce up other sections of your network.

My point is that the iSCSI/AoE solution is more expensive, for less performance, when compared to an FC solution. You can recoup some of that by reusing the switches later, but it's still more expensive.
Tom Mornini
2006-Oct-09 02:50 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Oct 8, 2006, at 5:53 PM, Luke Crawford wrote:
> On Sun, 8 Oct 2006, Thomas Harold wrote:
>>> My main point was that compared to a 1Gb Ethernet 'dedicated to
>>> storage' network, a 1Gb FC network is cheaper and faster; I believe
>>> this to be true even if your end disks are SATA. (But like I said, I
>>> might be wrong on that part; I'm not really familiar with pricing for
>>> network-attached IDE arrays; I can't afford gigabit Ethernet equipment
>>> of a quality I'd like to maintain, and I use 10 or 15K SCSI/FC for
>>> everything that matters anyhow.)
>>
>> For a small business, you can probably set up a small SAN for not that
>> much. 16/24-port "smart" gigabit switches that support VLANs, trunking
>> and jumbo frames can be had for under $500. Put two of those together
>> and you have a fault-tolerant SAN fabric. Dual-port server gigabit NICs
>> cost about $180 each to connect to the SAN switch fabric. Do some
>> bonding of multiple dual/quad-port NICs for bandwidth and fault
>> tolerance on the SAN unit.
>
> 16-port 1G Fibre Channel Brocades go for around $100. How much are the
> 16-port Cisco gig switches?

Why do you insist on comparing used Fibre Channel equipment to new gigabit? You can absolutely get 16-port gigabit switches for $100.

http://tinyurl.com/kyrmp

Of course, with Fibre Channel you need to add in the expense of Fibre Channel HBAs as well, and those cost more than gigabit Ethernet HBAs; besides, many modern motherboards have dual and even quad gigabit ports onboard at no extra cost.

Additionally, in a used marketplace, the price paid reflects the market's judgement on the worth of the item in question. If used Ethernet switches do cost more than used Fibre Channel switches, it's because people find them worth more.

I, for one, have no interest in going down the Fibre Channel road. It had its place, and still does at the high end for those who can afford it, but it's pretty clear that an investment in Fibre Channel is an investment in the past, IMHO.

-- 
-- Tom Mornini
Luke Crawford
2006-Oct-09 03:17 UTC
Re: [Xen-users] Network based storage - NBD/AoE/iSCSI other?
On Sun, 8 Oct 2006, Tom Mornini wrote:
>> 16-port 1G Fibre Channel Brocades go for around $100. How much are the
>> 16-port Cisco gig switches?
>
> Why do you insist on comparing used Fibre Channel equipment to new
> gigabit?

I don't. I'm comparing used to used, as that is what I use. I am comparing used server-grade to used server-grade, not consumer grade.

> You can absolutely get 16-port gigabit switches for $100.
>
> http://tinyurl.com/kyrmp

Do you really run Netgear in production? Find me a used 16-port server-grade gigE switch from a reputable manufacturer for under $500. Netgear is not a reputable manufacturer of server-grade equipment. Same goes for Linksys. I had some rackmount Linksys 8-port unmanaged gig switches ($400 each new) in production about two years ago at one place I worked; I'm not going to put myself through that again.

> Of course, with Fibre Channel you need to add in the expense of Fibre
> Channel HBAs as well, and those cost more than gigabit Ethernet HBAs;
> besides, many modern motherboards have dual and even quad gigabit ports
> onboard at no extra cost.

1G 64-bit PCI-X FC HBAs cost me around $10 each, when bought used and in quantity (QLogic QLA2200).

> Additionally, in a used marketplace, the price paid reflects the market's
> judgement on the worth of the item in question. If used Ethernet switches
> do cost more than used Fibre Channel switches, it's because people find
> them worth more.

In a rational market, yes. However, if you have consumers assuming that 'more expensive is automatically better' (and we have many), the market quickly becomes irrational. Also, right now used fibre is underpriced because new fibre is so ridiculously overpriced: few of the "cheap" crowd have gotten a chance to work on Fibre Channel equipment, and the corporate crowd that does know about Fibre Channel isn't interested in the used market.

So you could say that the cost is an educational one, and you would have a point; FC layers 1 and 2 are quite different from Ethernet layers 1 and 2, and knowledge of Ethernet layers 1 and 2 is quite applicable to the networking realm (where Ethernet will likely be the standard for some time). For me this learning cost is quite a bit lower, as I have already invested a lot in learning the SCSI protocol, and most of that knowledge carries over to Fibre Channel.

> I, for one, have no interest in going down the Fibre Channel road. It had
> its place, and still does at the high end for those who can afford it,
> but it's pretty clear that an investment in Fibre Channel is an
> investment in the past, IMHO.

You are probably going to need to buy new infrastructure when 10G Ethernet comes out, yes, but the same can be said for 1G Ethernet. I can't buy tomorrow's products today. My point is that if I buy yesterday's fibre, I beat today's Ethernet, both on price and performance, if the primary costs we are worried about are equipment costs and not training costs.

The cost is in learning Fibre Channel vs using existing Ethernet knowledge. You are going to have to know Ethernet either way, and it's fairly clear that Ethernet is here to stay, whereas fibre may or may not be. For me, equipment is a large part of my operating budget, and my existing investment in SCSI knowledge makes fibre knowledge fairly easy to obtain, so fibre makes a lot more sense than Ethernet for storage. If you are in an environment where training is your biggest expense, or where you need to use new parts, the equation is quite different; the "enterprise storage" market is quite irrational.