I'm thinking about adding some external mass storage to my Xen system, and I
see a number of 1U (I pay by the U at my colo) SAN devices that offer iSCSI.
Not too many offer AoE. For cheap performance, AoE seems preferable since it
has less overhead. Since the SAN is going to be right next to the Xen box,
the routability of iSCSI isn't a factor for me. Just big, cheap and fast.

Anyone have any insights they want to throw out from facing a similar
situation?

-- 
  Chris 'Xenon' Hanson, omo sanza lettere                  Xenon AlphaPixel.com
  PixelSense Landsat processing now available! http://www.alphapixel.com/demos/
  "There is no Truth. There is only Perception. To Perceive is to Exist." - Xen

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
> I'm thinking about adding some external mass storage to my Xen system,
> and I see a number of 1U (I pay by the U at my colo) SAN devices that
> offer iSCSI. Not too many offer AoE. For cheap performance, AoE seems
> preferable since it has less overhead. Since the SAN is going to be right
> next to the Xen box, the routability of iSCSI isn't a factor for me. Just
> big, cheap and fast.
>
> Anyone have any insights they want to throw out from facing a similar
> situation?

There are plenty of Ethernet adapters with iSCSI offload these days. Support
under Linux might be another question though.

iSCSI will be able to take advantage of TCP offload (large send, checksum)
where AoE will not. That alone may outweigh any overhead savings from AoE.

James
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-
> bounces@lists.xensource.com] On Behalf Of Chris 'Xenon' Hanson
> Sent: Thursday, March 11, 2010 1:56 PM
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] AoE vs iSCSI
>
> I'm thinking about adding some external mass storage to my Xen system,
> and I see a number of 1U (I pay by the U at my colo) SAN devices that
> offer iSCSI. Not too many offer AoE. For cheap performance, AoE seems
> preferable since it has less overhead. Since the SAN is going to be right
> next to the Xen box, the routability of iSCSI isn't a factor for me. Just
> big, cheap and fast.

I've used CoRAID's AoE products before and recommend them because they are
simple, extremely easy to configure, reliable and inexpensive. They have a
1U appliance (SR431) listed on their site for $2,475 USD.

That said, I wouldn't choose AoE over iSCSI (or vice versa) for performance
reasons--with either you should get adequate performance if properly
configured, and you're going to find that networks are so much faster than
disks that for a small appliance the network doesn't really matter
much--throughput is going to be limited by SATA disk performance.

Do you really want a 1U appliance? You won't get much storage out of it. A
3U appliance can typically support 4x the storage of a 1U unit. Look for a
vendor you like, try the product first if possible, and choose based on
features, performance and cost.

The protocol is another consideration altogether--AoE works great for small
Linux-only networks. If you need more interoperability (i.e. Windows) you'll
probably want to stick with iSCSI.

Using AoE on Linux is about as simple as loading the "aoe" kernel module.
The module auto-discovers targets by Ethernet broadcast and makes block
devices locally available under /dev/etherd/*. You can then add a partition
table to the block device, carve it up with LVM, or do pretty much anything
you'd do with local storage.
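To make that last point concrete, a minimal client-side session might look
like this (a sketch only: the shelf/slot name e0.0 and the volume group and
volume names are illustrative, not from the thread; your discovered targets
will differ):

```shell
# Load the AoE initiator module; targets on the local Ethernet segment
# are discovered by broadcast and appear under /dev/etherd/.
modprobe aoe

# From the aoetools package: force a rediscovery and list visible targets.
aoe-discover
aoe-stat

# Treat a discovered device like any local disk -- here, hand the whole
# device to LVM and carve out a volume for a guest.
pvcreate /dev/etherd/e0.0
vgcreate aoevg /dev/etherd/e0.0
lvcreate -L 10G -n domu1-disk aoevg
```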
AoE is about as easy to use as it can possibly be. (iSCSI, on the other
hand, always has me scurrying for "man" pages first.)

Jeff
Hello,

On 11.03.2010 at 19:55, Chris 'Xenon' Hanson <xenon@alphapixel.com> wrote:
> I'm thinking about adding some external mass storage to my Xen system,
> and I see a number of 1U (I pay by the U at my colo) SAN devices that
> offer iSCSI. Not too many offer AoE. For cheap performance, AoE seems
> preferable since it has less overhead. Since the SAN is going to be right
> next to the Xen box, the routability of iSCSI isn't a factor for me. Just
> big, cheap and fast.
>
> Anyone have any insights they want to throw out from facing a similar
> situation?

I tested AoE and iSCSI. AoE scales very badly! If you have more than 10 AoE
devices over one NIC, you get bad throughput on each AoE device and a high
load on the system. iSCSI with lots of LUNs (tested with over 200) performs
very well. I was able to get a little more than 100 MByte/s over one
1 GBit/s NIC with iSCSI!

-- 
greetings
eMHa
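For anyone wanting to reproduce a test like this with open-iscsi, the
client-side steps are roughly as follows (the portal address, target IQN and
resulting /dev/sdb name are placeholders, not details from this thread):

```shell
# Discover targets advertised by the portal, then log in to one LUN.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2010-03.com.example:storage.lun0 -p 192.168.1.10 --login

# The LUN appears as an ordinary SCSI disk; a coarse sequential-read
# check of the ~100 MByte/s figure quoted above:
dd if=/dev/sdb of=/dev/null bs=1M count=1024
```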
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-
> bounces@lists.xensource.com] On Behalf Of Markus Hochholdinger
> Sent: Wednesday, March 17, 2010 3:37 PM
> To: xen-users@lists.xensource.com
> Cc: Chris 'Xenon' Hanson
> Subject: Re: [Xen-users] AoE vs iSCSI
>
> I tested AoE and iSCSI. AoE scales very badly! If you have more than 10
> AoE devices over one NIC, you get bad throughput on each AoE device and a
> high load on the system. iSCSI with lots of LUNs (tested with over 200)
> performs very well. I was able to get a little more than 100 MByte/s over
> one 1 GBit/s NIC with iSCSI!

All other things being equal, one protocol should not outperform the other
by such a wide margin. Your results obviously will depend on the quality of
the implementation--i.e. whether you've chosen one of the open source AoE
targets, or you are using a storage appliance with AoE, which OS/driver
version, etc.

AoE performance is also highly dependent on your network. Always use jumbo
frames and hardware flow control. If you have a switch that doesn't handle
these, get a new switch.

-Jeff
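Those two tuning steps can be applied per NIC on a Linux host along these
lines (eth1 as the dedicated storage interface is an assumption; every host
NIC and switch port on the path must agree on the larger MTU):

```shell
# Enable jumbo frames on the storage NIC; AoE benefits because a large
# frame can carry more payload per Ethernet packet without fragmenting.
ip link set dev eth1 mtu 9000

# Enable 802.3x hardware flow control (pause frames) in both directions.
ethtool -A eth1 rx on tx on

# Verify the settings took effect.
ip link show eth1
ethtool -a eth1
```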
Hello,

On 18.03.2010 at 14:32, Jeff Sturm <jeff.sturm@eprize.com> wrote:
[..]
> All other things being equal, one protocol should not outperform the
> other by such a wide margin. Your results obviously will depend on the
> quality of the implementation--i.e. whether you've chosen one of the open
> source AoE targets, or you are using a storage appliance with AoE, which
> OS/driver version, etc.
>
> AoE performance is also highly dependent on your network. Always use
> jumbo frames and hardware flow control. If you have a switch that doesn't
> handle these, get a new switch.

I made these tests in August 2008. I tested gnbd, AoE (vblade-18 and
aoe6-63.tar.gz) and iSCSI, all on the same hardware with the same
(dom0-)kernel. For gnbd and iSCSI I didn't optimize anything. For AoE,
because of the bad performance, I optimized the network settings. I had a
direct connection between two servers, so no switch configuration. I enabled
jumbo frames on the NICs and a few other things I don't remember now.

The really bad thing was that the AoE client had very bad performance when
more than one or two blades were connected, even though only one was used!

Example: One server with vblade exported one block device over a 1 GBit/s
NIC. On the other server, the client, I got ~100 MByte/s as expected.
If I configured 10 vblades on the server, connected all 10 to the client,
and then tested one etherd device, I got only ~20 MByte/s throughput. Then I
configured 100 and got only ~1 MByte/s throughput!

At that time I chose to use iSCSI going forward; before that I used gnbd. I
never tested AoE again--perhaps the situation is better now? But I advise
everyone who will need more than one connected AoE block device to test
performance in that configuration.

-- 
greetings
eMHa
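A scaling test like this is straightforward to repeat; a rough sketch (the
/dev/etherd/e0.* names are assumptions, and raw dd is only a coarse
sequential benchmark, not a substitute for a real workload):

```shell
# Confirm how many targets are attached -- the reported degradation
# appeared once many devices were connected, even if only one was used.
ls /dev/etherd/

# Sequential read from a single device, bypassing the page cache; on a
# healthy GigE link with jumbo frames this should approach wire speed.
dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=1024 iflag=direct
```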
> -----Original Message-----
> From: Markus Hochholdinger [mailto:Markus@hochholdinger.net]
> Sent: Thursday, March 18, 2010 10:17 AM
> To: Jeff Sturm
> Cc: xen-users@lists.xensource.com
> Subject: Re: [Xen-users] AoE vs iSCSI
>
> Example: One server with vblade exported one block device over a 1 GBit/s
> NIC. On the other server, the client, I got ~100 MByte/s as expected. If
> I configured 10 vblades on the server, connected all 10 to the client,
> and then tested one etherd device, I got only ~20 MByte/s throughput.
> Then I configured 100 and got only ~1 MByte/s throughput!

Unfortunately vblade is little more than a toy program. Its beauty lies in
its simplicity. Its drawbacks lie also in its simplicity. It's nice to have
as a reference program for those who want to tinker with the protocol or
understand how AoE works, but you can't really draw any conclusions about
performance of the AoE protocol from using vblade in general.

Vblade is single-threaded and can only issue one outstanding I/O request at
a time per device. Multiple vblade processes can run on the same adapter to
export multiple disks, and cooperate via packet filtering. I don't know if
the packet filtering was working optimally in the version you tested...
based on your results I could guess it was not.

You should have better overall results testing with another open source
implementation like qaoed, or using Coraid's commercial product. I have
routinely demonstrated 200MB/s throughput performing sequential transfers on
an AoE target, multipathing over 2 GigE adapters.

There's nothing wrong with iSCSI either, and many users have perfectly valid
reasons to require iSCSI. But you can get comparable performance with AoE
for Linux hosts, often spending far less.

This is getting a bit off-topic for a Xen list, I'm afraid.

-Jeff
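For reference, the target-side setup under discussion is one process per
exported device, which is exactly where the one-outstanding-I/O limit comes
from (shelf/slot numbers, interface and backing devices below are
illustrative):

```shell
# Export /dev/sdb as AoE shelf 0, slot 0 on eth1; vbladed is the
# daemonizing wrapper around vblade from the vblade package.
vbladed 0 0 eth1 /dev/sdb

# A second exported device gets its own single-threaded process
# (shelf 0, slot 1) -- this per-process model is the serialization
# described above.
vbladed 0 1 eth1 /dev/sdc
```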
On 3/18/2010 8:16 AM, Markus Hochholdinger wrote:
> At that time I chose to use iSCSI going forward; before that I used gnbd.
> I never tested AoE again--perhaps the situation is better now? But I
> advise everyone who will need more than one connected AoE block device to
> test performance in that configuration.

Interesting. I'd love to hear of any more recent tests, as I was under the
assumption that, all things being equal (as in, no special hardware
accelerators), AoE was a little lighter weight.

My plan would be to use a PC-based SAN box with dual Ethernet ports going to
dual small dedicated switches, and then to dual NICs on the Xen box. I don't
want to have to add any new hardware to the Xen box(es), so AoE's software
implementation looked appealing.

-- 
  Chris 'Xenon' Hanson, omo sanza lettere                  Xenon AlphaPixel.com
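A dual-NIC layout like this needs no separate multipath layer on the AoE
side: the initiator module accepts a list of interfaces to use for AoE
traffic and can reach a target over any of them. A sketch, assuming eth1 and
eth2 are the two dedicated storage NICs:

```shell
# Restrict AoE to the two storage NICs; the aoe driver will use any
# listed interface on which a target responds, giving path redundancy
# across the two switches.
modprobe aoe aoe_iflist="eth1 eth2"
```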
> This is getting a bit off-topic for a Xen list, I'm afraid.
>
> -Jeff

I don't think so--this is very useful information for those of us about to
roll out SANs to our Xen clouds.

Grant McWilliams
On 3/18/2010 11:37 AM, Grant McWilliams wrote:
> This is getting a bit off-topic for a Xen list, I'm afraid.
> -Jeff
>
> I don't think so--this is very useful information for those of us about
> to roll out SANs to our Xen clouds.
>
> Grant McWilliams

Obviously, I agree with Grant. I think SAN and Xen are two pieces of the
whole puzzle. Xen can reduce or remove your service availability's tie to a
particular CPU, but by itself cannot remove the tie to a particular piece of
storage. A SAN seems to be the best way to achieve that goal and the
reliability goal. Obviously most people are concerned with money, space and
resource limitations, so the choice of iSCSI versus AoE is a big deal in
getting the best bang for your infrastructure investment.

-- 
  Chris 'Xenon' Hanson, omo sanza lettere                  Xenon AlphaPixel.com
Hello,

On 18.03.2010 at 17:29, Jeff Sturm <jeff.sturm@eprize.com> wrote:
[..]
> Unfortunately vblade is little more than a toy program. Its beauty lies
> in its simplicity. Its drawbacks lie also in its simplicity. It's nice
> to have as a reference program for those who want to tinker with the
> protocol or understand how AoE works, but you can't really draw any
> conclusions about performance of the AoE protocol from using vblade in
> general.

Well, vblade was the server part, and on my AoE client I only ran the aoe
module, so I thought it would make no difference which "server" part I used,
because I tested on the client side. The hardware had no problem serving
with vblade. The problem was on the client side! My assumption was that a
Coraid product as the server wouldn't have helped either, because the
problem was on the client side. But perhaps I'm wrong.

[..]
> You should have better overall results testing with another open source
> implementation like qaoed, or using Coraid's commercial product. I have
> routinely demonstrated 200MB/s throughput performing sequential
> transfers on an AoE target, multipathing over 2 GigE adapters.

At that time qaoed wasn't an option; I tried it, but it was worse than
vblade!

> There's nothing wrong with iSCSI either, and many users have perfectly
> valid reasons to require iSCSI. But you can get comparable performance
> with AoE for Linux hosts, often spending far less.

OK, I don't doubt that, but I would test it myself :-D For now, I don't have
the need to test again because I'm very happy with iSCSI.
Also, a lot of people are fine with AoE.

> This is getting a bit off-topic for a Xen list, I'm afraid.

Especially because having more than one AoE block device connected had such
a bad impact on performance for me, I advise other people who need multiple
AoE block devices to test that configuration. Hopefully this isn't the case
anymore. For a Xen setup where you have one block device per domU, it is
very important to know how many block devices you can manage--so for me this
isn't really off-topic.

-- 
greetings
eMHa
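For completeness, handing one of these network block devices to a domU works
the same as any other phy: backend. A sketch with the xm toolstack of that
era (the domain name and device path are illustrative):

```shell
# Hot-attach a network block device (here an AoE device, but an iSCSI
# LUN path works the same way) to a running domU as writable xvda.
xm block-attach mydomu phy:/dev/etherd/e0.0 xvda w
```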