Hi Everyone,

I have 3 RAID ideas, and I'd appreciate some advice on which would be better for lots of VMs for customers.

My storage server will be able to hold 16 disks. I am going to export 1 iSCSI LUN to each Xen node. 6 nodes will connect to one storage server, so that's 6 LUNs per server of equal size. The server will connect to a switch using quad-port bonded NICs (802.3ad), and each Xen node will connect to the switch using dual-port bonded NICs.

Idea 1: 3 x RAID10 arrays (4 disks per array), 4 hot spares, 2 LUNs per array

Idea 2: 1 x RAID10 array (12 disks per array), 4 hot spares, 6 LUNs per array

Idea 3: 1 x RAID10 array (14 disks per array), 2 hot spares, 6 LUNs per array

I'd appreciate any thoughts or ideas on which would be best for throughput/IOPS.

Thanks

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
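The three layouts can be compared on paper before any benchmarking; a minimal sketch, assuming 16 bays and identical disks (capacities counted in whole disks):

```shell
# RAID10 usable capacity is half the member count.
raid10_usable() { echo $(( $1 / 2 )); }

idea1=$(( 3 * $(raid10_usable 4) ))   # 3 x 4-disk RAID10: 12 spindles total
idea2=$(raid10_usable 12)             # 1 x 12-disk RAID10: 12 spindles
idea3=$(raid10_usable 14)             # 1 x 14-disk RAID10: 14 spindles

echo "usable disks: idea1=$idea1 idea2=$idea2 idea3=$idea3"
```

Ideas 1 and 2 give the same usable capacity and spindle count; they differ only in isolation between arrays versus one wide stripe. Idea 3 trades two hot spares for one extra mirrored pair.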
Hi!

> I have 3 RAID ideas, and I'd appreciate some advice on which would be
> better for lots of VMs for customers.
>
> My storage server will be able to hold 16 disks. I am going to export 1
> iSCSI LUN to each xen node. 6 nodes will connect to one storage server,
> so that's 6 LUNs per server of equal size. The server will connect to a
> switch using quad port bonded NICs (802.3ad), and each Xen node will
> connect to the switch using Dual port bonded NICs.

hmmm... with one LUN per server you will lose the ability to do live
migration -- or do I miss something?
Some people mention problems with bonding more than two NICs for iSCSI, as
the reordering of the commands/packets adds tremendously to latency and
load. If you want high performance and want to avoid latency issues, you
might want to choose ATA-over-Ethernet.

> I'd appreciate any thoughts or ideas on which would be best for
> throughput/IOPS

Your server is a Linux box exporting the RAIDs to your Xen servers? Then
just take fio and do some benchmarking. If you're using software RAID, then
you might want to add RAID5 to the equation.
I'd suggest measuring the performance of your RAID system with various
configurations and then choosing which level of isolation gives the best
performance.
I don't think a setup with 6 hot spare disks is necessary -- at least not
when they're connected to the same server. Depending on the quality of your
disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server plus some
cold spares in your office, you should be able to survive a broken hard disk.
You should also "smartctl -t long" your disks frequently (i.e. once per
week) and do more or less permanent resyncs of your RAID to be able to
detect disk errors early. (The worst-case scenario is to never check your
disks -- then a disk breaks and is replaced by a hot/cold spare -- and the
RAID resync fails other disks in your array, just because the bad blocks are
already there...)
Hope this helps!

-- Adi
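The weekly self-test and permanent-resync advice above might look roughly like this on the command line (a sketch only, assuming Linux software RAID on /dev/md0 with members sda and sdb; adjust the device names for your setup):

```shell
# Kick off a SMART long self-test on each member disk (run e.g. weekly
# from cron; results show up later in 'smartctl -a /dev/sda').
smartctl -t long /dev/sda
smartctl -t long /dev/sdb

# Ask md to read the whole array and verify redundancy, surfacing latent
# bad blocks before a rebuild has to depend on them.
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat   # shows the check progress
```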
________________________________
From: Adi Kriegisch [mailto:kriegisch@vrvis.at]
Sent: Thu 17/06/2010 08:32
To: Jonathan Tripathy
Cc: Xen-users@lists.xensource.com
Subject: Re: [Xen-users] RAID10 Array

> [...]
-------------------------------------------------------------------------------------------------------------------

Hi Adi,

Thanks for the advice! The RAID controller I'm planning to use is the
MegaRAID SAS 9260-4i. The storage server will be built by Broadberry, so it
will be using Supermicro kit. As for the OS on the server, I was thinking
of using Windows Storage Server actually -- maybe this is a bad idea?

You're correct about the live migration; I may implement some sort of
clustered iSCSI filesystem, but the main issue at the minute is the RAID
array. I've heard the same things about bonding 2 vs 4 NICs as well.

Currently, I'm leaning towards the RAID10 array with 14 disks and 2 hot
spares.

Thanks

Jonathan
Hi,

I like the sound of idea 1 best. One big RAID10 might sound nice, but are
you sure it is purely bandwidth you need? For small-file latency, I think a
number of smaller arrays spread between the different VMs might be faster
(e.g. 4 x RAID10 or 4 x RAID5). Separate arrays also provide some degree of
performance isolation between the LUNs. The RAID1 part of RAID10 does allow
for read interleaving, but if you have random mixed reads and writes
occurring fairly evenly across the VMs, then separate arrays should be more
responsive (even with read and write caching enabled on the RAID card). The
way to find out is to benchmark with multiple VMs simultaneously.

Rob

From: xen-users-bounces@lists.xensource.com
[mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Jonathan Tripathy
Sent: 17 June 2010 09:09
To: Adi Kriegisch; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] RAID10 Array

> [...]

The SAQ Group

Registered Office: 18 Chapel Street, Petersfield, Hampshire GU32 3DZ
SAQ is the trading name of SEMTEC Limited. Registered in England & Wales
Company Number: 06481952
http://www.saqnet.co.uk AS29219

SAQ Group delivers high quality, honestly priced communication and I.T.
services to UK business. Broadband : Domains : Email : Hosting : CoLo :
Servers : Racks : Transit : Backups : Managed Networks : Remote Support.

ISPA Member
> Some people mention problems with bonding more than two NICs for iSCSI as
> the reordering of the commands/packets adds tremendously to latency and load.
> If you want high performance and avoid latency issues you might want to
> choose ATA-over-Ethernet.

Interesting. I always imagined that iSCSI was going to be the clear winner
in the case of one NIC, as the hardware offloading should more than
compensate for the overhead of TCP, but I guess you lose all of that when
you start bonding links together, especially on the receive side of things.

Some adapters have iSCSI-specific acceleration, but I don't know if that
works with bonding at any level.

James
Hi Rob,

Good tip. Can you suggest a way I could benchmark all these things? I've
never benchmarked hard drives before..

Thanks

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 10:06
To: Jonathan Tripathy; Adi Kriegisch; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] RAID10 Array

> [...]
Hi!

> > Some people mention problems with bonding more than two NICs for iSCSI as
> > the reordering of the commands/packets adds tremendously to latency and load.
> > If you want high performance and avoid latency issues you might want to
> > choose ATA-over-Ethernet.
>
> Interesting. I always imagined that iSCSI was going to be the clear
> winner in the case of one NIC as the hardware offloading should more
> than compensate for the overhead of TCP, but I guess you lose all of
> that when you start bonding links together, especially on the receive
> side of things.

The charm of ATA-over-Ethernet is that bonding isn't needed at all for
using multiple links. You don't even need HSRP and stuff like that for
redundancy. The "Ethernet" part in AoE means that ATA commands are embedded
in Ethernet frames. Load balancing happens just automagically by using all
available interfaces; no special configuration is needed for this.

> Some adapters have iSCSI specific acceleration, but I don't know if that
> works with bonding at any level.

It will not, as bonding normally isn't supported by NICs. To support such a
thing, NICs would need shared memory and a shared dedicated processor that
is able to handle this -- which in practice is the CPU and memory of the
host operating system.

-- Adi
Hi!

> Can you suggest a way I could benchmark all these things? I've never
> benchmarked Hard Drives before..

As mentioned in my previous mail -- use (for example) fio to do benchmarks.
Before you start with benchmarking, I suggest reading the following slides:
http://www.pgcon.org/2009/schedule/attachments/123_pg-benchmarking-2.pdf
There are many very clever hints in this document. Getting benchmarking
right is very hard, and giving general statements about performance is
(close to) impossible... ;-)

Just one more thing: you cannot compare local benchmarks to benchmarks of
iSCSI/AoE targets. Devices connected to your local bus have a way lower
latency! This mainly results in lower throughput. RAID0 on local disks
gives way higher throughput in terms of MBps than network storage.
Therefore "IOPS" were invented... ;-)

Feel free to ask if you need more specific hints on how to conduct
benchmarks... I have recently done some work in this area.

-- Adi
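A fio job aimed at IOPS rather than MBps might look like this (a sketch; the device path, block size, and queue depth are assumptions, and a raw-device write test destroys data, so point it at a scratch LUN only):

```shell
# 70/30 random read/write mix of 4k I/Os, 16 outstanding requests,
# run for two minutes and reported as aggregate IOPS.
fio --name=randrw \
    --filename=/dev/sdX \
    --rw=randrw --rwmixread=70 --bs=4k \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=120 --time_based --group_reporting
```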
Hi Adi,

You've got me very interested in ATA-over-Ethernet. Reading around online,
it seems very simple! I have a couple of questions if you don't mind:

1) Can I export LVM volumes as block devices using ATAoE? Then the
"clients" (Xen nodes) can do their own LVM stuff with the exported
"device"..

2) How would I use 802.3ad "link aggregation" with ATAoE?

Thanks

________________________________
From: Adi Kriegisch [mailto:kriegisch@vrvis.at]
Sent: Thu 17/06/2010 11:02
To: Jonathan Tripathy
Cc: Robert Dunkley; Xen-users@lists.xensource.com
Subject: Re: [Xen-users] RAID10 Array

> [...]
Thanks for this Rob, and for being very helpful. What is your view on ATA
over Ethernet? It seems that it can work better with 802.3ad link
aggregation, and may be simpler to set up..

Cheers

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 11:07
To: Jonathan Tripathy
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

There is the complicated, scripted, scientific approach, which I did not
have time for when I constructed things here, although others on the list
might be able to help you with that sort of benchmarking. I just ran Bonnie
and timed dd on a couple of Linux VMs whilst running Sandra on a couple of
Windows VMs, timed so that they all finished at roughly the same time.
Whichever setup provided decent all-round results whilst all were running
would be my choice. The scheduler selection in Dom0 also affected disk
performance a fair bit. I also tested the replicated arrays we have using
IOzone from Dom0. I attach my results from one of our systems using a
simple RAID1 array of both 7.2K SATA and 15K SAS disks. It will hopefully
show you the effect of different RAID controller settings on different I/O
usage scenarios. Our setup here is storage on the VM servers but replicated
between them using DRBD; it might sound different to yours, but testing is
similar.

Testing first on the local arrays and tweaking the RAID controller settings
and driver, along with local I/O cache settings, would be your first step.
Then team up your NICs and use something like iperf to tweak your MTU and
other settings for max bandwidth. Then do the same IOzone tests from a Dom0
using iSCSI and try to optimise your iSCSI as best you can. Lastly, test
from the VM and optimise the Xen config as best you can. Splitting the
above tasks will allow you to work on one area at a time and aim any
questions you might have at the correct mailing list / forum for each one.
Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 17 June 2010 10:37
To: Robert Dunkley; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] RAID10 Array

> [...]
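Rob's "team up your NICs and use iperf" step could be sketched as follows (the IP address and the 9000-byte MTU are assumptions; jumbo frames must be supported by the switch and every NIC in the path):

```shell
# On the storage server (receiver):
iperf -s

# On a Xen node (sender), after raising the MTU on the bonded link:
ip link set dev bond0 mtu 9000
iperf -c 192.168.1.10 -t 30 -P 4   # 4 parallel streams to exercise the bond
```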
Hi!

> You've got me very interested in ATA-over-ethernet. Reading around online, it
> seems very simple! I have a couple of questions if you don't mind:
>
> 1) Can I export LVM volumes as block devices using ATAoE? Then, the "clients"
> (Xen nodes) can do their own LVM stuff with the exported "device"..

Basically yes. You may export just about anything that Linux can use as a
disk... although the usual issues may arise with nested LVMs. I am using
CLVM on the Xen servers and just export the whole storage to the servers,
then give certain logical volumes to the DomUs. That way I have a central
place to manage all volumes and avoid nested logical volumes.

> 2) How would I use 802.3ad "link agregation" with ATAoE?

You would not. Just use (for example) 4 NICs for AoE, and the kernel driver
will automatically balance your traffic over all available interfaces. The
kernel knows which interfaces can reach the target and does all the
balancing. This even allows high availability when using two switches: just
connect half of the interfaces to one switch and the others to the other
switch (do the same with your storage system) and you're done. You may just
pull the power on one switch and everything will (with a very short delay)
continue to work -- with half the bandwidth, of course.

-- Adi
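Adi's "basically yes" to question 1 might look like this in practice (a sketch; the volume group and LV names are made up for illustration):

```shell
# Carve a logical volume out of the RAID and export it over AoE as
# shelf 0, slot 1 on eth0.
lvcreate -L 200G -n xen-node1 vg_storage
vbladed 0 1 eth0 /dev/vg_storage/xen-node1
```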
> > 2) How would I use 802.3ad "link aggregation" with ATAoE?
>
> You would not. Just use (for example) 4 NICs for AoE and the kernel driver
> will automatically balance your traffic over all available interfaces.
> [...]
>
> -- Adi

-------------------------------------------------------------------------------------------

Ok, so I was reading http://www.howtoforge.com/ata_over_ethernet_debian_etch
and they "export" a partition via

vbladed 0 1 eth0 /dev/sdd5

If I wanted to use link aggregation, what interface would I put in? Also,
would I need to set up my switch with 802.3ad/LACP "trunks"?

Thanks
Hi!

> I just ran Bonnie and timed DD on a couple of Linux VMs whilst running Sandra
> on couple of Windows VMs and coincide them to roughly finish at the same time.
> Whichever setup provided decent all round results whilst all were running would
> be my choice. The scheduler selection in Dom0 also affected disk performance a
> fair bit.

Hmm... basically bonnie and dd are a good choice -- but you'll only measure
performance in MBps, which is not the best choice for network storage.
Using IOPS -- Input/Output Operations per Second -- is way more what you
want. You may use "iostat" on existing servers to get an idea of what your
current workloads are (tps corresponds to IOPS).

> Testing first on the local arrays and tweaking the raid controller settings and
> driver along with local IO cache settings would be your first step. Then team
> up your NICs and use something like iperf to tweak your MTU and other settings
> for max bandwidth. Then do the same IOZone tests from a Dom0 using ISCSI and
> try to optimise your ISCSI as best you can. Lastly test from the VM and
> optimise the Xen config as best you can. Splitting the above tasks will allow
> you to work on one area at a time and aim any questions you might have at the
> correct mailing list / forum for each one.

Just a perfect description of what to do: step-by-step and structured!

-- Adi
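The iostat suggestion can be tried on any existing Linux server (iostat ships in the sysstat package; note that the first report shows averages since boot, so watch the later samples):

```shell
# Per-device statistics every 5 seconds; the 'tps' column roughly
# corresponds to IOPS under the current workload.
iostat -d 5
```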
Hi!

> Ok so I was reading http://www.howtoforge.com/ata_over_ethernet_debian_etch
>
> and they "export" a partition via
>
> vbladed 0 1 eth0 /dev/sdd5
>
> If I wanted to use link aggregation, what interface would I put in? Also, would
> I need to set up my switch with 802.3ad/LACP "trunks"?

Ok, if you insist on LACP, then you'd start it like this:

vbladed 0 1 bond0 /dev/sdd5

Without bonding, you'd start several vblades on all the interfaces, like
this:

vbladed 0 1 eth0 /dev/sdd5
vbladed 0 1 eth1 /dev/sdd5
vbladed 0 1 eth2 /dev/sdd5
...

But I'd suggest using some other AoE target implementations like qaoed[1]
or ggaoed[2]. Coraid speaks of vblade as a reference implementation that
works but lacks features and performance.

I hope this helps!

-- Adi

[1] http://code.google.com/p/qaoed/
[2] http://code.google.com/p/ggaoed/
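On the client side, the matching commands come from the aoetools package; a minimal sketch, assuming the exports above:

```shell
# Load the AoE driver and scan all interfaces for exported targets.
modprobe aoe
aoe-discover
aoe-stat      # lists visible targets, e.g. e0.1

# The export then appears as a block device (shelf 0, slot 1) and can be
# partitioned or used as an LVM PV like a local disk:
ls /dev/etherd/e0.1
```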
From: Adi Kriegisch [mailto:kriegisch@vrvis.at]
Sent: Thu 17/06/2010 11:54
Subject: Re: [Xen-users] RAID10 Array

> Without bonding you'd start several vblades, one per interface, like this:
> vbladed 0 1 eth0 /dev/sdd5
> vbladed 0 1 eth1 /dev/sdd5
> vbladed 0 1 eth2 /dev/sdd5
> ...

So seriously, if I just wanted to keep it simple and use vblade, all I would
need to do on the storage server is

vbladed 0 1 eth0 /dev/sdd5
vbladed 0 1 eth1 /dev/sdd5
vbladed 0 1 eth2 /dev/sdd5
...

and that's it? Nothing on the switch? What about the other end (the client)?
On Thursday 17 June 2010 09:32:37 Adi Kriegisch wrote:
> hmmm... with one LUN per server you will lose the ability to do live
> migration -- or do I miss something?
> Some people mention problems with bonding more than two NICs for iSCSI, as
> the reordering of the commands/packets adds tremendously to latency and
> load. If you want high performance and want to avoid latency issues you
> might want to choose ATA-over-Ethernet.

If I understand correctly, you could do live migration, but you would have to
migrate them all at once.

> Your server is a Linux box exporting the RAIDs to your Xen servers? Then
> just take fio and do some benchmarking. If you're using software RAID then
> you might want to add RAID5 to the equation.
> I'd suggest measuring the performance of your RAID system with various
> configurations and then choosing the level of isolation that gives the
> best performance.
> I don't think a setup with 6 hot spare disks is necessary -- at least not
> when they're connected to the same server. Depending on the quality of
> your disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server
> plus some cold spares in your office you should be able to survive a
> broken harddisk.
> You should also "smartctl -t long" your disks frequently (i.e. once per
> week) and do a more or less permanent resync of your RAID to be able to
> detect disk errors early. (The worst case scenario is to never check your
> disks -- then a disk breaks and is replaced by a hot/cold spare -- and the
> RAID resync fails other disks in your array, just because the bad blocks
> are already there...)

I've been following Jonathan's postings for a while and my general feeling is
that there's quite some difference between what he aims for and what reality
offers as boundaries. I wish him luck anyway; it would be cool if he could
get things working. By the way, I will post my planned setup in response to
one of his other postings; it might be useful to compare.

B.
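[The weekly "smartctl -t long" plus regular RAID resync advice above can be
automated with a cron fragment. This is a sketch for Linux software RAID:
the device names and the md array name are assumptions to adapt, and on
Debian the mdadm package already ships a similar monthly checkarray job.]

```
# /etc/cron.d/disk-health -- device names are examples, adapt to your box
# Weekly long SMART self-test on every data disk, Sunday 03:30
30 3 * * 0  root  for d in /dev/sd[a-p]; do smartctl -t long "$d"; done
# Monthly consistency check ("resync") of the md array, 1st of month 04:00
0  4 1 * *  root  echo check > /sys/block/md0/md/sync_action
```

The point of the monthly check is exactly the failure mode described above:
it forces every block to be read, so latent bad sectors are found while the
array still has redundancy to repair them.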
Hi Bart,

Through this very helpful mailing list, I have over time trimmed and changed
my plans. My current train of thought is to have 3 RAID10 arrays with 4 hot
spares. The storage server will create 2 LVs per RAID array and export one LV
to each Xen node (so 6 Xen nodes per storage server). The storage server will
be connected to the Xen nodes via ATA over Ethernet. Each machine will be
running 56 DomUs max, however this figure will be closer to around 30 I would
say.

Specs for the storage server are as follows:

3U Broadberry 836E16-R1200B chassis (black) with 1200W high-efficiency (1+1)
redundant power supply (Gold Level 93%), 16 x SATA/SAS hot-swap drive bays,
comprising the following system-validated components:
1x E5506 Intel Quad-Core Xeon 2.13GHz 4MB Cache 4.8GT/s 80W
8GB 1333MHz DDR3 ECC Reg w/Parity CL9 DIMM Dual Rank
LSI MegaRAID SAS 9260-4i (6Gb/s) RAID Controller
Intel PRO/1000 PT Quad Server Adapter RJ45 10/100/1000 Quad Port
X8DTL-iF Serverboard with Dual Gigabit LAN & IPMI Remote Management
Slimline DVD-RW Dual Layer Drive
RAID Controller Battery Backup Module (LSIiBBU07)
3U Rackmount rail kit included
2x 250GB 2.5" drives for the operating system (RAID 1 - mirrored)

With, of course, 16 x 1.5TB 7.2k SATA hard drives.

It would be really appreciated if you could please let me know how my current
plan is flawed :)

Many Thanks
Jonathan
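[As a sanity check on the 3 x 4-disk RAID10 layout above, a back-of-envelope
calculation. The ~80 random IOPS per 7.2k SATA spindle is an assumed
rule-of-thumb figure, not a measured one -- benchmark your own disks.]

```shell
#!/bin/sh
# Rough sizing for one proposed array: RAID10, 4 disks, 1.5TB 7.2k SATA.
DISKS_PER_ARRAY=4
DISK_TB_X10=15          # 1.5 TB per disk, scaled by 10 for integer math
IOPS_PER_DISK=80        # assumed rule of thumb for a 7.2k SATA spindle

# RAID10 usable capacity is half the raw capacity (everything is mirrored)
USABLE_TB_X10=$((DISKS_PER_ARRAY * DISK_TB_X10 / 2))
# Reads can be serviced by all spindles; each write costs two physical writes
READ_IOPS=$((DISKS_PER_ARRAY * IOPS_PER_DISK))
WRITE_IOPS=$((DISKS_PER_ARRAY * IOPS_PER_DISK / 2))

echo "usable: $((USABLE_TB_X10 / 10)).$((USABLE_TB_X10 % 10)) TB per array"
echo "approx read IOPS:  $READ_IOPS"
echo "approx write IOPS: $WRITE_IOPS"
```

So each 4-disk array gives roughly 3 TB usable and on the order of 160
random write IOPS to be shared by the two LVs (and their DomUs) on it --
which is why measuring real workloads with iostat first is worthwhile.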
Hi!

> So seriously, if I just wanted to keep it simple and use vblade, all I
> would need to do on the storage server is
> vbladed 0 1 eth0 /dev/sdd5
> vbladed 0 1 eth1 /dev/sdd5
> vbladed 0 1 eth2 /dev/sdd5
> ...
> and that's it? Nothing on the switch? What about the other end (Client)?

That would be the server side, yes. But -- as I mentioned before -- I
strongly suggest using a different AoE target like ggaoed or qaoed. I am
pretty sure performance will not be what you'd expect it to be with vblade.

On the client side you need to load the "aoe" module. The default module in
the kernel is v47. I'd suggest upgrading to v74 from upstream/Coraid as well
-- all those revisions between v47 and v74 bring major and minor enhancements
that are just worth it. You might want to tell the kernel module to use only
certain interfaces for AoE, like this for example:
'modprobe aoe aoe_iflist=eth1,eth2,eth3,eth4'

-- Adi
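[To make the aoe_iflist setting above survive reboots, it can go into a
modprobe configuration file. A sketch -- the interface names are just the
examples used in this thread:]

```
# /etc/modprobe.d/aoe.conf -- restrict the AoE client module to storage NICs
options aoe aoe_iflist=eth1,eth2,eth3,eth4
```

After that, a plain "modprobe aoe" (or the module loading at boot) picks up
the interface list automatically.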
From: Adi Kriegisch [mailto:kriegisch@vrvis.at]
Sent: Thu 17/06/2010 13:19
Subject: Re: [Xen-users] RAID10 Array

> You might want to tell the kernel module to use only certain interfaces
> for AoE (like this for example:
> 'modprobe aoe aoe_iflist=eth1,eth2,eth3,eth4')

Hi Adi,

Looking at this page
https://help.ubuntu.com/community/HighlyAvailableAoETarget
they seem to have made a Linux "bond" called bond0 and are telling the AoE
target to use that. This confuses me...

Would it be of any benefit to create a "mode 4" bond and use 802.3ad with ATA
over Ethernet? Or would that be just a waste, when AoE can use the interfaces
directly?

Thanks
> Would it be of any benefit to create a "mode 4" bond and use 802.3ad with
> ATA over Ethernet? Or would that be just a waste, when AoE can use the
> interfaces directly?

I could see it being useful if you didn't dedicate the interfaces to AoE,
e.g. you had some AoE targets and some iSCSI targets. I'm not sure how AoE
schedules packets though, so maybe this isn't an issue.

James
Hi Rob,

I looked into 10G Ethernet and it may well be an option, however the cost of
the switch is 3x the price I had budgeted for.

My current issue is how to use multiple interfaces with ATA over Ethernet.
Some say just create 4 exports to the same partition/LV, some say use a
bond0...

Given that these will be rented VMs, I would imagine most customers will use
these for websites, mail servers, backup machines (rsync, FTP) and game
servers. I can't imagine a company doing high-I/O tasks would want to place
such a machine "in the cloud" and run it off a VM. But then again, I could be
wrong...

Thanks
Jonathan

From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 13:56
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

I don't think it's flawed myself. Maybe an HP 24-port switch with a 10Gbit
uplink would be worth the extra?

HP 2910AL-24G Switch - £1700
HP 10Gbit Dual SFP+ 10Gbit Module (J9008A) - £1300
Intel SFP+ 10Gbit card - £500

In theory your arrays could produce around 200MB/sec each. Your actual
throughput on a dual-port 802.3ad team will be about 180MB/sec; a 4-port team
might produce 300MB+/sec. 10Gbit will produce 600MB/sec quite easily and
without the CPU overhead of teaming. The case for 10Gbit becomes stronger if
you later add more disks to the server via an external box. If 10G is a no-go
then I would still make sure you get one of the new 1000ET Intel cards.

As for how many VMs you will be able to support, well, that is hugely
dependent on their load/use.

Rob

The SAQ Group
Registered Office: 18 Chapel Street, Petersfield, Hampshire GU32 3DZ
SAQ is the trading name of SEMTEC Limited. Registered in England & Wales
Company Number: 06481952
http://www.saqnet.co.uk AS29219
SAQ Group delivers high quality, honestly priced communication and I.T.
services to UK business. Broadband : Domains : Email : Hosting : CoLo :
Servers : Racks : Transit : Backups : Managed Networks : Remote Support.
ISPA Member
Hi!

> Looking at this page
> https://help.ubuntu.com/community/HighlyAvailableAoETarget
> they seem to have made a Linux "bond" called bond0 and are telling the AoE
> target to use that. This confuses me...
> Would it be of any benefit to create a "mode 4" bond and use 802.3ad with
> ATA over Ethernet? Or would that be just a waste, when AoE can use the
> interfaces directly?

ggaoed, for example, can handle multiple interfaces in its configuration and
is designed to deliver the highest performance, for example by automatically
load balancing over several NICs. If you want to use vblade you might be
better off using bonding, because vblade cannot handle several interfaces in
one instance. You'll get another performance penalty when using several
instances of vblade listening on different interfaces.

I am not sure LACP enhances performance in your case: I think from one server
to the other you will only get 1Gbit; for LACP to work as expected you need
many-to-many or many-to-one connections. All packets belonging to a
connection will use the same wire. This article has some details:
http://serverfault.com/questions/8512/multiplexed-1-gbps-ethernet
Wikipedia also has some information on this. Another thing is that you lose
the ability to have redundancy in the switching backend.

-- Adi
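[If the bonding route is taken (e.g. for vblade), an 802.3ad / mode 4 bond on
a Debian-style system might look like the sketch below. The interface names,
the layer2 hash policy, and leaving the bond without an IP address (because
AoE runs directly on Ethernet frames) are all assumptions to adapt:]

```
# /etc/network/interfaces fragment -- 802.3ad bond for an AoE target
# AoE is a layer-2 protocol, so no IP address is strictly required here.
auto bond0
iface bond0 inet manual
    bond-slaves eth1 eth2 eth3 eth4
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2
    up ip link set bond0 mtu 9000
```

Note that with a layer2 hash policy, all frames between one server MAC and
one client MAC still ride a single member link -- which is exactly the
"only 1Gbit between two hosts" limitation described above.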
From: Adi Kriegisch [mailto:kriegisch@vrvis.at]
Sent: Thu 17/06/2010 14:03
Subject: Re: [Xen-users] RAID10 Array

> ggaoed, for example, can handle multiple interfaces in its configuration
> and is designed to deliver the highest performance, for example by
> automatically load balancing over several NICs.

So if I use ggaoed and just put all 4 NICs into its config file, that should
allow me to get 4Gbit of bandwidth? And no configuration is required on the
switch?

BTW, does 802.3ad "mode 4" use LACP? Or am I getting mixed up?
Hi Rob,

And if I were to use, let's say, 4 teamed ports coming out of the storage
server and 2 teamed ports going into the Xen node, would the max I'd get
still be 1Gbit?

Thanks

From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 14:15
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

LACP and 802.3ad are used together on those HP SOHO switches. I might be
wrong, but I think LACP allows automatic negotiation to some degree at the
switch side. I have used LACP with Broadcom-based NICs in Windows and the HP
switch you are looking at. You only need to enable LACP on the switch ports
plugged into your disk box and then the software on the server should be able
to sort the rest (I enabled it with Broadcom NICs under Windows and it worked
as advertised).

Rob
Hi!

> So if I use ggaoed and just put all 4 NICs into its config file, that
> should allow me to get 4Gbit of bandwidth? And no configuration is
> required on the switch?

That is the idea, yes. When you monitor the netflow with dstat, for example,
you should see all links equally used. The amount of bandwidth used depends
on the storage backend as well. ;-)

The switch itself does not care about those packets, because they're just
Ethernet frames consisting of a source MAC and a destination MAC address (and
payload, of course), which means you do not need any fancy features on the
switch itself. The balancing is done by the aoe module on the client side and
by ggaoed on the server side (they're actually just using all interfaces in a
round-robin fashion, with all the MAC addresses of the target they know of).
There are mainly two features the switch should support: flow control and
jumbo frames (MTU of 9000).

> BTW, does 802.3ad "mode 4" use LACP? Or am I getting mixed up?

802.3ad is LACP. LACP is short for Link Aggregation Control Protocol. (The
current IEEE name of the standard is 802.1AX.)
http://en.wikipedia.org/wiki/Link_Aggregation_Control_Protocol

-- Adi
Thanks Rob,

I guess I'll just have to get the money together first :) So I will try out
both the 802.3ad method as well as the AoE "load balanced" method.

Thanks
Jonathan

From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 14:33
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

Theoretical would be 2Gbit to the nodes and 4Gbit to the storage, so 2 nodes
could, for example, get 2Gbit bandwidth each simultaneously, but this does
come with some loss in practice and additional CPU overhead. The AoE
load-balance type situation described in the previous email allows 1Gbit to
any one node from a single device on the storage server, but up to 4 nodes
could drag 1Gbit each simultaneously from the storage server. Considering how
many nodes you are planning, the load-balanced scenario might even be
preferable to LACP/802.3ad, as long as the ATAoE target software can do a
good job, due to potentially lower CPU overhead and easier implementation of
multiple switches for redundancy.

None of these software decisions will affect your hardware choice, so it's
probably about time you got your hands dirty :)

Rob
Rob,

Regarding the number of RAID10 arrays to use, do you think that 2 x RAID10
arrays (6 disks each) + 4 hot spares would be a good compromise? I'm trying
to get the best IOPS for the level of hardware I'm using.

Thanks
Hi Rob,

You do know best :) 4 X RAID10 arrays + 4 hot spares it is then.

Thanks
Jonathan

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 16:10
To: Jonathan Tripathy
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

It really will come down to either testing the different configs or just picking a best-guess config. I would go with the 4-disk RAID 10s, since your load will be spread between so many VMs that, in my opinion, I would rather have a bit more isolation and a few more baskets with fewer eggs in each.

Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 17 June 2010 15:22
To: Robert Dunkley; xen-users@lists.xensource.com
Subject: RE: [Xen-users] RAID10 Array

Rob,

Regarding the number of RAID10 arrays to use, do you think that 2 X RAID10 arrays (6 disks each) + 4 hot spares would be a good compromise? I'm trying to get the best IOPS for the level of hardware I'm using.

Thanks

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 14:33
To: Jonathan Tripathy
Cc: kriegisch@vrvis.at
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

Theoretical would be 2Gbit to the nodes and 4Gbit to the storage, so 2 nodes could, for example, get 2Gbit of bandwidth each simultaneously, though this comes with some loss in practice and additional CPU overhead. The AoE load-balanced situation described in the previous email allows 1Gbit to any one node from a single device on the storage server, but up to 4 nodes could pull 1Gbit each simultaneously from the storage server. Considering how many nodes you are planning, the load-balanced scenario might even be preferable to LACP/802.3ad, as long as the ATAoE target software does a good job, due to potentially lower CPU overhead and easier implementation of multiple switches for redundancy.
None of these software decisions will affect your hardware choice, so it's probably about time you got your hands dirty :)

Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 17 June 2010 14:19
To: Robert Dunkley; xen-users@lists.xensource.com
Subject: RE: [Xen-users] RAID10 Array

Hi Rob,

And if I was to use, let's say, 4 teamed ports coming out of the storage server and 2 teamed ports going into the Xen node, would the max I'd get still be 1Gbit?

Thanks

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Thu 17/06/2010 14:15
To: Jonathan Tripathy
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

LACP and 802.3ad are used together on those HP SoHo switches. I might be wrong, but I think LACP allows automatic negotiation to some degree at the switch side.
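To answer the "mode 4" question in the thread: yes, Linux bonding mode 4 is 802.3ad dynamic link aggregation, and it negotiates the aggregate with the switch via LACP, so the switch ports do need to be configured as an LACP trunk. A Debian-style sketch of such a bond on the storage box (addresses and interface names are illustrative, and assume the ifenslave package is installed):

```
# /etc/network/interfaces -- sketch of a mode-4 (802.3ad/LACP) bond
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode 802.3ad                # mode 4: dynamic aggregation via LACP
    bond-miimon 100                  # link-monitoring interval in ms
    bond-xmit-hash-policy layer3+4   # hash flows across slaves by IP+port
```

Note that, as Adi says above, each individual flow still hashes onto one wire, so a single node-to-server connection tops out at 1Gbit; the aggregate only helps when several nodes hit the storage server at once.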
Sorry, I meant 3 X RAID10 (4 disks per array) + 4 hot spares.

________________________________
From: xen-users-bounces@lists.xensource.com on behalf of Jonathan Tripathy
Sent: Thu 17/06/2010 16:12
To: Robert Dunkley; xen-users@lists.xensource.com
Subject: RE: [Xen-users] RAID10 Array
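If the arrays are built with Linux software RAID, the "3 X RAID10 (4 disks each) + 4 hot spares" layout settled on above would look roughly like this (device names are illustrative and assume 16 data disks sdb..sdq, with the OS elsewhere):

```
# Three 4-disk RAID10 arrays; the four spares are attached to md2
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[f-i]
mdadm --create /dev/md2 --level=10 --raid-devices=4 \
      --spare-devices=4 /dev/sd[j-m] /dev/sd[n-q]

# In /etc/mdadm/mdadm.conf, putting all three arrays in one spare-group
# lets "mdadm --monitor" move a spare to whichever array degrades:
#   ARRAY /dev/md0 spare-group=pool
#   ARRAY /dev/md1 spare-group=pool
#   ARRAY /dev/md2 spare-group=pool
```

The spare-group trick means the four spares effectively cover all three arrays rather than being tied to one, which is what makes a shared spare pool workable with separate small arrays.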
> Hi Everyone,
>
> I have 3 RAID ideas, and I'd appreciate some advice on which would be
> better for lots of VMs for customers.
>
> My storage server will be able to hold 16 disks. I am going to export 1
> iSCSI LUN to each Xen node. 6 nodes will connect to one storage server,
> so that's 6 LUNs per server of equal size. The server will connect to a
> switch using quad-port bonded NICs (802.3ad), and each Xen node will
> connect to the switch using dual-port bonded NICs.
>
> Idea 1:
> 3 X RAID10 Arrays (4 disks per array)
> 4 Hot Spares
> 2 LUNs per array
>
> Idea 2:
> 1 X RAID10 Array (12 disks per array)
> 4 Hot Spares
> 6 LUNs per array
>
> Idea 3:
> 1 X RAID10 Array (14 disks per array)
> 2 Hot Spares
> 6 LUNs per array

Hello,

It seems to be the best solution for the global IO throughput. I used to have an EqualLogic SAN with such an architecture, but with a LUN (or two) for every VM.

Regards
On 17/06/10 21:25, jpp@jppozzi.dyndns.org wrote:
>> Idea 1:
>> 3 X RAID10 Arrays (4 disks per array)
>> 4 Hot Spares
>> 2 LUNs per array
>>
>> Idea 2:
>> 1 X RAID10 Array (12 disks per array)
>> 4 Hot Spares
>> 6 LUNs per array
>>
>> Idea 3:
>> 1 X RAID10 Array (14 disks per array)
>> 2 Hot Spares
>> 6 LUNs per array
>
> Hello,
>
> It seems to be the best solution for the global IO throughput. I used
> to have an EqualLogic SAN with such an architecture, but with a LUN (or
> two) for every VM.
>
> Regards

Which "Idea" above are you talking about?

Thanks
On Thursday 17 June 2010 12:24:45 Jonathan Tripathy wrote:
> > 2) How would I use 802.3ad "link aggregation" with ATAoE?
>
> You would not. Just use (for example) 4 NICs for AoE and the kernel driver
> will automatically balance your traffic over all available interfaces.
> The kernel knows which interfaces can be reached from the target and does
> all the balancing.
> This even allows high availability when using two switches: just connect
> half of the interfaces to one switch and the others to the other switch (do
> the same with your storage system) and you're done. You may just pull the
> power on one switch and everything will (with a very short delay) continue
> to work -- with half the bandwidth, of course.
>
> -- Adi
>
> ---------------------------------------------------------------------------
>
> Ok, so I was reading http://www.howtoforge.com/ata_over_ethernet_debian_etch
> and they "export" a partition via
>
> vbladed 0 1 eth0 /dev/sdd5
>
> If I wanted to use link aggregation, what interface would I put in? Also,
> would I need to set up my switch with 802.3ad/LACP "trunks"?
>
> Thanks

I would like to point out that the same is achievable with iSCSI by means of multipathing. Admittedly, AoE is easier to set up; it is not considered enterprise grade, however. But the latter statements are relative to me anyway: if it works, it works. Period.

Rgds,
B.
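For the iSCSI multipathing alternative mentioned above, the usual pattern is one iSCSI session per NIC (each NIC on its own subnet/portal), with dm-multipath coalescing the sessions into one block device. A minimal sketch -- all addresses and the WWID below are placeholders:

```
# Discover and log in to the target once per storage-server portal:
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
iscsiadm -m discovery -t sendtargets -p 10.0.2.10
iscsiadm -m node --login

# /etc/multipath.conf fragment -- round-robin over all live paths:
#   defaults {
#       path_grouping_policy multibus
#   }
#   multipaths {
#       multipath {
#           wwid  360000000000000000000000000000001
#           alias xen-lun0
#       }
#   }
```

The VMs then sit on /dev/mapper/xen-lun0 rather than a raw /dev/sdX, so a dead NIC or switch just drops one path instead of the LUN. (For the vblade question quoted above: vblade takes a single interface argument, so with bonding you would export over the bond itself, e.g. `vbladed 0 1 bond0 /dev/sdd5`.)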