I currently have my 3 Xen servers set up to use bonding and VLAN trunking over all the available network cards in each server. Two of the three machines have 4 NICs; the other has 2. Each is set to use the LACP channel protocol, if that matters. What I am trying to accomplish is accessing my iSCSI SAN (Dell MD3200i) without having to drop half of my NICs out of the bond. Should this be possible? The SAN is set up to use VLAN 20, while all other traffic is on VLANs 2-17. Has anyone done this successfully, or do I just need to drop out half of the NICs and be done with it?

Donny B.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
You can bond your NICs from the dom0 to the switch, but there's no way to do the same for the MD3200i, as far as I am aware. So your SAN traffic won't really make use of link aggregation.

For our MD3200i we dedicated two switches and two NICs per host, using multipath. That gives us high availability and 2 Gbps (theoretical) bandwidth. Works great. We started with 4 NICs per host, however.

> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Donny Brooks
> Sent: Wednesday, February 02, 2011 3:16 PM
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] bonding with trunking AND iscsi
>
> I currently have my 3 xen servers setup to use bonding and vlan trunking over all the
> available network cards in each server. [...]
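For reference, the kind of two-switch multipath layout Jeff describes is typically driven by open-iscsi plus dm-multipath. A rough sketch, assuming one NIC per dedicated iSCSI switch; the portal IP and interface names here are made up for illustration, not taken from the thread:

```shell
# Two NICs, one per dedicated iSCSI switch (names/addresses are hypothetical).
# Bind an open-iscsi interface to each NIC:
iscsiadm -m iface -I iface0 --op new
iscsiadm -m iface -I iface0 --op update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface1 --op new
iscsiadm -m iface -I iface1 --op update -n iface.net_ifacename -v eth3

# Discover the MD3200i portals and log in over both paths:
iscsiadm -m discovery -t sendtargets -p 10.0.20.100 -I iface0 -I iface1
iscsiadm -m node --login

# dm-multipath then coalesces both paths into a single block device:
multipath -ll
```

With both sessions up, dm-multipath spreads I/O across the two paths and survives the loss of either switch, which is where the "high availability and 2 Gbps (theoretical)" figure comes from.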
Ok, so basically give up on the shared bond and just dedicate half the NICs to the SAN. Gotcha. Thanks!

On 2/2/2011 3:12 PM, Jeff Sturm wrote:
> You can bond your NICs from the dom0 to the switch, but there's no way
> to do the same for the MD3200i, as far as I am aware. So your SAN
> traffic won't really make use of link aggregation.
>
> For our MD3200i we dedicated two switches and two NICs per host, using
> multipath. That gives us high availability and 2Gbps (theoretical)
> bandwidth. Works great. We started with 4 NICs per host, however.
> [...]
Hey Donny,

as Jeff already stated, you can bond multiple NICs connected to the same switch into a single trunk/logical link and put VLAN interfaces on top of that (with some hacks; see the kernel documentation on bonding). LACP link aggregation might not scale the bandwidth as you expect it to, though. For each packet, the bonding code decides which slave (NIC) to use for output based on a hash calculated from layer 2 (MAC), layer 2+3 (MAC+IP), or layer 3+4 (IP+ports). This means a single packet flow (e.g. I/O traffic with a single iSCSI target) from one host to another will never use more than one NIC's worth of bandwidth. In fact, since there is no dynamic load balancing or round robin, multiple iSCSI packet flows might even end up sharing one NIC while the other slaves (NICs) sit idle.

This is why I dropped the VLAN-over-LACP-trunk idea. ;-)

Regards, Linus
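Linus's point can be illustrated with the layer3+4 transmit hash as documented in the kernel's bonding documentation: ((source port XOR dest port) XOR ((source IP XOR dest IP) AND 0xffff)) modulo slave count. Since every packet of one iSCSI session has the same addresses and ports, every packet hashes to the same slave. A small sketch; all addresses and ports below are invented examples:

```shell
#!/bin/sh
# Sketch of the layer3+4 transmit hash from the kernel bonding docs:
#   ((sport XOR dport) XOR ((saddr XOR daddr) AND 0xffff)) modulo slave_count
# Addresses and ports are hypothetical, for illustration only.

slave_count=4

# Low 16 bits of source/destination IPs (e.g. 10.0.20.11 -> 0x140b).
saddr=0x140b          # initiator, 10.0.20.11
daddr=0x1464          # iSCSI target portal, 10.0.20.100

hash_slave() {
    sport=$1
    dport=$2
    echo $(( ( (sport ^ dport) ^ ((saddr ^ daddr) & 0xffff) ) % slave_count ))
}

# One iSCSI session: fixed source port, target port 3260.
# Every packet of this flow yields the same slave index:
hash_slave 51000 3260   # -> 3
hash_slave 51000 3260   # -> 3, same inputs, same NIC, always
```

Only a second flow with different ports or addresses can land on a different slave, which is why a single-target iSCSI session never exceeds one link's bandwidth under LACP.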
On Fri, Feb 11, 2011 at 6:16 AM, Linus van Geuns <linus@vangeuns.name> wrote:
> For each packet, the bonding code decides which slave (NIC) to use for
> output using a hash calculated from layer2(MAC), layer2+3(MAC+IP) or
> layer3+4(IP+ports). This means a single packet flow (eg I/O traffic
> with a single iSCSI target) from one host to another will never use
> more than one NIC (bandwidth).
> In fact, as there is no dynamic load balancing or round robin, you
> might even share one NIC for multiple iSCSI packet flows and have the
> other slaves (NICs) idle.

I'm pretty sure you can choose round robin in Linux. From iputils' README.bonding:

    mode

        Specifies one of the bonding policies. The default is
        balance-rr (round robin). Possible values are:

        balance-rr or 0

            Round-robin policy: Transmit packets in sequential
            order from the first available slave through the
            last. This mode provides load balancing and fault
            tolerance.

> This is why I dropped the VLAN over LACP trunk idea. ;-)

IMHO VLAN over bonding is a good idea. Just make sure you manage the setup using the OS' configuration (/etc/sysconfig/network-scripts/ifcfg-* on RH), and NOT rely on xend's default network-bridge script.

-- 
Fajar
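The RH-style setup Fajar mentions might look roughly like this. A sketch only: device names, the bonding options, the VLAN ID, and the address are example values, not a tested configuration from the thread:

```shell
# Sketch of RH-style network-scripts for VLAN-over-bond
# (all names, options and addresses are hypothetical examples).

cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
EOF

# Each physical NIC is enslaved to the bond:
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
EOF

# VLAN 20 interface on top of the bond (the iSCSI VLAN in this thread):
cat > /etc/sysconfig/network-scripts/ifcfg-bond0.20 <<'EOF'
DEVICE=bond0.20
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.20.11
NETMASK=255.255.255.0
EOF
```

Keeping this in the distribution's own ifcfg files, and pointing the Xen bridge at the resulting interfaces, is exactly what avoids relying on xend's network-bridge script.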
On 02/11/11 03:39, Fajar A. Nugraha wrote:
> I'm pretty sure you can choose round robin in Linux. From iputils'
> README.bonding:
> [...]
> IMHO VLAN over bonding is a good idea. Just make sure you manage the
> setup using OS' configuration (/etc/sysconfig/network-scripts/ifcfg-* on
> RH), and NOT rely on xend's default network-bridge script.

I use it and it provides me with almost double the bandwidth. For balance-rr you need to use different switches, though, or switch to 802.3ad mode and use a switch that supports it.

B.
On Fri, Feb 11, 2011 at 3:39 AM, Fajar A. Nugraha <list@fajar.net> wrote:
[..]
> I'm pretty sure you can choose round robin in Linux. From iputils'
> README.bonding:
> [...]

If you switch "mode" to LACP (802.3ad), bonding uses a hash algorithm to decide which slave to transmit any frame over. From Documentation/networking/bonding.txt.gz:

[..]
    mode

        Specifies one of the bonding policies. The default is
        balance-rr (round robin). Possible values are:

        [..]

        802.3ad or 4

            IEEE 802.3ad Dynamic link aggregation. Creates
            aggregation groups that share the same speed and
            duplex settings. Utilizes all slaves in the active
            aggregator according to the 802.3ad specification.

            Slave selection for outgoing traffic is done according
            to the transmit hash policy, which may be changed from
            the default simple XOR policy via the xmit_hash_policy
            option, documented below. Note that not all transmit
            policies may be 802.3ad compliant, particularly in
            regards to the packet mis-ordering requirements of
            section 43.2.4 of the 802.3ad standard. Differing peer
            implementations will have varying tolerances for
            noncompliance.
[..]

>> This is why I dropped the VLAN over LACP trunk idea. ;-)
>
> IMHO VLAN over bonding is a good idea. Just make sure you manage the setup
> using OS' configuration (/etc/sysconfig/network-scripts/ifcfg-* on RH), and
> NOT rely on xend's default network-bridge script.

Sure. You could, for instance, create a bond using two interfaces, each connected to a different switch, and use "active-backup" to avoid losing storage connectivity if one switch fails (resulting in loss of link).
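The active-backup variant suggested above can be set up at runtime through the bonding sysfs interface. A minimal sketch, assuming the bonding module is loaded and eth2/eth3 (names are examples) are cabled to different switches:

```shell
# Sketch: active-backup bond across two switches via the bonding sysfs
# interface (interface names are hypothetical; run as root).
modprobe bonding

# Create a second bond and set its mode while it is still down,
# before any slaves are attached:
echo +bond1 > /sys/class/net/bonding_masters
echo active-backup > /sys/class/net/bond1/bonding/mode
echo 100 > /sys/class/net/bond1/bonding/miimon   # link-monitor interval (ms)

# Slaves must be down when they are enslaved:
ip link set eth2 down
echo +eth2 > /sys/class/net/bond1/bonding/slaves  # path via switch A
ip link set eth3 down
echo +eth3 > /sys/class/net/bond1/bonding/slaves  # path via switch B

ip link set bond1 up
```

In this mode only one slave carries traffic at a time, so there is no bandwidth gain, but a failed switch only triggers a failover rather than a loss of storage connectivity.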
If you want to use a load-balancing mode that is not recognized by the network equipment in your flow path, you should test for possible side effects first.

Regards, Linus