SilverTip257
2012-Sep-06 16:19 UTC
[CentOS-virt] [Advice] CentOS6 + KVM + bonding + bridging
With the current talk on bonding, I have a few questions of my own.

I'm setting up a KVM host with CentOS 6.3 x86_64 on which I'd like to attach the VMs to a bonded interface. My target setup is one where two GigE NICs are bonded and the KVM bridge interface is then attached to the bond.

Initially I tried balance-alb (mode 6), but had little luck: receiving traffic on the bond appeared to be non-functional from the perspective of a VM. After some reading [0] [1], I switched to balance-tlb (mode 5) and hosts are now reachable. See the bottom of [0] for a note on the "known ARP problem for bridge on a bonded interface".

I'd prefer mode 5 or 6, since they balance across my slave interfaces without needing 802.3ad support (mode 4) on the switch this host will be connected to. But as it stands, mode 6 isn't going to work out for me. (Maybe experimenting with mode 4 is the way to go.)

[0] http://www.linux-kvm.org/page/HOWTO_BONDING
[1] https://lists.linux-foundation.org/pipermail/bridge/2007-April/005376.html

My question to the members of this list is: what bonding mode(s) are you using for a high-availability setup? I welcome any advice/tips/gotchas on bridging to a bonded interface.

Thanks!

---~~.~~---
Mike // SilverTip257 //
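For the curious, here's roughly the shape of the config I'm testing with (a minimal sketch: the device names bond0/br0, the example address, and the miimon value are placeholders, not my actual settings; the mode line reflects the balance-tlb setup that currently works):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
# (ifcfg-eth1 is identical apart from DEVICE=eth1)
DEVICE=eth0
ONBOOT=yes
SLAVE=yes
MASTER=bond0

# /etc/sysconfig/network-scripts/ifcfg-bond0
# No IP address here -- the bridge on top carries it.
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0
BONDING_OPTS="mode=balance-tlb miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
```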
Dennis Jacobfeuerborn
2012-Sep-06 16:28 UTC
[CentOS-virt] [Advice] CentOS6 + KVM + bonding + bridging
On 09/06/2012 06:19 PM, SilverTip257 wrote:
> With the current talk on bonding, I have a few questions of my own.
>
> I'm setting up a KVM host with CentOS 6.3 x86_64 on which I'd like to
> attach the VMs to a bonded interface.
> My target setup is one where two GigE NICs are bonded and then the KVM
> bridge interface is attached to the bonded interface.
>
> Initially I tried to use the balance-alb mode (mode6), but had little
> luck (receiving traffic on the bond appeared to be non-functional from
> the perspective of a VM). After some reading [0] [1] - I switched the
> mode to balance-tlb (mode5) and hosts are now reachable.
>
> See bottom of [0] for a note on "known ARP problem for bridge on a
> bonded interface".
>
> I'd prefer mode5 or 6 since it would balance between my slave
> interfaces and need not worry about 802.3ad support (mode4) on the
> switch this host will be connected to. But the way it seems mode 6
> isn't going to work out for me. (Maybe experimenting with mode4 is
> the way to go.)
>
> [0] http://www.linux-kvm.org/page/HOWTO_BONDING
> [1] https://lists.linux-foundation.org/pipermail/bridge/2007-April/005376.html
>
> My question to the members of this list is what bonding mode(s) are
> you using for a high availability setup?
> I welcome any advice/tips/gotchas on bridging to a bonded interface.

You probably want to either use CentOS 6.2 or wait for 6.4. Apparently there have been some changes in the network device infrastructure in the 6.3 kernels which resulted in bonding issues, especially when used with VLAN tagging. I've been bitten by this; the issues have been addressed in the Red Hat Bugzilla, but it's not entirely clear which kernel contains all the final fixes.

Regards,
  Dennis
Philip Durbin
2012-Sep-06 20:35 UTC
[CentOS-virt] [Advice] CentOS6 + KVM + bonding + bridging
On 09/06/2012 12:19 PM, SilverTip257 wrote:
> My question to the members of this list is what bonding mode(s) are
> you using for a high availability setup?
> I welcome any advice/tips/gotchas on bridging to a bonded interface.

I'm not sure I'd call this high availability... but here's an example of bonding two ethernet ports (eth0 and eth1) together into a bond (mode 4) and then setting up a bridge for a VLAN (id 375) that some VMs can run on:

[root@kvm01a network-scripts]# grep -iv hwadd ifcfg-eth0
DEVICE=eth0
SLAVE=yes
MASTER=bond0

[root@kvm01a network-scripts]# grep -iv hwadd ifcfg-eth1
DEVICE=eth1
SLAVE=yes
MASTER=bond0

[root@kvm01a network-scripts]# cat ifcfg-bond0 | sed 's/[1-9]/x/g'
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=x0.xxx.xx.xx
NETMASK=xxx.xxx.xxx.0
DNSx=xx0.xxx.xxx.xxx
DNSx=x0.xxx.xx.xx
DNSx=x0.xxx.xx.x0

[root@kvm01a network-scripts]# cat ifcfg-br375
DEVICE=br375
BOOTPROTO=none
TYPE=Bridge
ONBOOT=yes

[root@kvm01a network-scripts]# cat ifcfg-bond0.375
DEVICE=bond0.375
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
BRIDGE=br375

[root@kvm01a network-scripts]# cat /etc/modprobe.d/local.conf
alias bond0 bonding
options bonding mode=4 miimon=100

[root@kvm01a network-scripts]# grep Mode /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation

[root@kvm01a network-scripts]# egrep '^V|375' /proc/net/vlan/config
VLAN Dev name    | VLAN ID
bond0.375        | 375   | bond0

Repeat ad nauseam for the other VLANs you want to put VMs on (assuming your switch is trunking them to your hypervisor).

See also http://backdrift.org/howtonetworkbonding
via http://irclog.perlgeek.de/crimsonfu/2012-08-15#i_5900501

Phil
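On the guest side, putting a VM onto that bridge is just an interface stanza in the libvirt domain XML pointing at br375 (a minimal sketch; libvirt will generate the MAC if you leave it out):

```xml
<interface type='bridge'>
  <source bridge='br375'/>
  <model type='virtio'/>
</interface>
```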