Hi all,

Read many posts on the subject.

Using 802.3ad.

A few problems:

Cannot ping some hosts on the network, even though they are all up.
Cannot resolve via DNS; the DNS server is one of the hosts I cannot ping. This applies to internal and external DNS hosts alike.
Unplugging the NICs and plugging them back in then prevents pinging the default gateway.

When cold booting it somewhat works: some hosts are pingable while others are not.

When restarting the network service via /etc/init.d/network, nothing is pingable.

Here are my configs:

ifcfg-bond0
DEVICE=bond0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.10
NETMASK=255.255.0.0
NETWORK=10.0.0.0
TYPE=Unknown
IPV6INIT=no

ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

/etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=5 miimon=100

Bonding worked great in CentOS 5.x, not so well for me in CentOS 6.2.

My goal is to get this working under bridging for KVM; I can only imagine the nightmare, seeing as I can't get a simple bond to work!

Any guidance is golden.

- aurf
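P.S. In case it helps with diagnosis: the bonding driver reports its negotiated state (mode, per-slave link status, and for 802.3ad the aggregator IDs) through a standard proc interface, so a quick first check is:

cat /proc/net/bonding/bond0

For 802.3ad it should report "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and both slaves should show the same Aggregator ID; if the IDs differ, the LACP negotiation with the switch has likely failed.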
SORRY, typo:

> options bond0 mode=5 miimon=100

is really

options bond0 mode=4 miimon=100

On May 13, 2012, at 11:45 AM, aurfalien wrote:

> [...]
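For what it's worth: mode=4 is 802.3ad (LACP), and it only behaves if the switch ports the two NICs are cabled into are configured as an LACP aggregation group; a mismatch there can produce exactly this kind of partial reachability. A minimal sketch of the same settings done the ifcfg way, assuming the switch side is set up for LACP (the extra option values are illustrative, not tested here):

BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=slow xmit_hash_policy=layer2+3"

That line goes in ifcfg-bond0; lacp_rate and xmit_hash_policy are optional and can be left at their defaults.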
On 5/13/2012 11:45 AM, aurfalien wrote:

> [...]

I spent two months on bonding two NICs to a bridge inside the same box. There is a bug about this, very prominent in the Fedora bugzillas. You cannot do it without some modification: libvirt loses some VMs, and I know of no way to make it work except for the changes suggested in those bug reports, which I did not try. If you look in your libvirt logs you will see XML bond errors, and thus it is impossible to do inside the box as-is.

Note this only applies if the bonded NICs and the bridge are all in the same box. Also, the bonding options should no longer go in bonding.conf but in the ifcfg network-script files (BONDING_OPTS).

In all my testing, every VM worked except the one assigned vnet0; that one always got "lost". However, any attempt by that VM to send traffic out to the net would cause it to be found again. This bug was not fixed in 6 or in the latest Fedora when I last checked, though there are self-made patches in the Fedora bugzilla.
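For reference, the usual EL6 layout for a bridge on top of a bond keeps the bond IP-less and moves the address to the bridge. A rough sketch, with br0 as an assumed bridge name and illustrative BONDING_OPTS values:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
BRIDGE=br0
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
DELAY=0
IPADDR=10.0.0.10
NETMASK=255.255.0.0

The slave ifcfg-eth0/ifcfg-eth1 files stay as in the original post.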
On Sun, May 13, 2012 at 5:45 PM, aurfalien <aurfalien at gmail.com> wrote:

> [...]

Note I'm speaking of bonding only, not bridging, here. These days bonding is supposed to be done in the network-script files, not modprobe.conf:

# ifcfg-bond0:
DEVICE=bond0
IPADDR=10.0.0.6
NETMASK=255.255.255.0
#NETWORK=
#BROADCAST=
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=active-backup primary=em1 arp_interval=2000 arp_ip_target=10.0.0.1 arp_validate=all num_grat_arp=12 primary_reselect=failure"

Adjust accordingly.

--
Mikael.
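As a usage sketch of the above, assuming the file locations from the original post: the options line comes out of /etc/modprobe.d/bonding.conf (the alias line can stay) so it no longer conflicts with BONDING_OPTS, and then the network service is restarted and the result checked:

# /etc/modprobe.d/bonding.conf -- options moved to BONDING_OPTS in ifcfg-bond0
alias bond0 bonding

service network restart
cat /proc/net/bonding/bond0

The BONDING_OPTS values themselves would be swapped for 802.3ad ones in the original poster's case, since the example above is active-backup.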
On 05/13/2012 11:45 AM, aurfalien wrote:

> [...]

I run KVM VMs, built and managed using libvirt, through bonded interfaces all the time. I don't have a specific tutorial for this alone, but I cover all the steps to build a mode=1 (Active/Passive) bond, and then routing VMs through it, as part of a larger tutorial. Here are the specific sections I think will help you:

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Network

This section covers building 3 bonds, of which you only need one. In the tutorial, you only need to care about the "IFN" bond and bridge (bond2 + vbr2).

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Provisioning_vm0001-dev

This covers all the steps used in the 'virt-install' call to provision the VMs, which includes telling them to use the bridge.

Hope that helps.

--
Digimer
Papers and Projects: https://alteeve.com
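As a very rough illustration of what that provisioning step looks like (a sketch only; the tutorial has the authoritative command, and the guest name, sizes, disk path, ISO path, and os-variant below are made-up placeholders):

virt-install --name vm0001-dev \
  --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/vm0001-dev.img,size=20 \
  --network bridge=vbr2 \
  --os-variant rhel6 \
  --cdrom /path/to/install.iso

The key piece for the bonding question is --network bridge=vbr2, which attaches the guest's NIC to the bridge that rides on the bond.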