Displaying 20 results from an estimated 4000 matches similar to: "Bonded interfaces - testing"
2014 Jun 17
1
CentOS 6 - Ethernet Bond Errors, 1 per frame
# modinfo ixgbe
filename:
/lib/modules/2.6.32-431.el6.x86_64/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.15.1-k
license: GPL
description: Intel(R) 10 Gigabit PCI Express Network Driver
author: Intel Corporation, <linux.nics at intel.com>
srcversion: B390E9D9904338B52C2E361
I have updated this to 3.18.7-1 as well, same results
# ifconfig bond1 |grep error
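Not from the original post, but a quick way to see where the errors are actually being counted is to compare the bond's counters with each member NIC's driver statistics (slave names below are only examples):

# per-interface RX/TX error counters as the kernel sees them
ip -s link show bond1
# driver-level error counters for each slave (interface names are illustrative)
ethtool -S eth4 | grep -i err
ethtool -S eth5 | grep -i err
# the bonding driver's own view of its slaves
cat /proc/net/bonding/bond1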
2011 Feb 16
1
NIC bonding - missing eth0?
I have NIC bonding (mode=802.3ad) set up on 2 servers, both running CentOS 5.5.
In the "Active Aggregator Info", one server reports 4 ports - which is correct - but the other reports only 3 ports.
It's always eth0 that shows a different aggregator ID. Changing the cables around so it hits a different port on the switch makes no difference. The switch is correctly configured for the port
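A minimal check (not from the original mail; the bond name is assumed) that lists each slave next to the aggregator it joined, which is where a misbehaving eth0 stands out:

# one "Aggregator ID" line per slave, plus the active aggregator in the 802.3ad section
grep -E "Slave Interface|Aggregator ID" /proc/net/bonding/bond0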
2012 Sep 14
1
Bonding Eth interfaces - unexpected results
CentOS 6.2.........
Why do the physical interfaces report (correctly) that they're negotiated at 1000Mb/s, but when I `cat /proc/net/bonding/bond0` I get 100Mbps for the member interfaces, and when I `mii-tool bond0` I get 10Mbps for the bond?
-----------------------------------------------------------------------------------------
ethtool em1
Settings for em1:
Supported ports: [
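The ethtool output is cut off above, but one way to line the three views up (a sketch; the second interface name is assumed) is to read the speed from each layer directly. mii-tool only speaks the legacy MII ioctls, so ethtool and the bonding status file are generally the more trustworthy views:

# negotiated speed of the physical members
ethtool em1 | grep Speed
ethtool em2 | grep Speed
# speed the bonding driver recorded for each slave
grep -E "Slave Interface|Speed" /proc/net/bonding/bond0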
2013 Aug 12
0
bonding interface unstable
Hi all,
I recently found that some frontend servers (CentOS 6) show:
kernel: bonding: bond0: link status definitely up for interface eth2.
kernel: bonding: bond0: link status definitely down for interface eth2, disabling it
kernel: bonding: bond0: link status definitely up for interface eth2.
On some days this happens frequently, on others not at all.
But there is no hardware failure or anything similar.
$ cat
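The quoted command is truncated, but if the link really is bouncing (rather than the driver misreporting), a common way to dampen the flapping is to add up/down delays to the bond options. A sketch with illustrative values for /etc/sysconfig/network-scripts/ifcfg-bond0:

# wait 5000 ms of stable carrier before re-enabling a slave,
# and 1000 ms before declaring it down (both are rounded to multiples of miimon)
BONDING_OPTS="miimon=100 updelay=5000 downdelay=1000"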
2011 Feb 16
2
NIC bonding - missing eth0?
I have NIC bonding (mode=802.3ad) set up on 2 servers, both running CentOS 5.5.
In the "Active Aggregator Info", one server reports 4 ports - which is correct - but the other reports only 3 ports.
It's always eth0 that shows a different aggregator ID. Changing the cables around so it hits a different port on the switch makes no difference. The switch is correctly configured for the
2019 Sep 20
0
7.7.1908, interface bonding, and default route
On 20/09/2019 04:55, Carlos A. Carnero Delgado wrote:
> Hi!
>
> I just upgraded a machine to 7.7.1908 and the default route is not being
> set on boot. This particular server has a bonded interface, and the
> corresponding configuration for the master is (
> /etc/sysconfig/network-scripts/ifcfg-bond0):
>
> TYPE=Bond
> BOOTPROTO=none
> DEFROUTE=yes
>
2009 Oct 06
1
Bond Issues
I have a machine I just deployed with tg3 interfaces; I have set up bonding
on this same line of servers (HP DL380 G4) a million times. I saw there were
changes recently to how you configure a bond and have my setup configured
according to: http://kbase.redhat.com/faq/docs/DOC-7431
The HP switch has a LACP trunk defined on the two ports. Problem is, when
rebooting, I need to issue a `service network
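The message is cut off above; for reference, the ifcfg layout that KB approach expects looks roughly like this (a sketch, with the mode and names assumed rather than taken from the post):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 is analogous)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes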
2011 Jan 17
2
nic bonding
I've just set up NIC bonding on our server (DL585-G7 running CentOS 5.5 x86_64) as detailed on the wiki: http://wiki.centos.org/TipsAndTricks/BondingInterfaces and all seems fine, but from other "howtos" I've seen on the web, there should be a /proc/net/bond0/info
As far as I can see, I don't have one and I'm not sure if it should be there or its absence is a
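For what it's worth, on 2.6 kernels the per-bond status file lives under /proc/net/bonding/ rather than /proc/net/bond0/info, so the check those howtos are pointing at is probably:

# status file maintained by the bonding driver, one per bond device
cat /proc/net/bonding/bond0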
2019 Feb 06
2
Problem with bonding
Hi,
We have a Dell server with 4 Ethernet interfaces. I would like to aggregate them in a bond. Everything works, but the default gateway doesn't work on the "bond0" interface and I have no links.
My configuration:
- CentOS 7:
:/etc/sysconfig/network-scripts# uname -a
Linux nas-mtd2 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- NetworkManager disabled:
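Not part of the original message, but two checks that usually narrow down a missing gateway and "no links" on a bond: the per-slave state kept by the bonding driver, and the route the kernel actually installed:

# link state of the bond and of every slave
grep -E "Slave Interface|MII Status" /proc/net/bonding/bond0
# is a default route present at all, and on which device?
ip route show default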
2008 Dec 01
4
Bonding and network cards
Hi,
I have been playing this weekend with bonding on PCI network cards and found
that all of the cards I have, except an old 3Com, do not support MII. So
bonding is not going to happen with them. Do you have some PCI network cards
supporting MII that are successfully running with bonding?
Thanks,
David Hrbáč
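Two quick ways to test whether a given card answers link-status queries (the interface name is illustrative); the bonding driver can also be told to watch netif_carrier instead of issuing MII ioctls via its use_carrier option:

# legacy MII ioctl view (what miimon falls back to on old drivers)
mii-tool eth0
# newer drivers report link state through ethtool instead
ethtool eth0 | grep "Link detected"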
2019 Sep 20
2
7.7.1908, interface bonding, and default route
Hi!
I just upgraded a machine to 7.7.1908 and the default route is not being
set on boot. This particular server has a bonded interface, and the
corresponding configuration for the master is (
/etc/sysconfig/network-scripts/ifcfg-bond0):
TYPE=Bond
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
NAME=bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=10.3.20.131
PREFIX=24
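One thing worth checking (an assumption on my part, not something stated in the thread) is whether the gateway is declared on the bond itself; the 10.3.20.1 address below is hypothetical:

# appended to /etc/sysconfig/network-scripts/ifcfg-bond0
GATEWAY=10.3.20.1

# or, equivalently, a static default route bound to the bond
# /etc/sysconfig/network-scripts/route-bond0
default via 10.3.20.1 dev bond0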
2011 Aug 26
0
Using bonded interfaces for xen dom0 (debian)
Hello,
Where can I find a link (or docs) to a *working* network config for xen 4.1.2?
My tests (see below) were not successful.
Thank you in advance for any hints.
Regards, Mark
# ---
root@xen411dom0:~# cat /etc/network/interfaces
# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.
auto lo
iface lo inet loopback
auto bond0
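The quoted config is cut off right after "auto bond0"; a typical continuation on Debian with the ifenslave and bridge-utils packages looks roughly like this (mode, addresses and interface names are illustrative, not from the original mail):

iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

# dom0 bridge for the guests, with the bond as its only port
auto xenbr0
iface xenbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0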
2016 Jan 30
1
bonding (IEEE 802.3ad) not working with qemu/virtio
On 01/30/2016 07:59 AM, David Miller wrote:
> From: Nikolay Aleksandrov <nikolay at cumulusnetworks.com>
> Date: Fri, 29 Jan 2016 22:48:26 +0100
>
>> On 01/29/2016 10:45 PM, Jay Vosburgh wrote:
>>> Nikolay Aleksandrov <nikolay at cumulusnetworks.com> wrote:
>>>
>>>> On 01/25/2016 05:24 PM, Bjørnar Ness wrote:
>>>>> As subject
2016 Jan 29
5
bonding (IEEE 802.3ad) not working with qemu/virtio
On 01/29/2016 10:45 PM, Jay Vosburgh wrote:
> Nikolay Aleksandrov <nikolay at cumulusnetworks.com> wrote:
>
>> On 01/25/2016 05:24 PM, Bjørnar Ness wrote:
>>> As subject says, 802.3ad bonding is not working with virtio network model.
>>>
>>> The only errors I see is:
>>>
>>> No 802.3ad response from the link partner for any adapters
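For anyone hitting the same symptom, a quick way to confirm that no LACP partner is answering (a diagnostic sketch, not taken from the thread) is the 802.3ad section of the bond's status file inside the guest; an all-zero partner MAC there means no LACPDU ever came back:

# run inside the guest
grep -A 10 "802.3ad info" /proc/net/bonding/bond0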
2017 Mar 29
0
Mixed bonding and vlan on plain adapter possible?
Hello,
I have a CentOS 7.3 + updates server where my configuration arises from the
need to connect via iSCSI to a Dell PS Series storage array, with Dell not
supporting bonding for that.
So I need to use 2 NICs on the same VLAN to connect to the iSCSI portal IP
and then use multipath.
Also, the iSCSI LAN is on a dedicated VLAN.
I have only these 2 x 10Gbit adapters and I also need to put other VLANs on
them
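The VLAN-on-top-of-the-bond part of that, at least, is straightforward on CentOS 7; a sketch with a made-up VLAN ID and address:

# /etc/sysconfig/network-scripts/ifcfg-bond0.200
DEVICE=bond0.200
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.200.10
PREFIX=24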
2015 Oct 02
0
Kickstarting bonded interfaces
Since CentOS 6.4, anaconda supports kickstarting from bonded interfaces. Has anyone managed to get this working?
Bonding modes 1, 5 and 6 work fine, and they do not need any particular support on the switch. But modes 0 and 2-4 are a different story; no luck here.
network --onboot yes --device bond0 --activate --bootproto static --bondslaves=eth0,eth1 --bondopts=mode=balance-rr,miimon=100 --ip 1.2.3.4
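For comparison, the LACP (mode 4) variant that gives no luck here would be along these lines (a sketch that reuses the addressing from the line above; the extra bond options are illustrative):

network --onboot yes --device bond0 --activate --bootproto static --bondslaves=eth0,eth1 --bondopts=mode=802.3ad,miimon=100 --ip 1.2.3.4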
2016 Jan 30
0
bonding (IEEE 802.3ad) not working with qemu/virtio
From: Nikolay Aleksandrov <nikolay at cumulusnetworks.com>
Date: Fri, 29 Jan 2016 22:48:26 +0100
> On 01/29/2016 10:45 PM, Jay Vosburgh wrote:
>> Nikolay Aleksandrov <nikolay at cumulusnetworks.com> wrote:
>>
>>> On 01/25/2016 05:24 PM, Bjørnar Ness wrote:
>>>> As subject says, 802.3ad bonding is not working with virtio network model.
2014 Sep 17
2
lost packets - Bond
Guys, good afternoon
I'm using my bond interfaces in active-backup mode; in theory, one interface should only take over
(or carry traffic) when the other interface is down.
But I'm seeing lost packets on the interface that is not being used, and this is generating
packet loss on the bond.
What can that be?
My bond settings follow:
[root at xxxxx ~]# ifconfig bond0 ; ifconfig eth0 ; ifconfig eth1
bond0
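Not part of the output above, but a check that usually narrows this down in active-backup mode: which slave the bond currently considers active, and whether either slave has recorded link failures:

grep -E "Currently Active Slave|Slave Interface|MII Status|Link Failure Count" /proc/net/bonding/bond0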