similar to: 7.7.1908, interface bonding, and default route

Displaying 20 results from an estimated 200 matches similar to: "7.7.1908, interface bonding, and default route"

2019 Sep 20
0
7.7.1908, interface bonding, and default route
On 20/09/2019 04:55, Carlos A. Carnero Delgado wrote: > Hi! > > I just upgraded a machine to 7.7.1908 and the default route is not being > set on boot. This particular server has a bonded interface, and the > corresponding configuration for the master is ( > /etc/sysconfig/network-scripts/ifcfg-bond0): > > TYPE=Bond > BOOTPROTO=none > DEFROUTE=yes >
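For reference, a minimal sketch of the kind of ifcfg-bond0 being discussed, with the default route carried by the bond; the address, prefix and bonding mode below are illustrative placeholders, not the poster's actual values:

  # /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative values)
  TYPE=Bond
  DEVICE=bond0
  NAME=bond0
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.0.2.10
  PREFIX=24
  GATEWAY=192.0.2.1
  DEFROUTE=yes

With DEFROUTE=yes and a GATEWAY set on the bond (and DEFROUTE=no on any other addressed interface), the default route is expected to come up together with bond0.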
2016 Mar 29
2
Network bond - one port goes down from time to time
On 28.03.16 at 12:12, Leon Fauster wrote: > On 28.03.2016 at 11:27, Götz Reinicke <goetz.reinicke at filmakademie.de> wrote: >> We have three Supermicro servers with two 10Gb ports each, connected to 1Gb ports on a Cisco switch stack. All are on auto speed. >> >> I configured an LACP bond on both sides on all servers, first with Citrix XenServer. >> >> On
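For a flapping LACP member like the one in this thread, the usual first checks are the bond's own view of the aggregation and the link state of the port that drops; a short sketch (interface names are examples):

  # aggregator and per-slave state of an LACP (mode 4) bond
  cat /proc/net/bonding/bond0

  # negotiated speed/duplex and link status of the member that drops
  ethtool eth0

  # kernel messages about slaves going down and coming back
  dmesg | grep -i bond0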
2012 Sep 04
1
802.3ad + Centos 6 + KVM (bridging)
Hi all, Does anyone have 802.3ad (mode 4) working on their CentOS 6 KVM setup? Of course we are also bridging here. - aurf
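A minimal sketch of an 802.3ad bond enslaved to a bridge for KVM guests, in CentOS 6 sysconfig form; device names and the address are placeholders, and the switch side needs a matching LACP port-channel:

  # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat per slave)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BONDING_OPTS="mode=802.3ad miimon=100"
  ONBOOT=yes
  BRIDGE=br0

  # /etc/sysconfig/network-scripts/ifcfg-br0
  DEVICE=br0
  TYPE=Bridge
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0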
2015 Jul 09
1
Bond & Team: RX dropped packets
Hi all, we are testing CentOS 7 in order to migrate from Scientific Linux 6 / CentOS 6 and we are facing an issue with the network. Trying to configure the network with teaming in active-backup mode, or with bonding in mode=1 (also active-backup), we see many RX dropped packets on the bond0 interface (around 10% of the total), 100% RX drops on the backup interface and 0% on the active interface.
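When chasing drop counters like these, the per-interface statistics and the bond's notion of which slave is active are the usual starting points; a sketch with example interface names:

  # RX/TX counters, including drops, for the bond and each member
  ip -s link show bond0
  ip -s link show eth0
  ip -s link show eth1

  # which slave an active-backup (mode 1) bond currently considers active
  cat /proc/net/bonding/bond0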
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3Gbps throughput. Is there anything to tweak to get better throughput? Or am I running into other limits (e.g. I was reading about TCP retransmit limits for mode 0)? The iperf test was run with iperf -s on the
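One point worth noting for tests like this: with xmit_hash_policy=layer3+4, each TCP flow is hashed onto a single slave, so a single-stream iperf tops out at one link's speed and aggregate throughput only shows up with many parallel flows. A hedged sketch (the address is a placeholder):

  # server side
  iperf -s

  # client side: several parallel streams so the layer3+4 hash can
  # spread flows across more than one slave
  iperf -c 192.0.2.10 -P 8 -t 30

Even then, the switch's own hash policy governs how traffic toward the server is spread, so seeing less than the theoretical 4Gbps aggregate is common.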
2017 Apr 18
2
anaconda/kickstart: bonding device not created as expected
Hi, I am currently struggling with the right way to configure a bonding device via kickstart (via PXE). I am installing servers which have "eno" network interfaces. Instead of the expected bonding device with two active slaves (bonding mode is balance-alb), I get a bonding device with only one active slave and an independent, non-bonded network device. Also the bonding device
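For comparison, the kickstart network command can create the bond directly at install time; a minimal sketch, with interface names, addresses and bonding options as placeholders rather than the poster's values:

  network --device=bond0 --bondslaves=eno1,eno2 --bondopts=mode=balance-alb,miimon=100 --bootproto=static --ip=192.0.2.10 --netmask=255.255.255.0 --gateway=192.0.2.1 --onboot=yes --activate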
2017 Jun 17
3
Teaming vs Bond?
I'm looking at tuning up a new site and the bonding issue came up. A Google search reveals that the Gluster docs (and Lindsay) recommend balance-alb bonding. However, "team"ing came up, which I wasn't familiar with. It's already in RHEL 6/7 and Ubuntu, and their GitHub page implies it's stable. The libteam.org people seem to feel their solution is more lightweight, and it seems easy
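For anyone weighing the two, teaming is driven by a small JSON runner config; a sketch of a NetworkManager-managed team with the loadbalance runner, roughly the teaming counterpart of balance-alb (connection and interface names are examples):

  # team master with the loadbalance runner
  nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "loadbalance"}}'

  # two ports
  nmcli con add type team-slave con-name team0-port1 ifname eno1 master team0
  nmcli con add type team-slave con-name team0-port2 ifname eno2 master team0

  nmcli con up team0

Once it is up, teamdctl team0 state shows the runner and per-port status.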
2016 Mar 28
4
Network bond - one port goes down from time to time
Hi, maybe someone has an idea: We have three Supermicro servers with two 10Gb ports each, connected to 1Gb ports on a Cisco switch stack. All are on auto speed. I configured an LACP bond on both sides on all servers, first with Citrix XenServer. On one server eth0 goes down from time to time, sometimes within minutes; some days it stays up for hours. Two servers are fine; the bond is up for 24
2018 Oct 04
3
Need help with Linux networking interfaces and NIC bonding
Hello everyone, I am running into some strange issues when configuring networking interfaces on my physical server running CentOS 7.5. Let me give you an overview of what's going on: We have a physical server running CentOS 7.5. This server has one 4-port NIC, one 2-port NIC and a Dell iDRAC port. The first port of the 4-port NIC, em1, is used for management traffic. The first port of
2016 Mar 29
0
Network bond - one port goes down from time to time
On 3/28/2016 11:44 PM, Götz Reinicke - IT Koordinator wrote: >> How is your interface exactly configured ? > TYPE=Bond #Interface type set to bond > BOOTPROTO=static > BONDING_MASTER=yes > BONDING_OPTS="mode=4" #i set mode to active-backup > DEFROUTE=yes > IPADDR="192.168.xxx.xxx" > NETMASK=255.255.255.0 > GATEWAY="192.168.xxx.xxx"
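Worth noting when reading the quoted config: mode=4 is 802.3ad/LACP and needs a matching port-channel on the switch, whereas active-backup is mode=1, so the inline comment and the option disagree. A sketch of the two settings (the miimon/lacp_rate values are illustrative):

  # 802.3ad / LACP; the switch ports must be configured to match
  BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast"

  # active-backup; no switch-side aggregation required
  BONDING_OPTS="mode=1 miimon=100"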
2016 Aug 08
0
Help with Network configuration files
Hello, I'm trying to configure a CentOS 7 server to act as a host for a bunch of virtual servers (KVM). I have an 802.3ad bonded Ethernet connected to the server with a bunch of tagged VLANs. I want to be able to build a bridge interface on the server for each VLAN and then attach that to the bond interface and the virtual clients. I also want to attach a host interface to one of the VLANs
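A minimal sketch of the VLAN-on-bond layout being described, in CentOS 7 sysconfig form; the VLAN ID, device names and whether the host gets an address on a given bridge are placeholders:

  # /etc/sysconfig/network-scripts/ifcfg-bond0.100  (tagged VLAN 100 on the bond)
  DEVICE=bond0.100
  VLAN=yes
  ONBOOT=yes
  BRIDGE=br100

  # /etc/sysconfig/network-scripts/ifcfg-br100  (bridge the guests attach to)
  DEVICE=br100
  TYPE=Bridge
  BOOTPROTO=none
  ONBOOT=yes
  # add IPADDR/PREFIX here only on the VLAN the host itself should live on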
2019 Feb 06
2
Pb with bounding
Hi, We have a Dell server with 4 Ethernet interfaces. I would like to aggregate them in a bond. Everything works, but the default gateway doesn't work on the "bond0" interface and I have no link. My configuration: - CentOS 7: :/etc/sysconfig/network-scripts# uname -a Linux nas-mtd2 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux - NetworkManager disabled:
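For a symptom like this, a quick way to separate a routing problem from a link problem is to inspect the route table and add the default route by hand; a sketch, with the gateway address as a placeholder:

  # is there a default route at all, and via which interface?
  ip route show default

  # add one manually to confirm the gateway is reachable over bond0
  ip route add default via 192.168.1.1 dev bond0
  ping -c 3 192.168.1.1

  # for persistence with network.service (NetworkManager disabled),
  # GATEWAY= belongs in ifcfg-bond0 or /etc/sysconfig/network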
2017 Apr 19
0
anaconda/kickstart: bonding device not created as expected
On 18/04/2017 15:54, Frank Thommen wrote: > Hi, > > I am currently struggling with the right way to configure a bonding > device via kickstart (via PXE). > > I am installing servers which have "eno" network interfaces. Instead of > the expected bonding device with two active slaves (bonding mode is > balance-alb), I get a bonding device with only one active slave
2019 Sep 24
2
CO 7.7.1908 Updates not getting to mirrors?
I *know* there has been a lot going on, and congratulations on getting CentOS 8 out! But(!), I don't see any updates to CO 7.7.1908 in the "updates" directory on the mirrors I typically use. All the files date from Sept. 14th. Is something broken? -- Matt Phelps, Information Technology Specialist, Systems Administrator (Computation Facility, Smithsonian Astrophysical
2019 Aug 30
0
CentOS CR Released with 7.7.1908 Packages
You guys may have noticed that the 7.7.1908 packages for the GA release have been posted to QA for all arches (x86_64, ppc64le, aarch64, ppc64, armhfp, i386). These are just the items that will be in the os/ (base) repository and not the zero-day updates. We are working on the updates now. Announcements here: https://lists.centos.org/pipermail/centos-cr-announce/2019-August/thread.html Thanks,
2019 Sep 24
0
CO 7.7.1908 Updates not getting to mirrors?
On Tue, Sep 24, 2019 at 12:08 PM Phelps, Matthew <mphelps at cfa.harvard.edu> wrote: > > I *know* there has been a lot going on, and congratulations on getting > CentOS 8 out! > > But(!), I don't see any updates to CO 7.7.1908 in the "updates" directory > on the mirrors I typically use. All the files date from Sept. 14th. > > Is something broken? >
2019 Oct 15
0
Odd issue with 7.7.1908 updated with qemu-kvm-ev
Hi, > So, I have a client that has an internal use application that needs an > ancient version of libc5. That's not a typo; libc5. Before the server > that ran it died about a year and a half ago (said server was an AMD > K6-2/450 with a 6GB Western Digital Caviar drive that had been spinning > nearly continuously for almost 20 years!) it was running on Red Hat > Linux
2019 Oct 15
2
Odd issue with 7.7.1908 updated with qemu-kvm-ev
So, I have a client that has an internal use application that needs an ancient version of libc5. That's not a typo; libc5. Before the server that ran it died about a year and a half ago (said server was an AMD K6-2/450 with a 6GB Western Digital Caviar drive that had been spinning nearly continuously for almost 20 years!) it was running on Red Hat Linux 5.2. The last version of CentOS
2019 Sep 20
2
7.7.1908, interface bonding, and default route
On Fri, Sep 20, 2019 at 06:16, Giles Coochey (giles at coochey.net) wrote: > I have a similar set up to you, and just did the upgrade to 1908, I > didn't experience the problem you had, I can't see anything out of the > ordinary in your network files. > I have reviewed the configuration several times now, and still can't see if there's anything wrong
2019 Sep 20
0
7.7.1908, interface bonding, and default route
On 2019-09-20 15:31, Carlos A. Carnero Delgado wrote: > On Fri, Sep 20, 2019 at 06:16, Giles Coochey > (giles at coochey.net) > wrote: > >> I have a similar set up to you, and just did the upgrade to 1908, I >> didn't experience the problem you had, I can't see anything out of the >> ordinary in your network files. >> > > I