Displaying 6 results from an estimated 6 matches for "xmit_hash_policy".
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. The bonding mode is
802.3ad with xmit_hash_policy=layer3+4. When testing this setup with iperf,
I never get more than a total of about 3 Gbps of throughput. Is there anything
to tweak to get better throughput, or am I running into other limits? (For
example, I was reading about TCP retransmit limits for mode 0.)
The iperf test was run with iperf -s on the s...
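Worth noting when reading this: with xmit_hash_policy=layer3+4, each TCP flow
hashes onto a single slave, so one stream can never exceed one link's 1 Gbps,
and the aggregate only scales when several flows land on different slaves. A
minimal way to exercise that, as a sketch (the exact invocation used in the
thread is truncated above; 192.0.2.10 is a placeholder server address):

    # server
    iperf -s
    # client: several parallel streams, each with its own source port,
    # so the layer3+4 hash can spread them across the four slaves
    iperf -c 192.0.2.10 -P 8 -t 30

Even then the hash is statistical rather than a scheduler: with only a handful
of flows, collisions can leave a slave idle, so roughly 3 Gbps across four
1 Gbps links is not unusual.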
2019 Sep 20
2
7.7.1908, interface bonding, and default route
...for the master is
(/etc/sysconfig/network-scripts/ifcfg-bond0):
TYPE=Bond
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
NAME=bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=10.3.20.131
PREFIX=24
GATEWAY=10.3.20.1
DNS1=10.3.2.8
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad xmit_hash_policy=layer2 miimon=100"
The slaves (two of them) are configured like this:
TYPE=Ethernet
BOOTPROTO=none
NAME=bond0-slave0
DEVICE=em3
ONBOOT=yes
MASTER=bond0
SLAVE=yes
After booting, the routing table is
10.3.20.0/24 dev bond0 proto kernel scope link src 10.3.20.131 metric 300
with no...
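If the default route is missing even though ifcfg-bond0 carries GATEWAY= and
DEFROUTE=yes, one workaround under the legacy network-scripts is a static
route file for the bond; a sketch, reusing the gateway from the config above:

    # /etc/sysconfig/network-scripts/route-bond0
    default via 10.3.20.1 dev bond0

After restarting the network service, ip route should then list the default
entry next to the connected 10.3.20.0/24 route.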
2019 Sep 20
0
7.7.1908, interface bonding, and default route
...=Bond
> BOOTPROTO=none
> DEFROUTE=yes
> IPV4_FAILURE_FATAL=yes
> NAME=bond0
> DEVICE=bond0
> ONBOOT=yes
> IPADDR=10.3.20.131
> PREFIX=24
> GATEWAY=10.3.20.1
> DNS1=10.3.2.8
> BONDING_MASTER=yes
> BONDING_OPTS="mode=802.3ad xmit_hash_policy=layer2 miimon=100"
>
> The slaves (two of them) are configured like
>
> TYPE=Ethernet
> BOOTPROTO=none
> NAME=bond0-slave0
> DEVICE=em3
> ONBOOT=yes
> MASTER=bond0
> SLAVE=yes
>
> After booting, the routing table is
>
> 10.3.20....
2017 Jun 17
3
Teaming vs Bond?
I'm looking at tuning up a new site, and the bonding issue came up.
A Google search reveals that the Gluster docs (and Lindsay) recommend
balance-alb bonding.
However, "team"ing came up, which I wasn't familiar with. It's already in
RHEL 6/7 and Ubuntu, and their GitHub page implies it's stable.
The libteam.org people seem to feel their solution is more lightweight,
and it seems easy
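For comparison, teamd is configured through a small JSON file rather than
module parameters or BONDING_OPTS; a minimal sketch (the loadbalance runner
is roughly comparable to balance-alb, and em1/em2 are placeholder interface
names):

    # /etc/teamd/team0.conf
    {
      "device": "team0",
      "runner": { "name": "loadbalance" },
      "link_watch": { "name": "ethtool" },
      "ports": { "em1": {}, "em2": {} }
    }

    # run it, daemonized, against that config
    teamd -f /etc/teamd/team0.conf -d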
2018 May 23
0
Unable to connect VMs to a bridge over a bonded network on Debian 9 (works fine on CentOS 7.4)
....62
#    netmask 255.255.255.0
#    gateway 192.168.1.1
#    dns-nameservers 192.168.1.241 192.168.1.104
#    mtu 9000

auto enp1s0f1
iface enp1s0f1 inet manual
    bond-master bond0
#    mtu 9000

auto bond0
iface bond0 inet static
    bond-miimon 100
    bond-mode 6
    bond-updelay 200
    bond-xmit_hash_policy layer3+4
    bond-lacp-rate 1
#    mtu 9000
#    address 192.168.1.62
#    netmask 255.255.255.0
#    gateway 192.168.1.1
#    dns-nameservers 192.168.1.241 192.168.1.104
    slaves eno1 enp1s0f1

auto br0
iface br0 inet static
    address 192.168.1.62
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-namese...
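Two details stand out in the config above: bond-lacp-rate applies only to
802.3ad (mode 4), not to mode 6 (balance-alb), and Debian's ifupdown spells
the hash option bond-xmit-hash-policy. balance-alb also rewrites ARP replies,
which is a common source of grief under a bridge. A sketch of one layout that
typically works with VMs on Debian 9, assuming the switch ports are configured
as an LACP group (addresses carried over from the commented-out lines above):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 enp1s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-xmit-hash-policy layer3+4

    auto br0
    iface br0 inet static
        bridge_ports bond0
        address 192.168.1.62
        netmask 255.255.255.0
        gateway 192.168.1.1

The bond stays "inet manual" so that the bridge, not the bond, owns the
address the VMs share.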
2012 Sep 04
1
802.3ad + Centos 6 + KVM (bridging)
Hi all,
Does anyone have 802.3ad (mode 4) working on their CentOS 6 KVM setup?
Of course we are also bridging here.
- aurf
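The usual CentOS 6 pattern for this is to enslave the NICs to the bond, attach
the bond to the bridge, and keep the IP on the bridge; a minimal sketch (names
and the address are illustrative):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer2+3"
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10
    PREFIX=24

Each slave ifcfg then carries MASTER=bond0 and SLAVE=yes as in the examples
earlier in these results, and the KVM guests attach to br0.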