search for: lacp

Displaying 18 results from an estimated 132 matches for "lacp".

2010 Feb 02
7
Help needed with zfs send/receive
...still read errors. I tried with mbuffer (which gives better performance), but it didn't get better. Today I tried with netcat (and mbuffer) and got better throughput, but it failed at 269GB transferred. The two machines are connected to the switch with 2x1GbE (Intel) joined together with LACP. The switch logs show no errors on the ports. kstat -p | grep e1000g shows one recv error on the sending side. I can't find anything in the logs which could give me a clue about what's happening. I'm running build 131. If anyone has the slightest clue of where I coul...
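
For reference, the kind of pipeline being described looks roughly like this; host names, dataset names, and buffer sizes are placeholders, and some netcat variants want -l -p <port> instead of -l <port>:

    # receiving side: listen, smooth the stream with mbuffer, receive
    nc -l 9090 | mbuffer -s 128k -m 1G | zfs receive tank/backup

    # sending side: stream the snapshot through mbuffer to the receiver
    zfs send tank/data@snap1 | mbuffer -s 128k -m 1G | nc recvhost 9090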
2017 Feb 12
3
NIC Stability Problems Under Xen 4.4 / CentOS 6 / Linux 3.18
...w weeks to report on any issues. I'm hoping >> something happened between 3.18 and 4.4 that fixed underlying problems. >> >>>> Did you ever try without MTU=9000 (default 1500 instead)? >>> >>> Yes, also with all sorts of configuration combinations like LACP rate >>> slow/fast, "options ixgbe LRO=0,0" and so on. No improvement. >> >> Alright, I'll assume that probably won't help then. I tried it on one >> box which hasn't had the issue again yet, but that doesn't guarantee >> anything. ...
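
For context, the two settings quoted above land in different places on a CentOS 6 style system; a minimal sketch, with interface and bond names as assumptions:

    # /etc/modprobe.d/ixgbe.conf -- disable LRO on both ports
    options ixgbe LRO=0,0

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (fragment)
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"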
2012 Aug 15
1
iscsi storage, LACP or Multipathing | Migration or rebuild?
Hi, I have one iSCSI storage array with 4 GBit NICs, of which currently only one is configured with an IP, and which is in productive use by one CentOS 6.3 server. Some research brought me to the idea of using NIC bonding or multipathing for that storage, but I have only done multipathing once, about four or five years ago :) Furthermore I can't find the ultimate answer (maybe there is not
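
For the multipath route, the usual CentOS pattern is one IP per initiator NIC, one iSCSI login per path, and dm-multipath merging the resulting devices; a sketch with a made-up target name and portal addresses:

    # log in to the same target over two separate portals
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m node -T iqn.2012-08.example:storage -p 192.168.10.1 --login
    iscsiadm -m node -T iqn.2012-08.example:storage -p 192.168.11.1 --login

    # dm-multipath then presents the two paths as a single device
    mpathconf --enable
    multipath -ll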
2013 Oct 03
1
ixgbe/ix sysctl missing in FreeBSD 9.2
Hello everyone, I am trying to tweak some of the sysctl tunables for the ix (ixgbe) driver in FreeBSD 9.2 since I am experiencing less than ideal performance and it seems like I can't find any: # sysctl -a | grep -i ixgbe device ixgbe I am running 9.2-RC4. Any input appreciated. Thanks, -- Rumen Telbizov Unix Systems Administrator <http://telbizov.com>
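
One plausible explanation, offered as an assumption rather than a verified answer: the driver attaches its devices as ix0, ix1, ..., so the per-device sysctls live under dev.ix and a grep for "ixgbe" never matches them:

    # per-device knobs and statistics are published under dev.ix.<unit>
    sysctl dev.ix.0 | head

    # boot-time tunables go in /boot/loader.conf; check ixgbe(4) on your
    # release for the names that are actually valid there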
2017 Jan 31
3
NIC Stability Problems Under Xen 4.4 / CentOS 6 / Linux 3.18
...ces with that 4.4 build over the next few weeks to report on any issues. I'm hoping something happened between 3.18 and 4.4 that fixed underlying problems. >> Did you ever try without MTU=9000 (default 1500 instead)? > > Yes, also with all sorts of configuration combinations like LACP rate > slow/fast, "options ixgbe LRO=0,0" and so on. No improvement. Alright, I'll assume that probably won't help then. I tried it on one box which hasn't had the issue again yet, but that doesn't guarantee anything. >> I am having certain issues on certain ha...
2014 Feb 05
4
Wait for network delay
I am running NUT as part of FreeNAS 9.2.0. I have looked through the manuals, but I don't see how to address my issue. The box has 4 NICs in a LACP LAGG group. It takes a while for LACP to finish negotiating, and NUT tries to start before packets are flowing. NUT will complain endlessly about communication errors and never establish SNMP communication with my APC UPS. If I stop NUT and restart it after the LACP negotiation has finished, everything wor...
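
A common workaround, sketched here as a generic shell wrapper rather than a FreeNAS-specific recipe, is to hold the NUT driver until the LAGG reports a port actively distributing:

    #!/bin/sh
    # wait until lagg0 has a port in COLLECTING,DISTRIBUTING state
    # (interface name and timeout are illustrative)
    t=0
    until ifconfig lagg0 | grep -q DISTRIBUTING; do
        sleep 2
        t=$((t + 2))
        [ "$t" -ge 60 ] && break   # give up after 60s and start anyway
    done
    upsdrvctl start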
2016 Jan 30
1
bonding (IEEE 802.3ad) not working with qemu/virtio
...orking with virtio network model. >>>>> >>>>> The only errors I see is: >>>>> >>>>> No 802.3ad response from the link partner for any adapters in the bond. >>>>> >>>>> Dumping the network traffic shows that no LACP packets are sent from the >>>>> host running with virtio driver, changing to for example e1000 solves >>>>> this problem >>>>> with no configuration changes. >>>>> >>>>> Is this a known problem? >>>>> >>...
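
The workaround described in the thread amounts to changing the guest NIC model in the libvirt domain XML; a minimal fragment, with the bridge name as a placeholder:

    <interface type='bridge'>
      <source bridge='br0'/>
      <!-- switching model from 'virtio' to 'e1000' makes LACPDUs flow -->
      <model type='e1000'/>
    </interface>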
2008 Feb 07
2
Lustre behaviour when multiple network paths are available?
...ment where there are multiple paths to the same destination of the same length (i.e. two paths, each one hop away), which path(s) will be used for sending and receiving data? I have my cluster configured with two OSTs with two GigE NICs in each. I am seeing identical performance metrics when I use LACP to aggregate, and when I use two separate network addresses to connect them (ditto on the client side). So what I'm wondering is if I've hit the peak performance of my disk array, or if Lustre is just using only one path. The numbers I'm seeing in both scenarios indicate...
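
One point worth adding: LACP hashes traffic per flow, so a single TCP connection never exceeds one member link, which would explain identical numbers in both setups. On Linux the hash can at least be widened to layer 3+4 so distinct flows spread across slaves; a sketch, assuming the stock bonding driver:

    # /etc/modprobe.d/bonding.conf -- hash on IP:port pairs so different
    # TCP flows can land on different slaves (still one link per flow)
    options bonding mode=802.3ad xmit_hash_policy=layer3+4 miimon=100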
2016 Jan 29
5
bonding (IEEE 802.3ad) not working with qemu/virtio
...> As subject says, 802.3ad bonding is not working with virtio network model. >>> >>> The only errors I see is: >>> >>> No 802.3ad response from the link partner for any adapters in the bond. >>> >>> Dumping the network traffic shows that no LACP packets are sent from the >>> host running with virtio driver, changing to for example e1000 solves >>> this problem >>> with no configuration changes. >>> >>> Is this a known problem? >>> >> [Including bonding maintainers for comments] ...
2017 Jan 27
5
NIC Stability Problems Under Xen 4.4 / CentOS 6 / Linux 3.18
...command line. Are there other kernel options that might be useful to try? > Are the devices connected to the same network infrastructure? There are two onboard NICs and two NICs on a dual-port card in each server. All devices connect to a Cisco switch pair in VSS and the links are paired in LACP. > There has to be something common. The NICs having issues are running a native VLAN, a tagged VLAN, iSCSI and NFS traffic, as well as some basic management stuff over SSH, and they are configured with an MTU of 9000 on the native VLAN. It's a lot of features, but I can't really turn...
2017 Feb 10
0
NIC Stability Problems Under Xen 4.4 / CentOS 6 / Linux 3.18
...ld over the next few weeks to report on any issues. I'm hoping > something happened between 3.18 and 4.4 that fixed underlying problems. > >>> Did you ever try without MTU=9000 (default 1500 instead)? >> >> Yes, also with all sorts of configuration combinations like LACP rate >> slow/fast, "options ixgbe LRO=0,0" and so on. No improvement. > > Alright, I'll assume that probably won't help then. I tried it on one > box which hasn't had the issue again yet, but that doesn't guarantee > anything. I was able to discover so...
2016 Mar 28
4
Network bond - one port goes down from time to time
Hi, maybe someone has an idea: We have three Supermicro servers with two 10Gb ports each, connected to 1Gb ports on a Cisco switch stack. All are on auto speed. I configured a LACP bond on both sides on all servers, first with Citrix XenServer. On one server eth0 goes down from time to time, sometimes within minutes; some days it stays up for hours. Two servers are fine; the bond is up for 24 days(!) now without any problem. Recently I installed CentOS 7.2 on that server in...
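
When one LACP member flaps like this, the bonding state file is the first place to look on the CentOS side; a quick check, assuming the bond is named bond0:

    # per-slave link failure counts, LACP state, and aggregator IDs
    cat /proc/net/bonding/bond0

    # kernel messages around the flap usually name the culprit
    dmesg | grep -i -e bond -e eth0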
2017 Feb 13
0
NIC Stability Problems Under Xen 4.4 / CentOS 6 / Linux 3.18
...any issues. I'm hoping >>> something happened between 3.18 and 4.4 that fixed underlying problems. >>> >>>>> Did you ever try without MTU=9000 (default 1500 instead)? >>>> >>>> Yes, also with all sorts of configuration combinations like LACP rate >>>> slow/fast, "options ixgbe LRO=0,0" and so on. No improvement. >>> >>> Alright, I'll assume that probably won't help then. I tried it on one >>> box which hasn't had the issue again yet, but that doesn't guarantee >>...
2016 Apr 06
3
KVM Virtualization Network VLAN CentOS7
I'm configuring my KVM host, and in my network configuration I have 4 network cards: 2 NICs = teaming0 for management, with an access port configured on the switch side; 2 NICs = teaming1 for guest VM data ports. The switch is configured for LACP with a trunk allowing VLANs 10,20,30,40,50, and in CentOS 7 I configured VLAN 10 and VLANs 20,30,40,50. I'm sure it's already working because I tried one VLAN and ping was successful. My question is: can I assign the team1_vlan10, team1_vlan20.. and so on directly for use in my gue...
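
Whether the team1_vlanN devices can be handed to guests directly depends on how they are exposed; the common pattern is one bridge per VLAN on top of the team, sketched here with made-up names using iproute2 only:

    # VLAN 10 on top of the team, then a bridge for guests to attach to
    ip link add link team1 name team1.10 type vlan id 10
    ip link add br-vlan10 type bridge
    ip link set team1.10 master br-vlan10
    ip link set team1.10 up
    ip link set br-vlan10 up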
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour. Both are CPU-limited long before the 10g link is. I've also tried mbuffer, but I get broken pipe errors part way through the transfer. I'm open to ideas for faster ways to either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp arcfour the gz file to the
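
The pipeline being described, written out with placeholder dataset and host names; a netcat variant avoids the intermediate file entirely (note arcfour is long deprecated and absent from modern OpenSSH):

    # compressed file, then scp with a cheap cipher
    zfs send tank/fs@snap | pigz > /tmp/snap.gz
    scp -c arcfour /tmp/snap.gz remote:/tmp/

    # streaming alternative; on the receiver run:
    #   nc -l 9090 | pigz -d | zfs receive tank/fs
    zfs send tank/fs@snap | pigz | nc remote 9090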
2017 Jan 30
1
NIC Stability Problems Under Xen 4.4 / CentOS 6 / Linux 3.18
...h6, disabling it [...] repeated every second or so. >> Are the devices connected to the same network infrastructure? > > There are two onboard NICs and two NICs on a dual-port card in each > server. All devices connect to a Cisco switch pair in VSS and the links > are paired in LACP. We've been experiencing ixgbe stability issues on CentOS 6.x with various 3.x kernels for years, with different ixgbe driver versions, and to date the only way to completely get rid of the issue was to switch from Intel to Broadcom. Just like in your case, the problem pops up randomly and...
2023 Apr 17
1
[Bug 5124] Parallelize the rsync run using multiple threads and/or connections
https://bugzilla.samba.org/show_bug.cgi?id=5124 --- Comment #12 from Paulo Marques <paulo.marques at bitfile.pt> --- Using multiple connections also helps when you have LACP network links, which are relatively common in data center setups for both redundancy and increased bandwidth. If you have two 1Gbps links aggregated, rsync can only use 1Gbps, but it could use 2Gbps if it made several connections from different TCP ports. -- You are receiving t...
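
Until rsync grows such an option, the usual workaround is several rsync processes over disjoint subtrees, so LACP can hash the flows onto different links; a rough sketch with placeholder paths:

    # one rsync per top-level directory, up to four in flight
    ls /data | xargs -P4 -I{} rsync -a /data/{} remote:/backup/{}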
2011 Nov 14
1
bond of bonds
Hi, I am trying to use link aggregation with redundancy between switches that do not support SMLT. I have 4 network ports. The first two are connected to a switch with LACP/LAG enabled. The third and fourth ports connect to another switch with another LAG group. I was thinking of creating two mode 4 bonds and bonding those bonds into a mode 1 (active/passive) bond. But it seems this is not supported yet in the kernel (?). How do you handle this kind of situation? (And yes, witho...
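
The layout being attempted, spelled out as config fragments; as the poster suspects, the kernel may refuse to enslave one bond to another, so treat this purely as an illustration of the idea:

    # ifcfg-bond0 / ifcfg-bond1: one LACP bond per switch
    BONDING_OPTS="mode=802.3ad miimon=100"

    # ifcfg-bond2: the intended active/passive bond on top, with bond0
    # and bond1 as its slaves -- the step the kernel reportedly rejects
    BONDING_OPTS="mode=active-backup miimon=100"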