Displaying 20 results from an estimated 20000 matches similar to: "E1000 eth1 link flakiness - causes??"
2007 Aug 30
3
machine with 2 ethernet cards e1000 and forcedeth
I am using CentOS 5 x86_64 on an AMD64 X2 4200+. I am current on yum update.
My machine has two ethernet cards: e1000 (eth0) and forcedeth (eth1)
[root at fsdsigns2 ~]# more /etc/modprobe.conf
alias eth0 e1000
alias eth1 forcedeth
Sometimes on boot the forcedeth driver thinks it is eth0:
[root at fsdsigns2 ~]# dmesg | grep eth
forcedeth.c: Reverse Engineered nForce ethernet driver. Version 0.60.
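The usual fix for this kind of naming race on CentOS 5 is to pin interface names to MAC addresses with a udev rule, so it no longer matters which driver loads first. A minimal sketch follows; the MAC addresses are placeholders (take yours from `ip link` or `ifconfig -a`), the real file belongs at /etc/udev/rules.d/70-persistent-net.rules, and older udev versions use SYSFS{address} instead of ATTR{address}. The sketch writes to the current directory so it is safe to run.

```shell
# Sketch: pin names by MAC so the e1000 port is always eth0 and the
# forcedeth port always eth1. The MAC addresses are placeholders.
cat > 70-persistent-net.rules <<'EOF'
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:66", NAME="eth1"
EOF
grep -c 'NAME=' 70-persistent-net.rules   # prints 2
```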
2007 Apr 18
1
[Bridge] Unexpected behaviour
Hi, I've set up a bridge using the 2.6.11.6 kernel. The machine is
running Debian testing with three NICs in it. eth0 is a standard 100Mb
Intel NIC, eth1 and eth2 are both Intel gigabit cards using the e1000 driver.
I tested everything at 100Mb and it worked fine. I moved the machine into
production, eth1 plugging into a dumb 100Mb D-link switch, eth2 plugging
into a shiny new Cisco
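For a bridge like this, a common source of "unexpected behaviour" is spanning tree: a freshly added port sits in the listening/learning states for the forwarding delay (30 s by default) before it passes any traffic. A configuration sketch, assuming bridge-utils and root; the interface names are taken from the post:

```shell
# Configuration sketch (requires root): two-port bridge with the STP
# forwarding delay removed. Only drop the delay if the topology cannot
# contain loops.
brctl addbr br0
brctl addif br0 eth1
brctl addif br0 eth2
brctl setfd br0 0
ip link set br0 up
brctl show        # verify both ports are attached
```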
2016 May 10
1
weird network error
A previously rock-solid server of mine crashed last night; the
server was still running, but eth0, an Intel 82574L using the e1000e
driver, went down. The server has a Supermicro X8DTE-F (dual Xeon
X5650, yada yada). The server is a DRBD master, so that was the first
thing to notice the network issues. Just a couple of days ago I ran yum
update to the latest; I do this about once a month.
2007 Dec 06
6
DomU (Centos 5) with dedicated e1000 (intel) device dropping packets
Hello everybody,
I've finished with PCI export from Dom0 (Debian Etch) to the DomU, but now I
have a new problem, and a big one.
My ethernet card is dropping packets, but only after some time (I can't tell
how long).
It can work for a day (not in production, so not heavily tested) and then
all packets are dropped.
Look at the ifconfig output:
eth0 Link encap:Ethernet HWaddr
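When a card "works for a day and then drops everything", it helps to watch the drop counters over time rather than eyeballing ifconfig once. A small sketch that pulls the RX dropped count out of ifconfig-style text, suitable for periodic logging; the sample text below stands in for real `ifconfig eth0` output:

```shell
# Sketch: extract the RX dropped counter from ifconfig-style output.
# $sample stands in for the output of `ifconfig eth0`.
sample='eth0  Link encap:Ethernet  HWaddr 00:16:3e:aa:bb:cc
          RX packets:1000 errors:0 dropped:42 overruns:0 frame:0
          TX packets:900 errors:0 dropped:0 overruns:0 carrier:0'
printf '%s\n' "$sample" |
  awk -F'dropped:' '/RX packets/ {split($2, f, " "); print "rx_dropped=" f[1]}'
# prints: rx_dropped=42
```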
2018 May 10
6
e1000 network interface takes a long time to set the link ready
Hi,
In kubevirt, we discovered [1] that whenever e1000 is used for a vNIC,
the link on the interface becomes ready several seconds after 'ifup' is
executed, which for buggy images like cirros may slow down the boot
process by up to 1 minute [2]. If we switch from e1000 to virtio, the
link is brought up and ready almost immediately.
For the record, I am using the following versions:
- L0
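To put a number on the delay, one can poll the interface's sysfs operstate after ifup. A sketch, written to take the operstate path as an argument (normally /sys/class/net/eth0/operstate) so it can also be exercised against a plain file:

```shell
# Sketch: wait until an operstate file reads "up", up to a timeout in
# seconds; returns non-zero if the link never came up.
wait_for_link() {
    path=$1
    timeout=${2:-30}
    i=0
    while [ "$i" -lt "$timeout" ]; do
        [ "$(cat "$path" 2>/dev/null)" = "up" ] && return 0
        sleep 1
        i=$((i + 1))
    done
    return 1
}
```

Running `time wait_for_link /sys/class/net/eth0/operstate 60` right after ifup gives a rough measure of the e1000 link delay.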
2012 Oct 03
1
PCI Passthrough of NIC
Hello,
I have been using Xen on a Debian Lenny server for quite
some time. I decided to build a new Dom0 using identical hardware, but
the newest version of Xen from the Debian Squeeze repositories. I am
attempting to create a new DomU on the new host which is similar to an
existing DomU running on the older Lenny host. The DomU is a three NIC
firewall. Two of the NICs are virtualized. One NIC is a
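The usual shape of NIC passthrough on Xen of that era: hide the device from dom0 with pciback, then list it in the domU config. A sketch with a placeholder PCI address (substitute yours from lspci); the module name varies between the classic Lenny kernels (pciback) and newer pvops kernels (xen-pciback), and the driver to unbind depends on the NIC:

```shell
# Configuration sketch (requires root; 0000:05:00.0 and e1000e are
# placeholders -- use your own PCI address and driver).
modprobe pciback                                      # or xen-pciback
echo 0000:05:00.0 > /sys/bus/pci/drivers/e1000e/unbind
echo 0000:05:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:05:00.0 > /sys/bus/pci/drivers/pciback/bind
# and in the domU config file:
#   pci = [ '0000:05:00.0' ]
```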
2015 Jul 07
2
Problems with Samba-based Home-Directory
Am 05.07.2015 um 20:52 schrieb Gordon Messmer:
> On 07/05/2015 07:57 AM, Meikel wrote:
>> Jul 5 16:36:08 meikel-pc kernel: ADDRCONF(NETDEV_UP): eth0: link is not
>> ready
>> Jul 5 16:36:23 meikel-pc kernel: ADDRCONF(NETDEV_CHANGE): eth0: link
>> becomes ready
>>
>> It takes 15 seconds between the two messages until it becomes ready. I
>> have no idea
2009 Apr 21
2
tg3 BCM5755 intermittently stops working after upgrade to 5.3.
Dear All,
I am having a HP xw4400 with following ethernet controller
as reported by lspci
Broadcom Corporation NetXtreme BCM5755 Gigabit Ethernet PCI Express (rev 02)
This machine was running CentOS 5.2 without any problem. After
updating the machine with yum update on 8 April, it now shows
CentOS 5.3, and the machine intermittently stops communicating
2012 Aug 11
7
Eth1 problem on CentOS-6.3
I am trying to transport a dd image between two hosts over a
crossover-linked gigabit connection. Both hosts have an eth1 configured with a
non-routable IP address on a shared network. No other devices exist on
this link.
When transferring via sftp I received a stall warning. Checking the
logs I see this:
dmesg | grep eth
e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1)
00:1c:c0:f2:1f:bb
2006 Jan 26
5
hosts fail to negotiate 1000Mbps speed
I am trying to connect two workstations (CentOS 3&4) directly using a
straight through cat 5e cable with a crossover adapter on one of the
ends. Both hosts have gigabit-capable ethernet cards. According to
lspci host 1 has:
03:0e.0 Ethernet controller: Intel Corporation 82545EM Gigabit
Ethernet Controller (Copper) (rev 01)
and host 2 has:
05:00.0 Ethernet controller: Marvell Technology Group
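Worth knowing here: 1000BASE-T uses all four wire pairs and mandates automatic MDI/MDI-X, so two gigabit NICs should negotiate 1000 Mbps over a plain straight-through cable with no adapter at all; a 100 Mb-style crossover adapter that swaps only two pairs can itself prevent gigabit negotiation. A quick check sketch, assuming ethtool is installed:

```shell
# Sketch (requires root for -s): leave autonegotiation on and confirm
# what each end actually negotiated. Run on both hosts.
ethtool -s eth0 autoneg on
ethtool eth0 | grep -E 'Speed|Duplex|Link detected'
```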
2007 Dec 17
1
Bonding problem in CENTOS4
I use bonding under CentOS 4.5 x86_64.
I get these weird messages when I restart the network.
Do you have any ideas how to fix this?
(There is a similar bug for CentOS 5, http://bugs.centos.org/view.php?id=2404, but the author says that it worked for him in CentOS 4...)
Thanks
Vitaly
Dec 17 08:34:21 3_10 kernel: bonding: Warning: the permanent HWaddr of eth0 - 00:1A:64:0A:DC:9C - is still in
2010 Aug 20
2
Cannot set MTU != 1500 on Intel NIC
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi list,
I have a *very* strange problem; unfortunately it's kind of a show-stopper
for the deployment of the machine. :(
I have two Intel Gigabit Ethernet NICs on board (Supermicro-based
Server), quoting lspci (full output see at the end of the email):
0d:00.0 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
Controller
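One thing to rule out before blaming configuration: the e1000e driver refuses jumbo frames on some 82573 variants because of hardware errata, in which case no setting will take. A sketch to see whether the driver accepts a larger MTU at all (requires root; eth0 and 9000 are examples):

```shell
# Sketch: try to raise the MTU and read back what actually stuck. A
# rejection here is a driver/hardware limit, not a config error.
ip link set dev eth0 mtu 9000 || echo "driver rejected MTU 9000"
ip link show dev eth0 | grep -o 'mtu [0-9]*'
```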
2006 Apr 15
13
htb overrate with 2.6.16
Hi
Here is something that worked with 2.6.10-1.771_FC2smp and stopped
working when I upgraded to 2.6.16-1.2069_FC4smp.
These are fedora kernels and the network controller is an Intel Gbit
(e1000) running a 100 Mbps Full Duplex.
Don't know how or if this matters, but the 2.6.10 kernel has
CONFIG_X86_HZ=1000 and the 2.6.16 has CONFIG_HZ=250.
The idea is to just shape to, say, 2Mbit, a
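A minimal HTB setup of the kind described, as a sketch (device and rates are examples, requires root). With HZ=250 the old timer-derived defaults are coarser, so stating burst explicitly rather than relying on the defaults is a commonly suggested workaround for overrate:

```shell
# Configuration sketch: shape everything on eth0 to 2 Mbit with HTB,
# with an explicit burst instead of the auto-sized default.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 2mbit ceil 2mbit burst 15k
tc -s class show dev eth0     # watch the observed rate
```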
2009 Jul 29
3
Error message whil booting system
Hi,
When the system boots, it logs the error message "modprobe: FATAL: Module
ocfs2_stackglue not found". Some nodes reboot without any error
message.
-------------------------------------------------
Jul 27 10:02:19 alf3 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jul 27 10:02:19 alf3 kernel: Netfilter messages via NETLINK v0.30.
Jul 27 10:02:19 alf3 kernel:
2004 Nov 10
5
etherbridge bottleneck
I ran some iperf tests today and it looks like the etherbridge
is the limiting factor on throughput. In the beforetime, I saw great
throughput to the VMs; over 800 Mbps. With the bridge, the numbers
are in the 400s somewhere.
Is this the speed I can expect from the bridge?
Is there some tuning I should try, or another way to get more bandwidth
into the VMs?
This is with xen-2.0, 2.4.27-xen0
2019 Sep 15
2
nfsmount default timeo=7 causes timeouts on 100 Mbps
I can't explain why 700 msecs aren't enough to avoid timeouts in 100
Mbps networks, but my tests verify it, so I'm writing to the list to
request that you increase the default timeo to at least 30, or to 600
which is the default for `mount -t nfs`.
How to reproduce:
1) Cabling:
server <=> 100 Mbps switch <=> client
Alternatively, one can use a 1000 Mbps switch and
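For reference, timeo is measured in tenths of a second, so the reported default of 7 is a 0.7 s initial timeout, while timeo=600 is the 60 s default of `mount -t nfs` that the post mentions. A mount sketch with placeholder server and mount point:

```shell
# Configuration sketch: mount with an explicit retransmit timeout.
# server:/export and /mnt are placeholders.
mount -t nfs -o timeo=600,retrans=2 server:/export /mnt
```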
2014 Jun 17
1
CentOS 6 - Ethernet Bond Errors, 1 per frame
# modinfo ixgbe
filename:
/lib/modules/2.6.32-431.el6.x86_64/kernel/drivers/net/ixgbe/ixgbe.ko
version: 3.15.1-k
license: GPL
description: Intel(R) 10 Gigabit PCI Express Network Driver
author: Intel Corporation, <linux.nics at intel.com>
srcversion: B390E9D9904338B52C2E361
I have updated this to 3.18.7-1 as well, same results
# ifconfig bond1 |grep error
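With one error per frame, the first question is which counter is climbing and on which slave; CRC or length errors on a single slave usually point at a cable or switch port rather than the ixgbe driver. A sketch that filters an `ethtool -S` dump down to the non-zero error counters; the sample text stands in for real `ethtool -S eth2` output:

```shell
# Sketch: print only the non-zero error counters from ethtool -S style
# output. $sample stands in for the output of `ethtool -S eth2`.
sample='     rx_packets: 1000
     rx_crc_errors: 1000
     rx_length_errors: 0'
printf '%s\n' "$sample" |
  awk -F': ' '/error/ && $2 != 0 {gsub(/^ +/, "", $1); print $1 "=" $2}'
# prints: rx_crc_errors=1000
```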
2010 Aug 19
1
dmesg- bnx2i: iSCSI not supported, dev=eth0
Getting "bnx2i: iSCSI not supported, dev=eth0" for all the NIC adapters
on all of my R710's running CentOS 5.5.
Here is a sample of the error messages:
bonding: Warning: either miimon or arp_interval and arp_ip_target module
parameters must be specified, otherwise bonding will not detect link
failures! see bonding.txt for details.
bonding: bond0: setting mode to active-backup
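The bonding warning quoted above is explicit about the fix: give the driver a link-monitoring method (miimon or arp_interval). The bnx2i lines are a separate notice, generally reported as harmless, that these NICs lack iSCSI offload. On CentOS 5 the bonding options go in /etc/modprobe.conf; the sketch writes an example copy to the current directory so it is safe to run (mode and interval are examples):

```shell
# Sketch: bonding options with miimon set so link failures are
# detected. The real lines belong in /etc/modprobe.conf.
cat > modprobe-bonding.example <<'EOF'
alias bond0 bonding
options bond0 mode=active-backup miimon=100
EOF
grep 'miimon' modprobe-bonding.example
# prints: options bond0 mode=active-backup miimon=100
```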
2011 Sep 23
4
Problems with Intel Ethernet and module e1000e
Hi all,
I'm facing a serious problem with the e1000e kernel module for Intel
82574L gigabit NICs on CentOS 6.
The device eth0 suddenly stops working, i.e. no more networking. When I
do ifconfig from the console I get
eth0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:EA
inet6 addr: fe80::225:90ff:fe50:8fea/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
2012 Sep 14
1
Bonding Eth interfaces - unexpected results
CentOS 6.2.........
Why do the physical interfaces report (correctly) that they negotiated 1000Mb/s, but when I `cat /proc/net/bonding/bond0` I get 100Mbps for the member interfaces, and when I run `mii-tool bond0` I get 10Mbps for the bond?
-----------------------------------------------------------------------------------------
ethtool em1
Settings for em1:
Supported ports: [
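Part of the answer here is tooling: mii-tool only reads the basic 10/100 MII registers, so it cannot report gigabit at all, and its 10 Mbps reading for the bond is expected noise. A cross-check sketch using the sources that do reflect what the PHY negotiated (interface names from the post; requires the interfaces to exist):

```shell
# Sketch: compare the speed sources. /sys and ethtool reflect the
# negotiated PHY speed; mii-tool cannot see gigabit.
cat /sys/class/net/em1/speed
ethtool em1 | grep Speed
grep -A 3 'Slave Interface' /proc/net/bonding/bond0
```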