search for: e1000_watchdog

Displaying 6 results from an estimated 6 matches for "e1000_watchdog".

2007 Apr 18
1
[Bridge] Unexpected behaviour
...g the e1000 driver. I tested everything at 100Mb and it worked fine. I moved the machine into production, eth1 plugging into a dumb 100Mb D-link switch, eth2 plugging into a shiny new Cisco 2950. eth2 connects fine, giving me messages such as: Apr 18 13:58:39 portcullis kernel: e1000: eth2: e1000_watchdog: NIC Link is Down Apr 18 13:58:56 portcullis kernel: e1000: eth2: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex Apr 18 13:58:56 portcullis kernel: br0: port 2(eth2) entering learning state Apr 18 13:59:11 portcullis kernel: br0: topology change detected, propagating Apr 18 13:59:11 portcull...
2007 Aug 30
3
machine with 2 ethernet cards e1000 and forcedeth
...~]# dmesg | grep eth forcedeth.c: Reverse Engineered nForce ethernet driver. Version 0.60. forcedeth: using HIGHDMA eth0: forcedeth.c: subsystem: 01458:e000 bound to 0000:00:07.0 e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection ADDRCONF(NETDEV_UP): eth1: link is not ready e1000: eth1: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready eth0: no IPv6 routers present eth1: no IPv6 routers present. However, sometimes it starts up correctly: [root@fsdsigns2 ~]# dmesg | grep eth forcedeth.c: Reverse Engineered nForce eth...
2007 Dec 06
6
DomU (Centos 5) with dedicated e1000 (intel) device dropping packets
...2006 Intel Corporation. PCI: Enabling device 0000:00:01.0 (0000 -> 0003) PCI: Setting latency timer of device 0000:00:01.0 to 64 e1000: 0000:00:01.0: e1000_probe: (PCI Express:2.5Gb/s:Width x1) xx:xx:xx:xx:xx:xx e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex NET: Registered protocol family 10 lo: Disabled Privacy Extensions IPv6 over IPv4 tunneling driver NET: Registered protocol family 5 eth0: no IPv6 routers present The interface is still up. I saw while googling that it could be a hardware problem but th...
2004 Sep 20
12
panic in e100_exec_cb()
With today's build, my domain 0 crashes during boot when it tries to bring eth0 up (it's an E100). cb->prev (eax) is NULL in e100_exec_cb() (e100.c:827). Just from code inspection, I don't see how this can be. e100_alloc_cbs() was just called, which looks like it should have correctly linked up all the cb->prev/cb->next pointers. It happens regardless of
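The crash hinges on the assumption that the command blocks (CBs) form a fully linked ring after allocation, so that cb->prev is never NULL on the submit path. As a rough illustration only (this is not the actual e100.c code; the struct fields and the alloc_cbs()/exec_cb() helpers below are simplified stand-ins), a driver would normally wire the prev/next pointers of a contiguous CB array like this, which is why a NULL cb->prev so soon after allocation is surprising:

/* Userspace sketch of a command-block ring, assuming the usual
 * array-linked-into-a-circular-list layout. Illustrative names only. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define CB_COUNT 64                 /* hypothetical ring size */

struct cb {
    struct cb *next;
    struct cb *prev;
    int in_use;
};

/* Link every element of a contiguous array into a ring, wrapping the
 * first and last entries onto each other. After this runs, no prev or
 * next pointer in the ring should be NULL. */
static void alloc_cbs(struct cb *cbs, int count)
{
    for (int i = 0; i < count; i++) {
        cbs[i].next = &cbs[(i + 1) % count];
        cbs[i].prev = &cbs[(i - 1 + count) % count];
        cbs[i].in_use = 0;
    }
}

/* Simplified submit path: claim the current CB and advance the cursor,
 * relying on cb->prev being valid much as the report describes for
 * e100_exec_cb(). A NULL prev here would trip the same kind of fault. */
static struct cb *exec_cb(struct cb **cb_to_use)
{
    struct cb *cb = *cb_to_use;

    assert(cb->prev != NULL);       /* the invariant the crash violates */
    cb->in_use = 1;
    *cb_to_use = cb->next;          /* advance to the next CB in the ring */
    return cb;
}

int main(void)
{
    struct cb *cbs = calloc(CB_COUNT, sizeof(*cbs));
    if (!cbs)
        return 1;

    alloc_cbs(cbs, CB_COUNT);

    struct cb *cursor = &cbs[0];
    for (int i = 0; i < 3; i++)
        exec_cb(&cursor);

    printf("submitted 3 CBs, cursor at index %ld\n", (long)(cursor - cbs));
    free(cbs);
    return 0;
}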
2010 Jun 03
2
Tracking down hangs
...[ 49.937739] bonding: bond0: Setting MII monitoring interval to 100. [ 49.937739] bonding: bond0: Setting up delay to 200. [ 49.937739] bonding: bond0: Setting down delay to 200. [ 49.978685] bonding: bond0: enslaving eth0 as a backup interface with a down link. [ 49.980917] e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX [ 50.014686] bonding: bond0: enslaving eth1 as a backup interface with a down link. [ 50.017352] e1000: eth1: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX [ 50.043091] bonding: bond0: link status up for interfac...
2004 Nov 10
5
etherbridge bottleneck
I ran some iperf tests today and it looks like the etherbridge is the limiting factor on throughput. Before using the bridge, I saw great throughput to the VMs: over 800 Mbps. With the bridge, the numbers are somewhere in the 400s. Is this the speed I can expect from the bridge? Is there some tuning I should try, or another way to get more bandwidth into the VMs? This is with xen-2.0, 2.4.27-xen0.