Displaying 14 results from an estimated 14 matches for "e1000_probe".
2007 Aug 30
3
machine with 2 ethernet cards e1000 and forcedeth
...e1000
alias eth1 forcedeth
sometimes on boot the forcedeth driver thinks it is eth0:
[root@fsdsigns2 ~]# dmesg | grep eth
forcedeth.c: Reverse Engineered nForce ethernet driver. Version 0.60.
forcedeth: using HIGHDMA
eth0: forcedeth.c: subsystem: 01458:e000 bound to 0000:00:07.0
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection
ADDRCONF(NETDEV_UP): eth1: link is not ready
e1000: eth1: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow
Control: RX/TX
ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
eth0: no IPv6 routers present
eth1: no IPv6 routers present
however, somet...
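A common way to keep the eth0/eth1 assignment from flipping between boots is to tie each name to the card's MAC address instead of relying on module load order. A minimal sketch of the usual CentOS/RHEL approach (the MAC addresses below are placeholders, not taken from this report; exact udev rule syntax varies by udev version):

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  HWADDR=00:11:22:33:44:55    # MAC of the e1000 port (placeholder)

  # /etc/sysconfig/network-scripts/ifcfg-eth1
  DEVICE=eth1
  HWADDR=00:11:22:33:44:66    # MAC of the forcedeth port (placeholder)

  # alternatively, a persistent-net udev rule:
  # SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"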
2007 Dec 06
6
DomU (Centos 5) with dedicated e1000 (intel) device dropping packets
...tes before, I realised
that it was down.
Dmesg output :
...
Intel(R) PRO/1000 Network Driver - version 7.1.9-k4-NAPI
Copyright (c) 1999-2006 Intel Corporation.
PCI: Enabling device 0000:00:01.0 (0000 -> 0003)
PCI: Setting latency timer of device 0000:00:01.0 to 64
e1000: 0000:00:01.0: e1000_probe: (PCI Express:2.5Gb/s:Width x1)
xx:xx:xx:xx:xx:xx
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
IPv6 over IPv4 tunneling driver
NET: Registered...
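To see where the packets are being dropped, it helps to compare the e1000 driver's own counters with the kernel's per-interface statistics; a quick check along these lines (interface assumed to be eth0, as in the log):

  ethtool -S eth0        # driver/NIC counters, e.g. rx_missed_errors, rx_no_buffer_count
  ip -s link show eth0   # kernel rx/tx error and drop counters
  cat /proc/net/dev      # same counters, one line per interface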
2005 Dec 31
1
RE: Intel Corporation 82573V Gigabit Ethernet Controller
...PRO/1000 Network Driver - version 6.0.54-k2-NAPI
> Copyright (c) 1999-2004 Intel Corporation.
> ACPI: PCI interrupt 0000:03:00.0[A] -> GSI 10 (level, low) -> IRQ 10
> PCI: Setting latency timer of device 0000:03:00.0 to 64
> divert: allocating divert_blk for eth0
> e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
> ACPI: PCI interrupt 0000:04:00.0[A] -> GSI 11 (level, low) -> IRQ 11
> PCI: Setting latency timer of device 0000:04:00.0 to 64
> e1000: 0000:04:00.0: e1000_probe: The EEPROM Checksum Is Not Valid
> e1000: probe of 0000:04:00.0 failed with er...
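When one port fails probe with an invalid EEPROM checksum, a rough first check (device names assumed, not from the post) is to confirm the device is still visible on the PCI bus and to compare against the EEPROM of the sibling port that did register:

  lspci -d 8086: -nn        # list Intel devices; is 0000:04:00.0 still present?
  ethtool -e eth0 | head    # dump the EEPROM of the working port for comparison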
2009 Jun 18
1
intel nic vanished with 5.3
Hi
I have boxes with a quad card that shows up with
e1000 e1000_probe: Intel(R) PRO/1000 Network Connection
However, since rebuilding a box from 4.7 to 5.3 this card has vanished.
I would have thought this card is pretty generic, so I don't believe
there are no drivers for it.
Any other thoughts? It does not show up at all in messages etc., and
ethtool knows noth...
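A few hedged starting points for a card that no longer probes after a rebuild (assuming the stock CentOS 5.3 e1000 driver; commands are illustrative):

  lspci -nn | grep -i ethernet   # is the quad-port card still visible on the PCI bus?
  lsmod | grep e1000             # did the e1000 module load at all?
  modinfo e1000 | grep 8086      # PCI device IDs this kernel's e1000 module claims

If the card's PCI ID is missing from the modinfo output, the 5.3 driver may simply not claim that device and a newer e1000/e1000e module would be needed.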
2008 Dec 10
0
domU, Failed to obtain physical IRQ, e1000 Intel NIC
...x: Unregistering netfilter hooks
audit(1228931989.175:2): selinux=0 auid=4294967295
input: PC Speaker as /class/input/input1
Intel(R) PRO/1000 Network Driver - version 7.3.20-k2-NAPI
Copyright (c) 1999-2006 Intel Corporation.
PCI: Enabling device 0000:00:00.0 (0000 -> 0003)
e1000: 0000:00:00.0: e1000_probe: (PCI-X:133MHz:64-bit)
00:11:xx:xx:xx:xx
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
PCI: Enabling device 0000:00:00.1 (0000 -> 0003)
e1000: 0000:00:00.1: e1000_probe: (PCI-X:133MHz:64-bit)
00:11:xx:xx:xx:xy
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection
device...
2008 Mar 27
0
Pci Export to get usb Dongle working under DomU (Xen 3.2)
...new driver hub
Intel(R) PRO/1000 Network Driver - version 7.1.9-k4-NAPI
Copyright (c) 1999-2006 Intel Corporation.
PCI: Enabling device 0000:00:01.0 (0000 -> 0003)
PCI: Setting latency timer of device 0000:00:01.0 to 64
USB Universal Host Controller Interface driver v3.0
e1000: 0000:00:01.0: e1000_probe: (PCI Express:2.5Gb/s:Width
x1) 00:13:72:0f:xx:xx
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
PCI: Enabling device 0000:00:00.0 (0000 -> 0001)
PCI: Setting latency timer of device 0000:00:00.0 to 64
uhci_hcd 0000:00:00.0: UHCI Host Controller
uhci_hcd 0000:00:00.0: new USB...
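For reference, the usual Xen 3.x route for handing a PCI device (here the USB controller) to a domU is to hide it from dom0 with pciback and list it in the guest config; a sketch with a placeholder bus address, since the dom0-side address of the controller is not shown above:

  # dom0 kernel command line (placeholder BDF):
  pciback.hide=(0000:00:1d.0)

  # domU configuration file:
  pci = [ '0000:00:1d.0' ]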
2006 Oct 16
1
Uneven CPU speed with CentOS 4.4 on a Mac Pro
Hi list,
I've recently managed to install CentOS 4.4 on an Apple Mac Pro.
Functionality-wise everything works great, but when trying to benchmark
the system I don't get stable runtimes; they differ by more than 30%.
For example, a benchmark run with our PDE-solving code takes between 500
and 800 s on a completely unloaded machine.
Suspicious kernel output (complete output attached):
2004 Jul 26
0
FW: IA64 test report: 2.6.8-rc1 /tiger 2004-7-20: Boot Hang!
...s a 16550A
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
loop: loaded (max 8 devices)
Intel(R) PRO/1000 Network Driver - version 5.2.52-k4
Copyright (c) 1999-2004 Intel Corporation.
ACPI: PCI interrupt 0000:01:00.0[A] -> GSI 18 (level, low) -> IRQ 51
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
ACPI: PCI interrupt 0000:12:01.0[A] -> GSI 120 (level, low) -> IRQ 59
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection
Ethernet Channel Bonding Driver: v2.6.0 (January 14, 2004)
bonding: Warning: either miimon or arp_interval and arp_ip_...
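The bonding warning at the end of that excerpt is the driver's standard complaint that no link-monitoring method was configured; the usual fix for kernels of that era is to pass miimon (or arp_interval/arp_ip_target) as module options, e.g. in /etc/modprobe.conf (values here are illustrative):

  alias bond0 bonding
  options bond0 miimon=100 mode=active-backup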
2010 Mar 20
1
Error: ramdisk
...(v 3.6.10)
Intel(R) PRO/1000 Network Driver - version 7.1.9-k4
Copyright (c) 1999-2006 Intel Corporation.
(XEN) PCI add device 00:03.0
ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
ACPI: PCI Interrupt 0000:00:03.0[A] -> Link [LNKC] -> GSI 11 (level, high) -> IRQ 11
e1000: 0000:00:03.0: e1000_probe: (PCI:33MHz:32-bit) 52:46:03:00:04:01
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
pcnet32.c:v1.32 18.Mar.2006 tsbogend@alpha.franken.de
e100: Intel(R) PRO/100 Network Driver, 3.5.10-k2-NAPI
e100: Copyright(c) 1999-2005 Intel Corporation
(XEN) PCI add device 00:08.0
ACPI: PCI Inte...
2005 Nov 08
2
Maybe a bug of xen
Hi!
I may have found a bug in Xen.
My system:
- domain0 - gentoo - xen-devel-3.0, kernel 2.6.12.5-r1
- domain1 - debian - kernel 2.6.12.5-r1 (2 interfaces: vif1.1 = eth0
(0.0.0.0), vif1.2 = eth1 (10.0.1.1 + gw 10.0.1.2))
Bridges:
xen-br0 (configured as 10.0.1.2) includes vif1.2 and vif0.0
xen-br1 (configured as 0.0.0.0) includes vif1.1 and peth0
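For reference, the bridge layout described above corresponds to roughly the following brctl setup (interface names and addresses taken from the post; this is a sketch of the described topology, not the poster's actual script):

  brctl addbr xen-br0
  brctl addif xen-br0 vif1.2
  brctl addif xen-br0 vif0.0
  ifconfig xen-br0 10.0.1.2 up

  brctl addbr xen-br1
  brctl addif xen-br1 vif1.1
  brctl addif xen-br1 peth0
  ifconfig xen-br1 0.0.0.0 up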
On domain1 I run a PPPoE server.
If I send a PPPoE request
2010 Jun 03
2
Tracking down hangs
...3510] ata38: SATA max UDMA/133 mmio m1048576 at 0xfe300000 port 0xfe334000 irq 68
[ 26.993510] ata39: SATA max UDMA/133 mmio m1048576 at 0xfe300000 port 0xfe336000 irq 68
[ 26.993510] ata40: SATA max UDMA/133 mmio m1048576 at 0xfe300000 port 0xfe338000 irq 68
[ 27.739967] e1000: 0000:07:01.0: e1000_probe: (PCI-X:133MHz:64-bit) 00:14:4f:21:19:f8
[ 27.778383] e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
[ 27.778399] ACPI: PCI Interrupt 0000:07:01.1[B] -> GSI 53 (level, low) -> IRQ 53
[ 27.964222] ata33: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 27.996239] ata3...
2005 Aug 31
0
Problems creating DomUs with large memory system/PAE enabled
...driver initialized: 16 RAM disks of 4096K size 1024 blocksize
loop: loaded (max 8 devices)
HP CISS Driver (v 2.6.6)
Intel(R) PRO/1000 Network Driver - version 6.0.54-k2
Copyright (c) 1999-2004 Intel Corporation.
ACPI: PCI Interrupt 0000:18:01.0[A] -> GSI 72 (level, low) -> IRQ 16
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
ACPI: PCI Interrupt 0000:18:01.1[B] -> GSI 73 (level, low) -> IRQ 17
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection
pcnet32.c:v1.30j 29.04.2005 tsbogend@alpha.franken.de
e100: Intel(R) PRO/100 Network Driver, 3.4.8-k2-NAPI
e100: Copyright(...
2004 Nov 10
5
etherbridge bottleneck
I ran some iperf tests today and it looks like the etherbridge
is the limiting factor on throughput. Previously, I saw great
throughput to the VMs: over 800 Mbps. With the bridge, the numbers
are somewhere in the 400s.
Is this the speed I can expect from the bridge?
Is there some tuning I should try, or another way to get more bandwidth
into the VMs?
This is with xen-2.0, 2.4.27-xen0
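For anyone trying to reproduce the comparison, a test along these lines is enough to see the bridge overhead (hostnames/IPs are placeholders):

  # in the VM (and separately on dom0, as a baseline):
  iperf -s

  # from another machine on the LAN:
  iperf -c <vm-ip> -t 30
  iperf -c <dom0-ip> -t 30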