
Displaying 20 results from an estimated 10000 matches similar to: "Intel 1000/PRO GT (e1000 driver) and "Detect Tx Unit"

2007 Nov 09
2
Intel 1000/PRO GT (e1000 driver) and "Detect Tx Unit Hang" error with 4GB RAM
My system configuration: ASUS M2A-VM motherboard; AMD Athlon 64 X2 4200+ 2.2 GHz; 4x A-DATA 1GB DDR2 800 memory; 2x Intel PRO/1000 GT Desktop Network Adapter; 2x Seagate Barracuda 250GB HD (RAID 1, software RAID); CentOS 5 x86_64; kernel 2.6.23 (custom built); e1000 driver version 7.6.9.2. The symptoms of this problem are outlined at: http://e1000.sourceforge.net/wiki/index.php/Issues
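A minimal sketch of how this symptom is usually confirmed before trying workarounds such as disabling offloads (interface name eth0 and the offload commands are assumptions; the wiki page above lists the authoritative workarounds):

```shell
# Common mitigations reported for "Detected Tx Unit Hang" (run as root,
# eth0 is a placeholder for your interface):
#   ethtool -K eth0 tso off        # disable TCP segmentation offload
#   ethtool -K eth0 tx off rx off  # disable checksum offload

# Confirm the symptom by counting hang reports in the kernel log:
count_tx_hangs() {
  grep -c 'Detected Tx Unit Hang'
}
# Example: dmesg | count_tx_hangs
```

The counting helper is just a convenience for watching whether a given workaround actually reduces the hang frequency over time.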
2020 Jan 17
3
Centos 8 and E1000 intel driver
Folks, I know that support for the network adaptors handled by the 'e1000' driver has been removed from the base distribution. However, I have exactly that controller (Broadcom Gigabit Ethernet PCI, not PCIe). Is there a way for me to add support for that on CentOS 8.1? Perhaps a driver in an RPM package? Thanks, David
2020 Jan 17
0
Centos 8 and E1000 intel driver
On Fri, Jan 17, 2020 at 3:16 PM david <david at daku.org> wrote: > > Folks > > I know that support for the network adaptors supported by the 'e1000' > driver have been removed from the base distribution. However, I have > exactly that controller (Broadcom Gigabit Ethernet PCI, not > PCIe). Is there a way for me to add support for that on Centos > 8.1?
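One sketch of the usual answer for drivers dropped from EL8: ELRepo republishes many of them as kmod packages. The package name kmod-e1000 and the release-RPM naming pattern below are assumptions; verify both on elrepo.org before relying on them:

```shell
# Build the elrepo-release RPM URL for a given EL major version
# (assumption: ELRepo's usual naming pattern holds).
elrepo_release_url() {
  printf 'https://www.elrepo.org/elrepo-release-%s.el%s.elrepo.noarch.rpm\n' "$1" "$1"
}

# Typical install sequence on CentOS 8 (run as root, names unverified):
#   dnf install "$(elrepo_release_url 8)"
#   dnf install kmod-e1000
#   modprobe e1000 && modinfo -F version e1000
```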
2020 Jan 18
0
Centos 8 and E1000 intel driver
At 03:27 PM 1/17/2020, Akemi Yagi wrote: >On Fri, Jan 17, 2020 at 3:16 PM david <david at daku.org> wrote: > > > > Folks > > > > I know that support for the network adaptors supported by the 'e1000' > > driver have been removed from the base distribution. However, I have > > exactly that controller (Broadcom Gigabit Ethernet PCI, not > >
2009 Mar 21
0
Bug#520629: xen-hypervisor-3.2-1-amd64: Intel e1000 network card emulation
Package: xen-hypervisor-3.2-1-amd64 Version: 3.2.1-2 Severity: normal Hi, Is the e1000 network card within domU no longer available? With model=e1000 a Xen domain refuses to start while with etch this was possible. There are some Windows guests on which I can not install gplpv, so with lenny they will just get 100mbit ethernet emulated. Cheers, Andreas -- System Information: Debian Release:
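For reference, a sketch of the HVM domU config line in question (the MAC and bridge name are placeholders); this is the syntax that reportedly worked under etch but refuses to start under the lenny packages above:

```
# Request an emulated Intel e1000 NIC for an HVM guest
vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0, model=e1000' ]
# Note: model= only applies to HVM guests using ioemu (qemu-dm) networking.
```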
2008 Dec 22
1
Supermicro and onboard Intel e1000 Ethernet controllers ... no longer an issue?
2010 May 20
7
[pv_ops] e1000e: "Detected Tx Unit Hang"
Hello, my server has massive problems with my NIC. I get: "Detected Tx Unit Hang". At the moment I use 2.6.31 from Jeremy; does anyone know if it's fixed in 2.6.32 or a newer tree? Regards, Stefan Kuhne
2008 Dec 10
0
domU, Failed to obtain physical IRQ, e1000 Intel NIC
Hello all. I've upgraded my drives, and in doing so loaded FC8. Latest kernel-xen.x86_64 (2.6.21.7-5) and xen.x86_64 (3.1.2-5) available, using 2 Intel NICs with the e1000 driver. All worked fine on FC5 with a custom FC5 domU with pcifront and NIC drivers in the kernel. Now, I'm unable to get the NICs to function inside my domU. They are visible in lspci and ifconfig. DomU dmesg
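A sketch of the Xen 3.1-era PCI passthrough configuration this setup relies on (the BDF 0000:00:19.0 is a placeholder; find yours with lspci). IRQ delivery failures like the one in the subject often trace back to the device not being properly hidden from dom0:

```
# domU config: hand the NIC to the guest
pci = [ '0000:00:19.0' ]

# dom0 side: hide the device so pciback can claim it, e.g. via the
# dom0 kernel command line (exact mechanism depends on the kernel):
#   pciback.hide=(0000:00:19.0)
```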
2007 Mar 02
2
Booting with PXE on Intel Pro GT
Hello all, I must admit that I am new to the Linux/PXE world, and so, am still learning a lot. I am currently in the 2nd semester of a senior project at Weber State University in Utah, trying like mad to get a diskless Beowulf Cluster up and running. I was given a grant to purchase 4 nodes running this equipment: Pentium D dual core Processor MSI 965 Motherboard Intel Pro/1000 GT PCI adapter
2018 Apr 16
1
Math kernel library from intel
I use Windows 10 (64-bit) with the latest R available. Intel has released its Math Kernel Library. Is it necessary to install it for data-driven work? Regards, Partha
2007 Dec 06
6
DomU (Centos 5) with dedicated e1000 (intel) device dropping packets
Hello everybody, I've finished with pci export from DomU to Dom0 (Debian Etch) but now I have a new problem, and a big one. My ethernet card is dropping packets after some time (I can't tell how long). It can work for a day (not in production so not hard tested) and then all packets are dropped. Look at the ifconfig output: eth0      Link encap:Ethernet  HWaddr
2005 May 11
3
problem with the pro/1000 driver for intel gigabit ethernet card
I encountered a problem with dom0 when rebooting and running an intel gigabit card using the pro/1000 driver. When the system tries to shut down I get the following output of an strace command : ......... 17:10:49.063616 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3 <0.000029> 17:10:49.063689 ioctl(3, 0x8913, 0xbffffc80) = 0 <0.000018> 17:10:49.063751 ioctl(3, 0x8914, 0xbffffc80) =
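For readers decoding that strace output: the two ioctl request numbers shown are the classic Linux socket ioctls for reading and writing interface flags (values from `<linux/sockios.h>`), i.e. the shutdown script is querying and then changing link state. A small lookup helper:

```shell
# Map the ioctl request numbers seen in the strace output to their names.
decode_ioctl() {
  case "$1" in
    0x8913) echo SIOCGIFFLAGS ;;  # get interface flags
    0x8914) echo SIOCSIFFLAGS ;;  # set interface flags (e.g. bring link down)
    *) echo unknown ;;
  esac
}
```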
2005 Oct 03
2
ethtool for e1000
I recently noticed that after starting xend, ethtool no longer works for my e1000 card. On my 2.X box, which is a P4, ethtool works after xend starts. Same version of e1000 on both boxes. The unstable box is a Tyan 2462 SMP, FC4 dom0; the 2.X box is a Dell P330 UP, CentOS 4.1 dom0. Until xend starts, ethtool is fine. In both setups I am using the e1000 as eth0. Regards, Ted
2006 Jun 25
3
e1000 nic problem
Hi, I've been experiencing occasional network time-outs with the Intel Gigabit NICs (e1000) in PowerEdge systems. I'm not sure if it's a hardware or software problem, but it occurs on different systems with CentOS 4.2 and 4.3, and I was wondering if there's a workaround available, like compiling an updated e1000 module or something. Kind regards, Geert
2018 May 10
0
Re: e1000 network interface takes a long time to set the link ready
On 05/10/2018 02:53 PM, Ihar Hrachyshka wrote: > Hi, > > In kubevirt, we discovered [1] that whenever e1000 is used for vNIC, > link on the interface becomes ready several seconds after 'ifup' is > executed What is your definition of "becomes ready"? Are you looking at the output of "ip link show" in the guest? Or are you watching "brctl
2011 Sep 23
4
Problems with Intel Ethernet and module e1000e
Hi all, I'm facing a serious problem with the e1000e kernel module for Intel 82574L gigabit NICs on CentOS 6. The device eth0 suddenly stops working, i.e. no more networking. When I do ifconfig from the console I get: eth0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:EA inet6 addr: fe80::225:90ff:fe50:8fea/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
2013 Jun 09
2
Intel 82579V ethernet nic using only one IRQ? (Dual TX RX queues)
Hello CentOS Community, I bought a NIC (using the Intel 82579V chipset) that supports dual RX and TX queues. My goal is to see and use two IRQs in /proc/interrupts and thus take advantage of 2 cores on my CPU. Instead I only see one IRQ being used, and hence only one CPU core. Please see below. Advice is appreciated! Alex
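One way to check what the kernel actually registered (interface name eth0 is an assumption): multiqueue NICs typically show several interrupt lines (eth0-rx-0, eth0-tx-0, ...), while a single-queue configuration shows just one. A small helper for counting them:

```shell
# Count interrupt lines registered for a NIC.
#   $1 = interface name, $2 = interrupts table (defaults to /proc/interrupts)
count_nic_irqs() {
  grep -c -- "$1" "${2:-/proc/interrupts}"
}
# Example: count_nic_irqs eth0
```

Note that seeing one line is not necessarily a bug: whether multiple queue IRQs appear depends on the driver exposing MSI-X vectors for that specific chipset, not just on the marketing specs.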
2018 May 11
0
Re: e1000 network interface takes a long time to set the link ready
On Thu, May 10, 2018 at 11:53:23AM -0700, Ihar Hrachyshka wrote: > Hi, > > In kubevirt, we discovered [1] that whenever e1000 is used for vNIC, > link on the interface becomes ready several seconds after 'ifup' is > executed, which for some buggy images like cirros may slow down boot > process for up to 1 minute [2]. If we switch from e1000 to virtio, the > link is
2018 May 10
0
Re: e1000 network interface takes a long time to set the link ready
Hi, try to use virtio instead... Atte. Daniel Romero P. On Thu, May 10, 2018 at 3:53 PM, Ihar Hrachyshka <ihrachys@redhat.com> wrote: > Hi, > > In kubevirt, we discovered [1] that whenever e1000 is used for vNIC, > link on the interface becomes ready several seconds after 'ifup' is > executed, which for some buggy images like cirros may slow down boot > process
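For a libvirt-managed guest, the suggested switch from emulated e1000 to virtio is a one-attribute change in the domain XML (the bridge name br0 below is a placeholder):

```xml
<!-- Replace model type='e1000' with type='virtio' in the NIC definition -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

The guest must have virtio-net drivers available (built into modern Linux kernels; Windows guests need the virtio-win drivers installed first).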
2017 Mar 30
2
2.6.0-28.el7_3.6.1 e1000 problem
Hello! We tried to move a Windows 2003 VM with the e1000 driver from a CentOS 7 host running qemu-kvm-0.12.1.2-2.491.el6_8.7.x86_64 to a CentOS 7 host with qemu-kvm-ev-2.6.0-28.el7_3.6.1.x86_64, and we got problems: TCP sessions, namely SMB connections, randomly drop. We didn't test previous qemu-rhev versions with this VM, so we don't know how it behaves there. Could you tell me, is this a known problem? Any