
Displaying 20 results from an estimated 5000 matches similar to: "e1000 network interface takes a long time to set the link ready"

2018 May 10
0
Re: e1000 network interface takes a long time to set the link ready
Hi, try to use virtio instead... Regards, Daniel Romero P. On Thu, May 10, 2018 at 3:53 PM, Ihar Hrachyshka <ihrachys@redhat.com> wrote: > Hi, > > In kubevirt, we discovered [1] that whenever e1000 is used for vNIC, > link on the interface becomes ready several seconds after 'ifup' is > executed, which for some buggy images like cirros may slow down boot > process
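A minimal way to make the suggested e1000-to-virtio switch concrete, assuming a libvirt-managed guest (GUEST is a placeholder domain name; the exact interface element can differ per setup):

    # Show which NIC model the domain currently uses:
    virsh dumpxml GUEST | grep -A3 '<interface'
    # Expect a <model type='e1000'/> line inside the <interface> element.
    # Change it to <model type='virtio'/> with `virsh edit GUEST`, then
    # restart the guest so the new model takes effect.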
2018 May 10
0
Re: e1000 network interface takes a long time to set the link ready
On 05/10/2018 02:53 PM, Ihar Hrachyshka wrote: > Hi, > > In kubevirt, we discovered [1] that whenever e1000 is used for vNIC, > link on the interface becomes ready several seconds after 'ifup' is > executed What is your definition of "becomes ready"? Are you looking at the output of "ip link show" in the guest? Or are you watching "brctl
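For reference, one hedged way to measure "becomes ready" from inside the guest is to poll the kernel's operstate right after ifup (eth0 is a placeholder interface name):

    ifup eth0
    while [ "$(cat /sys/class/net/eth0/operstate)" != "up" ]; do
        sleep 0.1
    done
    echo "carrier up at $(date +%s)"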
2018 May 11
0
Re: e1000 network interface takes a long time to set the link ready
On Thu, May 10, 2018 at 11:53:23AM -0700, Ihar Hrachyshka wrote: > Hi, > > In kubevirt, we discovered [1] that whenever e1000 is used for vNIC, > link on the interface becomes ready several seconds after 'ifup' is > executed, which for some buggy images like cirros may slow down boot > process for up to 1 minute [2]. If we switch from e1000 to virtio, the > link is
2019 Aug 22
2
Re: RLIMIT_MEMLOCK in container environment
On Thu, Aug 22, 2019 at 2:24 AM Daniel P. Berrangé <berrange@redhat.com> wrote: > > On Wed, Aug 21, 2019 at 01:37:21PM -0700, Ihar Hrachyshka wrote: > > Hi all, > > > > KubeVirt uses libvirtd to manage qemu VMs represented as Kubernetes > > API resources. In this case, libvirtd is running inside an > > unprivileged pod, with some host mounts / capabilities
2019 Aug 22
2
Re: RLIMIT_MEMLOCK in container environment
On Thu, Aug 22, 2019 at 12:01 PM Laine Stump <laine@redhat.com> wrote: > > On 8/22/19 10:56 AM, Ihar Hrachyshka wrote: > > On Thu, Aug 22, 2019 at 2:24 AM Daniel P. Berrangé <berrange@redhat.com> wrote: > >> > >> On Wed, Aug 21, 2019 at 01:37:21PM -0700, Ihar Hrachyshka wrote: > >>> Hi all, > >>> > >>> KubeVirt uses
2019 Aug 21
2
RLIMIT_MEMLOCK in container environment
Hi all, KubeVirt uses libvirtd to manage qemu VMs represented as Kubernetes API resources. In this case, libvirtd is running inside an unprivileged pod, with some host mounts / capabilities added to the pod, needed by libvirtd and other services. One of the capabilities libvirtd requires for successful startup inside a pod is SYS_RESOURCE. This capability is used to adjust RLIMIT_MEMLOCK ulimit
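A quick way to see what limit libvirtd actually ends up with inside the pod, sketched under the assumption that util-linux and procps are present in the container image:

    # Locked-memory limit of the running libvirtd process:
    prlimit --memlock --pid "$(pidof libvirtd)"
    # Limit inherited by the pod's shell, for comparison:
    ulimit -l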
2019 Aug 24
1
Re: RLIMIT_MEMLOCK in container environment
On Fri, 23 Aug 2019, 0:27 Laine Stump, <laine@redhat.com> wrote: > (Adding Alex Williamson to Cc so he can correct any mistakes) > > On 8/22/19 4:39 PM, Ihar Hrachyshka wrote: > > On Thu, Aug 22, 2019 at 12:01 PM Laine Stump <laine@redhat.com> wrote: > >> > >> On 8/22/19 10:56 AM, Ihar Hrachyshka wrote: > >>> On Thu, Aug 22, 2019 at 2:24 AM
2018 Oct 12
2
How to explain this libvirt oddity w.r.t machine types?
Context: The baremetal host previously had QEMU 2.11. But I manually downgraded the QEMU version (via `dnf downgrade qemu-system-x86`); now it is at 2.10: $ rpm -q qemu-system-x86 qemu-system-x86-2.10.2-1.fc27.x86_64 The guest is offline. Let's see (in a couple of ways) what machine type it has while it is dormant: # virsh dumpxml cirros | grep -i machine= <type
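To cross-check the dormant guest's machine type against what the downgraded QEMU binary actually supports, something like the following works (the binary path is the usual Fedora location; adjust as needed):

    virsh dumpxml cirros | grep -i 'machine='
    /usr/bin/qemu-system-x86_64 -machine help | grep -i pc-i440fx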
2009 Jan 12
11
dedicated vnic IP zone not receiving unicast traffic
Hi Folks, I have a snv_105 sxce host that I just can't get to work as expected with crossbow + zones. My test host, persephone, is a virtual machine running under VMware ESXi 3.5, with 2 virtual network cards (e1000), all on the same flat network/subnet. It started life just 2 days ago with a clean install of snv_95, and I LUed to 105 yesterday. To rule out any sharing issue, the first
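A hedged first diagnostic from the global zone, assuming the VNIC is named vnic0 and the Crossbow dladm/snoop tooling shipped with snv_105:

    # Confirm the VNIC exists and note its MAC address:
    dladm show-vnic
    # Watch for unicast frames actually reaching it:
    snoop -d vnic0 -c 20 not broadcast and not multicast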
2009 Jun 12
6
Duplicate packets when using aggregate datalinks on bge
I opened a bug report earlier today but it doesn't seem to have been added to the bugs database. I'm posting here in case one of the Crossbow developers might see it and confirm this behavior. Description Duplicate packets are generated whenever an aggregate is introduced into the network configuration. We've ruled out switch ports and physical bge interfaces as
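A small sketch for observing the duplication from the host, assuming the aggregation is named aggr1 (names and counts are illustrative):

    dladm show-aggr
    # Ping a remote host from another shell; duplicated echo replies will
    # show up twice in this capture if the aggregate is the culprit:
    snoop -d aggr1 -c 10 icmp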
2014 Mar 08
2
Syslinux EFI + TFTPBOOT Support
On 03/08/2014 10:06 PM, Gene Cumm wrote: >> Hi Gene, >> > Thanks. As you suggested, I did a test with 6.03-pre6, and I still got >> > the same issue. My client machine >> > still only shows: >> > ==================== >> > Getting cached packets >> > My IP is 192.168.120.1 >> > ==================== >> > The syslog log
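One hedged sanity check for a stall right after the "My IP is ..." line: syslinux.efi 6.x needs the matching ldlinux.e64 module from the same build next to it in the TFTP root, so a missing or mismatched module is a common cause (the path below is illustrative, not from the thread):

    ls -l /var/lib/tftpboot/efi64/
    #   syslinux.efi  ldlinux.e64  pxelinux.cfg/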
2011 Aug 27
6
Improve speed virtual machine
Hi, I have XEN 3.3 on Centos 5.4, and my network interface has a 1 Gbps link. I want to improve the speed of the network interface on the virtual machine from 100 Mbps to 1 Gbps. Does somebody have a link that shows how to do this? Thanks. -- *Bruno Steven - Systems Administrator* *LPIC-2 / MCSA-Windows 2003 / CompTIA Security+*
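A hedged starting point is to check what the guest's virtual NIC actually negotiates before changing anything (run inside the domU; eth0 is a placeholder name):

    ethtool eth0 | grep -E 'Speed|Duplex'

If the guest is HVM with the default emulated rtl8139 model, that model only reports 100 Mbps; switching to an emulated e1000 or, better, to paravirtualised netfront drivers is the usual route to gigabit, with the exact vif syntax depending on the toolstack.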
2007 Feb 07
4
Problem with 2.6.11.4 kernel and e1000 driver -Correction
1) I'm actually not building a custom kernel per se, just one from the standard 2.6.11.4 with our configuration. I may be adding a custom driver, but unless I can get the standard kernel to boot,... 2) I have been told that after 2.6.11.4, support for the Infiniband driver is dropped, and we need that. If this is not the case, I would be delighted to know that and proceed accordingly.
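For the build itself, a quick hedged check that the options in question are enabled in the configuration being used (symbol names as in vanilla 2.6-era trees):

    grep -E 'CONFIG_E1000|CONFIG_INFINIBAND' .config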
2018 Feb 06
2
Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
Hi everyone, I hope this is the correct list to discuss this issue; please feel free to redirect me otherwise. I have a nested virtualization setup that looks as follows: - Host: Ubuntu 16.04, kernel 4.4.0 (an OpenStack Nova compute node) - L0 guest: openSUSE Leap 42.3, kernel 4.4.104-39-default - Nested guest: SLES 12, kernel 3.12.28-4-default The nested guest is configured with
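For readers wanting to follow along, the trigger described above reduces to roughly the following on the host, with l0-guest as a placeholder domain name and a nested VM already running inside it:

    virsh managedsave l0-guest
    virsh start l0-guest
    # Watch the L0 guest's console/dmesg for the kernel BUG on resume.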
2014 Mar 08
4
Syslinux EFI + TFTPBOOT Support
On Mar 8, 2014 10:08 AM, "Gene Cumm" <gene.cumm at gmail.com> wrote: > > On Mar 8, 2014 9:27 AM, "Steven Shiau" <steven at nchc.org.tw> wrote: > > > > > > > > On 03/08/2014 10:06 PM, Gene Cumm wrote: > > >> Hi Gene, > > >> > Thanks. As you suggested, I did a test about 6.03-pre6, and I still got > >
2019 Aug 22
0
Re: RLIMIT_MEMLOCK in container environment
(Adding Alex Williamson to Cc so he can correct any mistakes) On 8/22/19 4:39 PM, Ihar Hrachyshka wrote: > On Thu, Aug 22, 2019 at 12:01 PM Laine Stump <laine@redhat.com> wrote: >> >> On 8/22/19 10:56 AM, Ihar Hrachyshka wrote: >>> On Thu, Aug 22, 2019 at 2:24 AM Daniel P. Berrangé <berrange@redhat.com> wrote: >>>> >>>> On Wed, Aug 21,
2019 Aug 22
0
Re: RLIMIT_MEMLOCK in container environment
On 8/22/19 10:56 AM, Ihar Hrachyshka wrote: > On Thu, Aug 22, 2019 at 2:24 AM Daniel P. Berrangé <berrange@redhat.com> wrote: >> >> On Wed, Aug 21, 2019 at 01:37:21PM -0700, Ihar Hrachyshka wrote: >>> Hi all, >>> >>> KubeVirt uses libvirtd to manage qemu VMs represented as Kubernetes >>> API resources. In this case, libvirtd is running
2020 Jan 17
3
Centos 8 and E1000 intel driver
Folks, I know that support for the network adaptors handled by the 'e1000' driver has been removed from the base distribution. However, I have exactly that controller (Broadcom Gigabit Ethernet PCI, not PCIe). Is there a way for me to add support for that on Centos 8.1? Perhaps a driver in an RPM package? Thanks, David
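Before hunting for a package, it is worth confirming what the card really is and which driver would bind it; a Broadcom PCI gigabit part is normally driven by tg3 rather than e1000 (the commands below are generic):

    lspci -nn | grep -i ethernet
    lsmod | grep -E 'e1000|tg3'

If the device does turn out to need a driver dropped from the EL8 kernel, third-party kmod packages such as those published by ELRepo are the usual route.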
2010 Dec 27
2
E1000 eth1 link flakiness - causes??
Have you experienced this? What's going on when this occurs? What do I need to do to keep it from occurring? Please advise. Thanks. Dec 4 10:18:17 localhost kernel: e1000: eth1 NIC Link is Down Dec 4 10:18:19 localhost kernel: e1000: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX Dec 4 10:18:21 localhost kernel: e1000: eth1 NIC Link is Down Dec 4 10:18:23 localhost kernel:
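Common first steps for a flapping e1000 link like the log above, with eth1 as in the messages (the forced-speed line is a test only and should be reverted if it does not help):

    ethtool eth1                          # current link state and negotiation
    ethtool -S eth1 | grep -iE 'err|crc'  # driver error counters
    ethtool -s eth1 speed 100 duplex full autoneg off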
2006 Jun 25
3
e1000 nic problem
Hi, I've been experiencing occasional network time-outs with the Intel Gigabit NICs (e1000) in Poweredge systems. I'm not sure if it's a hardware or software problem, but it occurs on different systems with Centos 4.2 and 4.3, and I was wondering if there's a workaround available, like compiling an updated e1000 module or something. kind regards, Geert
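A hedged diagnostic pass for this kind of periodic timeout (eth0 is a placeholder): capture the driver's messages around an event and try disabling TCP segmentation offload, which was a common e1000 workaround in that era.

    dmesg | grep -i e1000
    ethtool -k eth0          # show current offload settings
    ethtool -K eth0 tso off  # test with TSO disabled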