This is a problem I've had on and off under CentOS 5 and CentOS 6, with both
xen and kvm. Currently, it happens consistently with kvm on 6.5, e.g. with
every kernel update. I *think* it generally worked fine with the 6.4 kernels.

There are 7 VMs running on a 6.5, x86_64, 8GB RAM host, each with 512MB RAM
and using the e1000 NIC. I picked this specific NIC because the default does
not allow reliable monitoring through SNMP (IIRC). The host has two bonded
NICs with br0 running on top.

When the host reboots, the VMs will generally hang while bringing up the
virtual NIC, and I need to go through several iterations of destroy/create,
for each VM, to get them running. They always hang here (copy&paste from the
console):

...
Welcome to CentOS
Starting udev: udev: starting version 147
piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
e1000 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11
e1000 0000:00:03.0: eth0: (PCI:33MHz:32-bit) 00:16:3e:52:e3:0b
e1000 0000:00:03.0: eth0: Intel(R) PRO/1000 Network Connection

Any suggestions on where to start looking?
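P.S. The destroy/create dance, for reference, is roughly the following
(a sketch only: the domain names are placeholders, it can take several
passes per VM, and under xen the equivalent was xm destroy / xm create):

  for vm in vm1 vm2 vm3 vm4 vm5 vm6 vm7; do
      virsh destroy "$vm"   # hard-stop the hung guest
      virsh start "$vm"     # boot it again and watch the console
  done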
NetworkManager and system-config-network do not really handle pair bonding
very well, so you've obviously set it up by hand. This is the point where
getting a paid RHEL license for your KVM server gets you direct access to
their support team.

In particular, post your bonding and bridge settings. I think the bond
should be set to "failover" (active-backup), not to one of the other, more
complex, load-balanced modes, to avoid confusing your switches and possibly
your KVM guests.

On Wed, Mar 26, 2014 at 7:20 AM, Lars Hecking
<lhecking at users.sourceforge.net> wrote:
> [original message quoted in full; snipped]
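For reference, a failover bond underneath br0 in the CentOS 6
network-scripts layout looks roughly like this (a sketch only: the device
names, the IP address, and the miimon value are assumptions, not taken from
your setup):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  # The bond carries no IP itself; it is enslaved to the bridge.
  DEVICE=bond0
  BONDING_OPTS="mode=active-backup miimon=100"
  BRIDGE=br0
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-br0
  # The bridge holds the host's IP address.
  DEVICE=br0
  TYPE=Bridge
  BOOTPROTO=static
  IPADDR=192.0.2.10        # placeholder address
  NETMASK=255.255.255.0
  DELAY=0
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  # (ifcfg-eth1 is identical apart from DEVICE)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes

mode=active-backup only ever sends on one slave at a time, so the switches
see ordinary single-port traffic and need no LACP or EtherChannel
configuration on their side.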
On Wed, Mar 26, 2014 at 11:20 AM, Lars Hecking
<lhecking at users.sourceforge.net> wrote:
> [original message quoted in full; snipped]
> Any suggestions on where to start looking?

Have you tried other virtual network cards, and/or a PV network device
(netback for Xen or virtio for KVM)? That would help you isolate whether the
problem is in the e1000 emulation (which I suspect is shared between KVM and
Xen) or in the host network configuration (which, it sounds like, is
non-trivial).

 -George
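For KVM the model swap is a one-line change in the domain XML (via
virsh edit <domain>); a minimal sketch of the relevant stanza, with the
bridge name and MAC taken from the log above and the rest assumed:

  <interface type='bridge'>
    <mac address='00:16:3e:52:e3:0b'/>
    <source bridge='br0'/>
    <model type='virtio'/>    <!-- was type='e1000' -->
  </interface>

The stock CentOS 6 guest kernel ships virtio_net, so the guest should need
no extra drivers. Since SNMP monitoring was the reason for picking e1000 in
the first place, it is worth re-checking whether virtio interfaces now show
up properly once you have ruled the hang in or out.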