Hi, I have a server using its 4 physical network interfaces bonded, with the bonding interface added to a bridge. The bridge has the IP, and three VMs are using the bridge. Two of the VMs are running Debian, one is running Windoze 7.

CPU load caused by the qemu-kvm processes is way higher than I'm happy with. One of the Debian machines causes around 22% while it's basically idle, the other one is around 3%, and the Windoze one is around 50%. They are all mostly idle. I can observe that when some network traffic is going on with the Windoze machine, it causes a CPU load of 200%. "Some network traffic" means that virt-top is showing 1M/2M RX/TX. Considering that the bonding interface is theoretically capable of handling 4Gbit full duplex, the 3Mbit are negligible. Virtio drivers are being used.

Currently, virt-top shows 1.9% CPU for the Windoze machine and top shows 22% CPU load for the corresponding qemu-kvm process. There is almost no network traffic. The VM has 4 CPUs assigned.

What may cause the high CPU load? Something must be seriously wrong when an idle machine causes 22% CPU load, and when the same machine, still basically idle apart from some network traffic, causes 200% CPU load on a Xeon 5690.
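The poster says virtio drivers are in use; one quick sanity check is to grep the guest's libvirt domain XML for the NIC model, since an emulated e1000 or rtl8139 NIC (rather than virtio) is a classic cause of high qemu-kvm CPU under even light traffic. A minimal sketch — the XML snippet below is illustrative, not taken from the mail; for a real guest, feed it the output of `virsh dumpxml <domain>`:

```shell
#!/bin/sh
# Illustrative libvirt <interface> snippet (assumption; substitute
# the output of `virsh dumpxml <domain>` for a real guest).
xml='<interface type="bridge">
  <source bridge="br0"/>
  <model type="virtio"/>
</interface>'

# An emulated model (e1000, rtl8139) instead of virtio would explain
# high host CPU whenever the guest pushes network traffic.
if printf '%s\n' "$xml" | grep -q 'model type="virtio"'; then
  echo "NIC model: virtio"
else
  echo "NIC model: emulated (check this)"
fi
```

Inside a Linux guest, `ethtool -i eth0` showing `driver: virtio_net` confirms the same thing from the other side.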
On 06/02/2017 04:32 AM, hw wrote:
> What may cause the high CPU load?

Offhand, it's hard to say. I don't see similar behavior. Can you post the libvirt XML definitions for those VMs somewhere? pastebin maybe? What's the output of "rpm -qa qemu\*"?
Gordon Messmer wrote:
> On 06/02/2017 04:32 AM, hw wrote:
>> What may cause the high CPU load?
>
> Offhand, it's hard to say. I don't see similar behavior. Can you post the libvirt XML definitions for those VMs somewhere? pastebin maybe? What's the output of "rpm -qa qemu\*"?

qemu-img-1.5.3-126.el7_3.6.x86_64
qemu-kvm-tools-1.5.3-126.el7_3.6.x86_64
qemu-kvm-common-1.5.3-126.el7_3.6.x86_64
qemu-kvm-1.5.3-126.el7_3.6.x86_64

The definitions aren't too long, I could post them here. There's nothing special about them AFAICT; I disabled USB and am trying to use kvmclock.

I'm finding the number of "Local timer interrupts" suspicious. From 'cat /proc/interrupts' for CPU0:

Tue Jun 6 20:01:53 CEST 2017: 217433736
Thu Jun 8 13:23:04 CEST 2017: 350172149

That seems like an awful lot of interrupts. Is this normal? There's also a huge amount of "Rescheduling interrupts" (102113959 earlier, now 209740910). The VMs are pinned to CPUs, so what's being rescheduled so frequently?

I can observe that CPU load of the host goes up with increases in network traffic of the guest. Is it a bad idea to assign a bonding interface to a bridge?

> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> https://lists.centos.org/mailman/listinfo/centos
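The two /proc/interrupts samples quoted above can be turned into a rate, which is more meaningful than the raw counters. A quick sketch using only the counts and timestamps from the mail:

```shell
#!/bin/sh
# Per-CPU local timer interrupt rate, computed from the two
# /proc/interrupts samples for CPU0 quoted in the mail.
t1=$(date -d '2017-06-06 20:01:53' +%s)
t2=$(date -d '2017-06-08 13:23:04' +%s)
delta_irq=$(( 350172149 - 217433736 ))
delta_sec=$(( t2 - t1 ))
echo "$(( delta_irq / delta_sec )) local timer interrupts/sec on CPU0"
```

This works out to roughly 890 interrupts per second, which is in the neighborhood of an ordinary 1000 Hz scheduler tick on a CPU that is not fully idle, so the raw count alone may not indicate a problem; comparing the rate against a known-idle host CPU would be more telling.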