Hi,

Netperf tells me that a KVM VM using a bridged connection to a system across
the Gigabit LAN sees, on average, about 1/10th of the bandwidth the host
sees. This is on very lightly loaded hosts, although with multiple low-use
VMs. I've tried adjusting the CPU throttling after seeing that mentioned as a
possible factor in slowing down a bridge, but it has had little if any effect
here.

Is there a good guide somewhere to what might make a difference with this? I
recall the kernel recently adding a new alternative for handling VM
interfaces, but can't recall what it was called. Would that be a marked
improvement? Is it supported by libvirt?

Thanks,
Whit
On Fri, Jun 01, 2012 at 05:15:35PM -0400, Whit Blauvelt wrote:
> Netperf tells me that a KVM VM using a bridged connection to a system across
> the Gigabit LAN sees on average about 1/10th of the bandwidth the host sees.

What NIC are you using? For example with the Intel 10 GigE cards (ixgbe) it
is essential to turn off LRO using ethtool when using bridges, otherwise the
performance sucks. And, surely you're using the paravirtualized virtio
interface in your guest?

> I recall the kernel recently adding a new alternative for handling VM
> interfaces, but can't recall what it was called. Would that be a marked
> improvement? Is it supported by libvirt?

Look for SR-IOV.
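For anyone hitting this later, a rough sketch of both checks. The interface
name eth0 and bridge name br0 are placeholders, not details of Whit's setup:

    # Turn off large receive offload on the host NIC feeding the bridge
    # (ixgbe example; substitute the real interface name)
    ethtool -K eth0 lro off
    # Verify the setting took effect
    ethtool -k eth0 | grep large-receive-offload

    # In the guest's libvirt XML, the bridged interface should use the
    # paravirtualized virtio model rather than an emulated NIC like e1000:
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>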
On Jun 1, 2012 2:17 PM, "Whit Blauvelt" <whit.virt at transpect.com> wrote:
>
> I recall the kernel recently adding a new alternative for handling VM
> interfaces, but can't recall what it was called. Would that be a marked
> improvement? Is it supported by libvirt?

Linux Bridge < macvtap < SR-IOV
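As for libvirt support: macvtap shows up in the guest XML as a "direct"
interface. A minimal sketch, assuming the host's physical NIC is eth0 (the
device name is a placeholder):

    <interface type='direct'>
      <source dev='eth0' mode='bridge'/>
      <model type='virtio'/>
    </interface>

One caveat worth knowing: with macvtap in bridge mode the guest can normally
reach the rest of the LAN but not the host it runs on, at least not over
that interface.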
On Tue, Jun 12, 2012 at 11:35:28AM +0400, Andrey Korolyov wrote:
> Just a stupid question: did you pin guest vcpus or NIC hw queues?
> Unpinned vm may seriously affect I/O performance when running on same
> core set as NIC (hwraid/fc/etc).

Thanks. No question is stupid. Obviously this shouldn't be so slow at VM I/O,
so I'm missing something. In my defense, there is no single coherent set of
documents on this stuff, unless those are kept in a secret place. It would be
a fine thing if a few of the people who know all the "obvious" stuff about
libvirt-based KVM configuration would collaborate and document it fully
somewhere.

When I Google "pinned vcpu" all the top responses are about Xen. I run KVM. I
find mention that "KVM uses the linux scheduler for distributing workload
rather than actually assigning physical CPUs to VMs," at
http://serverfault.com/questions/235143/can-i-provision-half-a-core-as-a-virtual-cpu.
Is that wrong?

Are you suggesting RAID will end up on one CPU core, and if the VM ends up on
the same core there's I/O trouble? I can see that for software RAID. But
we're running hardware RAID. Isn't that handled in the hardware of the RAID
controller? Isn't that the point of hardware RAID, to keep it off the CPU?

In any case the slow I/O is true of all the VMs, so presumably they're
properly spread out over CPU cores (of which there are more than VMs, and
each VM is configured to take only one), which would rule out the general
problem being the shared use of any one core by other system processes.

Whit
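P.S. For my own notes as I dig into this: from what I can find, vcpu pinning
for KVM guests is done through libvirt rather than by hand. A minimal sketch,
where the guest name "myguest" and the host core numbers are placeholders
rather than anything from our actual setup:

    # Pin vcpu 0 of the guest to host core 2, live and in the saved config
    virsh vcpupin myguest 0 2 --live --config

    # Or the equivalent in the guest's XML:
    <vcpu>1</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
    </cputune>

    # Pinning the NIC's hardware-queue interrupts is a separate step, done
    # through the IRQ affinity masks under /proc/irq/<n>/smp_affinity.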