I'm just starting to take a look at guest networking performance and am
a little disappointed. I'm comparing two setups:

Host:  Windows Server 2008 R2 Hyper-V
Guest: CentOS 5.5 x86_64

Host:  CentOS 5.5 x86_64 kvm running libvirt
Guest: CentOS 5.5 x86_64

The guests are essentially identical except that I'm running the
Microsoft Linux Integration Components synthetic drivers on the
Windows-hosted VM. The libvirt setup uses bridged networking. Running
bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the
libvirt-hosted guest get between 16% and 35% of the performance of the
Hyper-V guest. Is this expected? Is there anything I can do to increase
the network performance of the kvm guest?

--
Orion Poplawski
Technical Manager                     303-415-9701 x222
NWRA/CoRA Division                    FAX: 303-415-9702
3380 Mitchell Lane                    orion at cora.nwra.com
Boulder, CO 80301                     http://www.cora.nwra.com
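For context, a benchmark like the one described could be reproduced
along roughly these lines inside each guest; the server name, export
path, mount point, file size, and user are illustrative placeholders,
not the exact commands used:

  # mount the NFS export inside the guest (server and paths are guesses)
  mount -t nfs fileserver:/export/scratch /mnt/nfs

  # run bonnie++ against the mount; -s is the test file size in MB
  # (pick at least twice the guest's RAM so caching doesn't mask I/O),
  # -u is the user to run the test as
  bonnie++ -d /mnt/nfs -s 4096 -u nobody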
On 02/02/2011, at 4:39 AM, Orion Poplawski wrote:
> I'm just starting to take a look at guest networking performance and
> am a little disappointed. I'm comparing two setups:
>
> Host:  Windows Server 2008 R2 Hyper-V
> Guest: CentOS 5.5 x86_64
>
> Host:  CentOS 5.5 x86_64 kvm running libvirt
> Guest: CentOS 5.5 x86_64
>
> The guests are essentially identical except that I'm running the
> Microsoft Linux Integration Components synthetic drivers on the
> Windows-hosted VM. The libvirt setup uses bridged networking. Running
> bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the
> libvirt-hosted guest get between 16% and 35% of the performance of
> the Hyper-V guest. Is this expected? Is there anything I can do to
> increase the network performance of the kvm guest?

Hi Orion,

Just as an initial question, is the CentOS 5.5 guest using the VirtIO
network drivers? It's been ages since I used CentOS 5.x, so I don't
remember whether they're the default or not. That's the initial "big
performance boost" thing that's needed over the emulation-type drivers.

To check, take a look at the XML definition for the guest and find the
network interface. There should be an element there called "model", and
it will contain the type of network card being emulated. If it's
anything other than "virtio", then an emulated network driver is being
used (not real fast).

I'd give you a direct URL to the XML reference, but ironically the
libvirt.org server appears to be offline at the moment (doesn't happen
very often, thankfully!). Heh.

Hope that helps.

Regards and best wishes,

Justin Clift
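A minimal sketch of the check Justin describes, assuming the guest is
defined in libvirt under the placeholder name "centos55":

  # dump the guest definition and look at its network interface section
  virsh dumpxml centos55 | grep -A4 '<interface'

  # an emulated NIC shows up as something like:
  #   <interface type='bridge'>
  #     <source bridge='br0'/>
  #     <model type='e1000'/>
  #   </interface>

  # to switch to virtio, run "virsh edit centos55", change the model
  # element to <model type='virtio'/>, then shut the guest down fully
  # and start it again so the new device is used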
On 2/1/2011 12:39 PM, Orion Poplawski wrote:
> I'm just starting to take a look at guest networking performance and
> am a little disappointed. I'm comparing two setups:
>
> Host:  Windows Server 2008 R2 Hyper-V
> Guest: CentOS 5.5 x86_64
>
> Host:  CentOS 5.5 x86_64 kvm running libvirt
> Guest: CentOS 5.5 x86_64
>
> The guests are essentially identical except that I'm running the
> Microsoft Linux Integration Components synthetic drivers on the
> Windows-hosted VM. The libvirt setup uses bridged networking. Running
> bonnie++ on an NFS-mounted filesystem on each guest, I'm seeing the
> libvirt-hosted guest get between 16% and 35% of the performance of
> the Hyper-V guest. Is this expected? Is there anything I can do to
> increase the network performance of the kvm guest?

First thing is to stop unfairly comparing things that don't even claim
to do the same job. Hyper-V is a bare-metal hypervisor, while KVM is
not; Xen is. It would be closer, but still unfair, to compare QEMU or
VirtualBox on Windows against KVM. You didn't say what kind of
networking is being used with Hyper-V, but it's well understood that
bridging in Linux is easy to use and less efficient than routed
networking, VLANs, or macvlan.

So I guess the answer is: use Xen, and something other than bridging.

--
bkw
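A rough sketch of the macvtap ("direct") attachment mentioned above,
assuming the host's physical NIC is eth0; the device name, and pairing
it with the virtio model, are assumptions rather than tested config:

  <interface type='direct'>
    <source dev='eth0' mode='bridge'/>
    <model type='virtio'/>
  </interface>

Note that with macvtap in this mode the guest can generally reach other
machines on the network, but not the host itself, over that interface.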