Philip Nelson
2010-Jan-31 04:14 UTC
[libvirt-users] poor network performance to one of two guests
G'day,

I have a host running two KVM guests. One of them gets very poor network
performance: testing with iperf between the host and guest, I get
~10 Mbit/sec to guest A but >400 Mbit/sec to guest B. Both guests are
using the same bridge.

Guest A:

    <interface type='bridge'>
      <mac address='54:52:00:75:24:91'/>
      <source bridge='br0'/>
    </interface>

Guest B:

    <interface type='bridge'>
      <mac address='54:52:00:3b:13:2a'/>
      <source bridge='br0'/>
    </interface>

The host and guests are Debian lenny; the host I have updated with some
packages from backports.org. The host is running 2.6.30 with qemu-kvm
0.12.2 and libvirt 0.7.5. It has 8 GB of RAM and a quad-core Xeon CPU.
Each guest is given one vcpu; guest A has 3 GB of memory, guest B 1 GB.

I have tried starting them in a different order so that one gets vnet0
and the other vnet1; it doesn't make a difference, it's always guest A
that is slow.

I'm a bit stuck here and don't know what else to try. Below is some
information for the host and guests; I'm not sure what might be helpful.

ifconfig output for eth1/br0/vnet0/vnet1 from the host:

eth1      Link encap:Ethernet  HWaddr 00:25:64:3b:65:de
          inet6 addr: fe80::225:64ff:fe3b:65de/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7828 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6664 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4034843 (3.8 MiB)  TX bytes:466838 (455.8 KiB)
          Interrupt:17

br0       Link encap:Ethernet  HWaddr 00:25:64:3b:65:de
          inet addr:192.168.108.1  Bcast:192.168.108.255  Mask:255.255.255.0
          inet6 addr: fe80::225:64ff:fe3b:65de/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:110047 errors:0 dropped:0 overruns:0 frame:0
          TX packets:111420 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5744145 (5.4 MiB)  TX bytes:870431296 (830.1 MiB)

vnet0     Link encap:Ethernet  HWaddr 72:48:bc:10:45:d4
          inet6 addr: fe80::7048:bcff:fe10:45d4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:107881 errors:0 dropped:0 overruns:0 frame:0
          TX packets:581468 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:7129781 (6.7 MiB)  TX bytes:873414210 (832.9 MiB)

vnet1     Link encap:Ethernet  HWaddr 36:20:fc:df:52:66
          inet6 addr: fe80::3420:fcff:fedf:5266/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7918 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21692 errors:0 dropped:0 overruns:16236 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:534843 (522.3 KiB)  TX bytes:32575099 (31.0 MiB)

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.0025643b65de       no              eth1
                                                        vnet0
                                                        vnet1

# lsmod | grep kvm
kvm_intel              47336  6
kvm                   159032  1 kvm_intel

ifconfig output from guest A:

eth0      Link encap:Ethernet  HWaddr 54:52:00:75:24:91
          inet addr:192.168.108.100  Bcast:192.168.108.0  Mask:255.255.255.0
          inet6 addr: fe80::5652:ff:fe75:2491/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21708 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8155 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:32576795 (31.0 MiB)  TX bytes:553329 (540.3 KiB)
          Interrupt:10

ifconfig output from guest B:

eth0      Link encap:Ethernet  HWaddr 54:52:00:3b:13:2a
          inet addr:192.168.108.101  Bcast:192.168.108.0  Mask:255.255.255.0
          inet6 addr: fe80::5652:ff:fe3b:132a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:582733 errors:0 dropped:18 overruns:0 frame:0
          TX packets:109328 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:865856730 (825.7 MiB)  TX bytes:7230359 (6.8 MiB)
          Interrupt:10 Base address:0xe000

Here is the guest A definition:

<domain type='kvm'>
  <name>guestA</name>
  <uuid>a32a9d3b-e5ae-5943-c6db-52bc118f99c6</uuid>
  <memory>3145728</memory>
  <currentMemory>3145728</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/guestA'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/guestA-data.img'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <mac address='54:52:00:75:24:91'/>
      <source bridge='br0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
  </devices>
</domain>

And guest B:

<domain type='kvm'>
  <name>guestB</name>
  <uuid>1816a68e-929c-0b36-41ac-a380a4062298</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/guestB'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/guestB-data.img'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <mac address='54:52:00:3b:13:2a'/>
      <source bridge='br0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
  </devices>
</domain>

Any help would be greatly appreciated.

--
Philip
Philip Nelson
2010-Jan-31 06:53 UTC
[libvirt-users] poor network performance to one of two guests
On Sun, Jan 31, 2010 at 03:14:52PM +1100, Philip Nelson wrote:
> G'day, I have a host running two kvm guests. One of them gets very poor
> network performance

I think I've figured this out. I created a third guest to test and with
iperf I was getting nearly 2 Gbit/sec. The only difference I found was
that its interface had <model type='virtio'/>, so I added that to guest A
and then got much better network performance. I'm guessing it was omitted
because I had used an older version of libvirt/virtinst when initially
creating the guest (whichever version comes with lenny). I still don't
know why one guest was so much slower than the other, but it seems like
it's all good now.

Cheers
--
Philip
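[For later readers: a sketch of what the fixed interface stanza for guest A would look like, reusing the MAC address and bridge from the original definition posted above. Without the <model> element, qemu-kvm falls back to an emulated NIC rather than the paravirtualized virtio driver.]

```xml
<!-- Interface definition with an explicit virtio model.
     MAC and bridge are taken from guest A's posted config;
     the <model> line is the addition that fixed throughput. -->
<interface type='bridge'>
  <mac address='54:52:00:75:24:91'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

The guest needs virtio-net driver support (standard in lenny's kernel) and must be shut down and restarted, not just rebooted, for the device change to take effect.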