Hello,

I have a problem with poor network performance on libvirt with qemu and openvswitch. I'm currently using libvirt 1.3.1, qemu 2.5 and openvswitch 2.6.0 on Ubuntu 16.04. My connection diagram looks like this:

+------------------+   +---------------------------+   +---------------------------+
|        VM        |   |        OVS bridge         |   |       Net namespace       |
|                  |   |                           |   |                           |
|  +---------+     |   |   +---------+             |   |   +---------+             |
|  | tap dev +---------+   | veth A  +---------------------+ veth B  |             |
|  +---------+     |   |   +---------+             |   |   +---------+             |
|                  |   |                           |   |                           |
| iperf -s<-------------------------------------------------+iperf -c              |
|                  |   |                           |   |                           |
+------------------+   +---------------------------+   +---------------------------+

I don't have any QoS configured with tc on any interface. When I run this iperf test I get only about 150 Mbps, while IMHO it should be somewhere around 20-30 Gbps. Another strange thing is that if I create more such VMs (each connected in the same way, with its own OVS bridge and its own namespace), the bandwidth of each of them drops further, and it looks to me like the aggregate bandwidth is then about 1 Gbps (30 VMs, each getting roughly 30 Mbps in the same test).

When I removed the VM and added the "tap dev" as an internal port in OVS and ran the same test, I got about 30 Gbps. I have no idea what could be wrong here. Maybe some of you have run into such problems before?

One more thing: on a different host with Ubuntu 14.04, OVS 2.0.2, libvirt 1.3.1 and qemu 2.3 I don't have this problem. The kernel on both hosts is 4.4.0 (from the Ubuntu repo).

Pozdrawiam / Best regards
Sławek Kapłoński
slawek@kaplonski.pl
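For reference, the test and the QoS check described above boil down to roughly the following commands; the namespace, interface and address names used here are only placeholders, not the actual names from this setup:

    # placeholder names: tap0/vethA/vethB, testns and 192.0.2.10 are examples
    # confirm that no tc qdisc/QoS is configured on the interfaces involved
    tc qdisc show dev tap0
    tc qdisc show dev vethA
    tc qdisc show dev vethB

    # inside the VM: start the iperf server
    iperf -s

    # from the network namespace: run the client against the VM's address
    ip netns exec testns iperf -c 192.0.2.10 -t 30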
On 05/12/2017 11:02 AM, Sławomir Kapłoński wrote:
> Hello,
>
> I have a problem with poor network performance on libvirt with qemu and openvswitch. I'm currently using libvirt 1.3.1, qemu 2.5 and openvswitch 2.6.0 on Ubuntu 16.04. My connection diagram looks like this:
>
> [connection diagram snipped]
>
> I don't have any QoS configured with tc on any interface. When I run this iperf test I get only about 150 Mbps, while IMHO it should be somewhere around 20-30 Gbps.

There could be a lot of stuff that is suboptimal here.

Firstly, you should pin your vCPUs and guest memory. Then, you might want to enable multiqueue for the tap device (that way packet processing can be split across multiple vCPUs). You also want to make sure you're not overcommitting the host.

BTW: you may also try setting the 'noqueue' qdisc on the tap device (if supported by your kernel). Also, the guest is of type KVM, not plain qemu, right? And you're using the virtio model for the VM's NIC?

Michal
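For illustration, the pinning and multiqueue suggestions above might look roughly like the following in the domain XML; the CPU numbers, queue count, bridge and device names are only example values, not taken from this thread:

    <domain type='kvm'>
      <!-- example pinning: the vCPU count, host CPUs and NUMA node are placeholders -->
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='3'/>
        <vcpupin vcpu='2' cpuset='4'/>
        <vcpupin vcpu='3' cpuset='5'/>
      </cputune>
      <numatune>
        <memory mode='strict' nodeset='0'/>
      </numatune>
      <!-- ... other domain configuration ... -->
      <interface type='bridge'>
        <source bridge='ovsbr0'/>
        <virtualport type='openvswitch'/>
        <model type='virtio'/>
        <!-- multiqueue virtio-net: one queue per vCPU -->
        <driver name='vhost' queues='4'/>
      </interface>
    </domain>

Inside the guest, the extra queues usually still have to be enabled, e.g. with 'ethtool -L eth0 combined 4', and on the host side the 'noqueue' qdisc can be set with something like 'tc qdisc replace dev tap0 root noqueue' on kernels that support it.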
Hello,

I have no queue configured on the tap and veth devices, the guest type is of course KVM, and I'm using the virtio model for the VM's NIC.

What we found is that on Xenial (where performance is poor) the ovs-vswitchd process is using 100% CPU during the test, and there are messages like this in the OVS logs:

2017-05-12T14:22:04.351Z|00125|poll_loop|INFO|wakeup due to [POLLIN] on fd 149 (AF_PACKET(tap27903b5e-06)(protocol=0x3)<->) at ../lib/netdev-linux.c:1139 (86% CPU usage)

An identical setup (with the same versions of OVS, libvirt, qemu and kernel) works properly on Trusty.

Pozdrawiam / Best regards
Sławek Kapłoński
slawek@kaplonski.pl

> On 15.05.2017, at 08:27, Michal Privoznik <mprivozn@redhat.com> wrote:
>
> On 05/12/2017 11:02 AM, Sławomir Kapłoński wrote:
>> Hello,
>>
>> I have a problem with poor network performance on libvirt with qemu and openvswitch. I'm currently using libvirt 1.3.1, qemu 2.5 and openvswitch 2.6.0 on Ubuntu 16.04.
>>
>> [connection diagram snipped]
>>
>> I don't have any QoS configured with tc on any interface. When I run this iperf test I get only about 150 Mbps, while IMHO it should be somewhere around 20-30 Gbps.
>
> There could be a lot of stuff that is suboptimal here.
>
> Firstly, you should pin your vCPUs and guest memory. Then, you might want to enable multiqueue for the tap device (that way packet processing can be split across multiple vCPUs). You also want to make sure you're not overcommitting the host.
>
> BTW: you may also try setting the 'noqueue' qdisc on the tap device (if supported by your kernel). Also, the guest is of type KVM, not plain qemu, right? And you're using the virtio model for the VM's NIC?
>
> Michal
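As an illustration of how one might dig further into the ovs-vswitchd CPU usage reported above: the AF_PACKET socket in that poll_loop message suggests the tap device's traffic may be handled by ovs-vswitchd in userspace rather than by the kernel datapath, and a few standard read-only OVS commands can help confirm where the packets are actually flowing:

    # these commands only inspect state, they change no configuration

    # bridge and port layout as configured in the OVS database
    ovs-vsctl show

    # which datapath ovs-vswitchd has attached each port to
    ovs-appctl dpif/show

    # ports and flows known to the kernel datapath
    ovs-dpctl show
    ovs-dpctl dump-flows

    # internal counters that can hint at heavy userspace processing
    ovs-appctl coverage/show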