Tomasz Chmielewski
2006-Nov-30 14:08 UTC
[Xen-users] big latency, packet losses with HVM guests
Networking in Windows (2003 R2) looks a bit problematic:

1. Latency is very big - this is measured from domU, there is almost no load
on dom0 and domU (Windows):

# ping 192.168.111.186
PING 192.168.111.186 (192.168.111.186) 56(84) bytes of data.
64 bytes from 192.168.111.186: icmp_seq=1 ttl=128 time=4.06 ms
64 bytes from 192.168.111.186: icmp_seq=2 ttl=128 time=4.82 ms
64 bytes from 192.168.111.186: icmp_seq=3 ttl=128 time=6.25 ms
64 bytes from 192.168.111.186: icmp_seq=4 ttl=128 time=7.70 ms
64 bytes from 192.168.111.186: icmp_seq=5 ttl=128 time=9.14 ms
64 bytes from 192.168.111.186: icmp_seq=6 ttl=128 time=0.580 ms
64 bytes from 192.168.111.186: icmp_seq=7 ttl=128 time=2.01 ms
64 bytes from 192.168.111.186: icmp_seq=8 ttl=128 time=3.49 ms
64 bytes from 192.168.111.186: icmp_seq=9 ttl=128 time=4.89 ms
64 bytes from 192.168.111.186: icmp_seq=10 ttl=128 time=6.33 ms

--- 192.168.111.186 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.580/4.931/9.144/2.437 ms

For non-HVM guests I have latency of about 0.074 ms.

2. There are slight packet losses:

# ping -c 10000 -f 192.168.111.186
PING 192.168.111.186 (192.168.111.186) 56(84) bytes of data.
................
--- 192.168.111.186 ping statistics ---
10000 packets transmitted, 9984 received, 0% packet loss, time 4708ms
rtt min/avg/max/mdev = 0.295/0.400/9.534/0.345 ms, ipg/ewma 0.470/0.438 ms

For non-HVM guests I have no packet losses.

Is it because of the qemu/realtek network driver?

--
Tomasz Chmielewski
http://wpkg.org

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
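An aside on the flood-ping summary above: ping rounds the loss percentage to a whole percent, so "0% packet loss" actually hides 16 lost packets out of 10000. A minimal sketch recomputing the exact figure from the numbers in the output above:

```shell
transmitted=10000
received=9984
# ping prints the loss rounded to a whole percent; compute it exactly
loss=$(awk -v t="$transmitted" -v r="$received" \
    'BEGIN { printf "%.2f", (t - r) / t * 100 }')
echo "${loss}% packet loss"   # 0.16% packet loss
```

So the flood ping really did lose about one packet in six hundred, consistent with the "slight packet losses" observation.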
Petersson, Mats
2006-Nov-30 14:20 UTC
RE: [Xen-users] big latency, packet losses with HVM guests
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Tomasz Chmielewski
> Sent: 30 November 2006 14:08
> To: Xen-users@lists.xensource.com
> Subject: [Xen-users] big latency, packet losses with HVM guests
>
> Networking in Windows (2003 R2) looks a bit problematic:
>
> 1. Latency is very big - this is made from domU, there is almost no load
> on dom0 and domU (Windows):
>
> # ping 192.168.111.186
> PING 192.168.111.186 (192.168.111.186) 56(84) bytes of data.
> 64 bytes from 192.168.111.186: icmp_seq=1 ttl=128 time=4.06 ms
> 64 bytes from 192.168.111.186: icmp_seq=2 ttl=128 time=4.82 ms
> 64 bytes from 192.168.111.186: icmp_seq=3 ttl=128 time=6.25 ms
> 64 bytes from 192.168.111.186: icmp_seq=4 ttl=128 time=7.70 ms
> 64 bytes from 192.168.111.186: icmp_seq=5 ttl=128 time=9.14 ms
> 64 bytes from 192.168.111.186: icmp_seq=6 ttl=128 time=0.580 ms
> 64 bytes from 192.168.111.186: icmp_seq=7 ttl=128 time=2.01 ms
> 64 bytes from 192.168.111.186: icmp_seq=8 ttl=128 time=3.49 ms
> 64 bytes from 192.168.111.186: icmp_seq=9 ttl=128 time=4.89 ms
> 64 bytes from 192.168.111.186: icmp_seq=10 ttl=128 time=6.33 ms
>
> --- 192.168.111.186 ping statistics ---
> 10 packets transmitted, 10 received, 0% packet loss, time 9000ms
> rtt min/avg/max/mdev = 0.580/4.931/9.144/2.437 ms
>
> For non-HVM guests I have latency of about 0.074 ms.
>
> 2. There are slight packet losses:
>
> # ping -c 10000 -f 192.168.111.186
> PING 192.168.111.186 (192.168.111.186) 56(84) bytes of data.
> ................
> --- 192.168.111.186 ping statistics ---
> 10000 packets transmitted, 9984 received, 0% packet loss, time 4708ms
> rtt min/avg/max/mdev = 0.295/0.400/9.534/0.345 ms, ipg/ewma 0.470/0.438 ms
>
> For non-HVM guests I have no packet losses.
>
> Is it because of the qemu/realtek network driver?

Not directly.
But the latency has something to do with the fact that each part of a
transaction for the network driver goes from the guest, into the HVM code
in Xen, then to qemu via Dom0. If Dom0 is "busy" doing something else,
there may also be a delay before QEMU is started. Since one network packet
consists of more than one transaction to/from QEMU, the latency _WILL_ be
noticeable.

Packet losses are probably also related to the high latency, at least
indirectly - essentially, you haven't got enough time to process all the
packets that get sent from one place to the next before the buffers are
full and packets are dropped.

You may find that if you can run one CPU for Dom0 and another for DomU
(say in a dual-core system), you get better performance than if both
Dom0 and DomU share both CPUs.

--
Mats
Tomasz Chmielewski
2006-Nov-30 14:45 UTC
RE: [Xen-users] big latency, packet losses with HVM guests
> You may find that if you can run one CPU for Dom0 and another for DomU
> (say in a dual core system), you may get better performance than if you
> run both CPU's for both Dom0 and DomU.

One CPU for dom0 should be enough.
How can I start dom0 on just one CPU? Some option passed via grub maybe?

Because setting "(dom0-cpus 1)" in xend-config.sxp doesn't seem to work:

# xm dmesg|grep CPU
(XEN) Initializing CPU#0
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 2048K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 0
(XEN) Intel machine check reporting enabled on CPU#0.
(XEN) CPU0: Intel(R) Xeon(R) CPU 3050 @ 2.13GHz stepping 06
(XEN) Initializing CPU#1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 2048K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) Intel machine check reporting enabled on CPU#1.
(XEN) CPU1: Intel(R) Xeon(R) CPU 3050 @ 2.13GHz stepping 06
(XEN) checking TSC synchronization across 2 CPUs: passed.
(XEN) Brought up 2 CPUs
(XEN) Dom0 has maximum 2 VCPUs

Or perhaps it won't work with dual-core CPUs (it's really one CPU)?

--
Tomasz Chmielewski
http://wpkg.org
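On the grub question: the hypervisor takes its own boot parameters on the xen.gz line, and some Xen builds recognise a dom0_max_vcpus= option there to cap Dom0's VCPUs at boot (the option name is an assumption - check whether your build accepts it). A sketch of a GRUB menu.lst entry, with illustrative kernel/initrd paths:

```
title Xen
    root (hd0,0)
    # dom0_max_vcpus is the assumed hypervisor option; paths are illustrative
    kernel /boot/xen.gz dom0_max_vcpus=1
    module /boot/vmlinuz-2.6-xen root=/dev/sda1 ro
    module /boot/initrd-2.6-xen.img
```

If the option is accepted, "xm dmesg" should then report "Dom0 has maximum 1 VCPUs" instead of 2.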
Petersson, Mats
2006-Nov-30 14:57 UTC
RE: [Xen-users] big latency, packet losses with HVM guests
> -----Original Message-----
> From: Tomasz Chmielewski [mailto:mangoo@wpkg.org]
> Sent: 30 November 2006 14:46
> To: Petersson, Mats
> Cc: Tomasz Chmielewski; xen-users@lists.xensource.com
> Subject: RE: [Xen-users] big latency, packet losses with HVM guests
>
> > You may find that if you can run one CPU for Dom0 and another for DomU
> > (say in a dual core system), you may get better performance than if you
> > run both CPU's for both Dom0 and DomU.
>
> One CPU for dom0 should be enough.
> How can I start dom0 on just one CPU? Some option passed via grub maybe?
>
> Because setting "(dom0-cpus 1)" in xend-config.sxp doesn't seem to work:
>
> # xm dmesg|grep CPU
> (XEN) Initializing CPU#0
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 2048K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 0
> (XEN) Intel machine check reporting enabled on CPU#0.
> (XEN) CPU0: Intel(R) Xeon(R) CPU 3050 @ 2.13GHz stepping 06
> (XEN) Initializing CPU#1
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 2048K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 1
> (XEN) Intel machine check reporting enabled on CPU#1.
> (XEN) CPU1: Intel(R) Xeon(R) CPU 3050 @ 2.13GHz stepping 06
> (XEN) checking TSC synchronization across 2 CPUs: passed.
> (XEN) Brought up 2 CPUs
> (XEN) Dom0 has maximum 2 VCPUs
>
> Or perhaps it won't work with dual-core CPUs (it's really one CPU)?

Well, it's only one CPU in the sense that it occupies one socket, rather
than in the number of actual "Central Processing Units" it contains. In
fact, some of the early Intel ones are even two separate chips mounted in
one package - but either way, there are definitely two separate units, so
the OS should behave just as if you had two sockets with two single-core
CPUs [except for NUMA awareness and some other things where "which socket
this is" matters].

You can use "xm vcpu-set 0 1" to set the number of CPUs in Dom0.
And you probably also want to use "xm vcpu-pin 0 0 0" to pin Dom0's VCPU
to the first core, and "xm vcpu-pin <domu-id> 0 1" to make the second
domain run on the second core.

--
Mats
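Pulling Mats' suggestions together as a host-side session sketch (these must run as root on the Xen host; <domu-id> stands for the real domain id shown by "xm list"):

```
xm vcpu-set 0 1            # give Dom0 a single VCPU
xm vcpu-pin 0 0 0          # pin Dom0's VCPU 0 to physical core 0
xm vcpu-pin <domu-id> 0 1  # pin the guest's VCPU 0 to core 1
xm vcpu-list               # the "CPU Affinity" column should now show the pinning
```

With Dom0 and the HVM guest each pinned to their own core, the qemu device model in Dom0 no longer competes with the guest for CPU time, which is what the latency explanation above predicts will help.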