About ping latency in PV-on-HVM

I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridge network model.

I expected the latency to be low most of the time (less than 1 ms).
But I found that there are far too many high-latency packets (more than 1 ms) in the PV-on-HVM + bridge environment.

The test environment and the test results are as follows:

The server runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
The server and client are connected through the same network bridge.

The result:

# ping -i 1 -c 10000 192.18.22.72 | grep -v "time=0"
PING 192.18.22.72 (192.18.22.72) 56(84) bytes of data.
64 bytes from 192.18.22.72: icmp_seq=125 ttl=64 time=6.78 ms
64 bytes from 192.18.22.72: icmp_seq=244 ttl=64 time=1.54 ms
64 bytes from 192.18.22.72: icmp_seq=510 ttl=64 time=10.4 ms
64 bytes from 192.18.22.72: icmp_seq=597 ttl=64 time=2.90 ms
64 bytes from 192.18.22.72: icmp_seq=883 ttl=64 time=1.60 ms
64 bytes from 192.18.22.72: icmp_seq=968 ttl=64 time=4.26 ms
64 bytes from 192.18.22.72: icmp_seq=1328 ttl=64 time=6.20 ms
64 bytes from 192.18.22.72: icmp_seq=1520 ttl=64 time=2.78 ms
64 bytes from 192.18.22.72: icmp_seq=1606 ttl=64 time=27.4 ms
64 bytes from 192.18.22.72: icmp_seq=1959 ttl=64 time=1.91 ms
64 bytes from 192.18.22.72: icmp_seq=2210 ttl=64 time=6.98 ms
64 bytes from 192.18.22.72: icmp_seq=2381 ttl=64 time=3.65 ms
64 bytes from 192.18.22.72: icmp_seq=2447 ttl=64 time=26.4 ms
64 bytes from 192.18.22.72: icmp_seq=2552 ttl=64 time=14.3 ms
64 bytes from 192.18.22.72: icmp_seq=2616 ttl=64 time=16.3 ms
64 bytes from 192.18.22.72: icmp_seq=2788 ttl=64 time=29.7 ms
64 bytes from 192.18.22.72: icmp_seq=3198 ttl=64 time=2.32 ms
64 bytes from 192.18.22.72: icmp_seq=3374 ttl=64 time=1.89 ms
64 bytes from 192.18.22.72: icmp_seq=3542 ttl=64 time=14.3 ms
64 bytes from 192.18.22.72: icmp_seq=3705 ttl=64 time=14.2 ms
64 bytes from 192.18.22.72: icmp_seq=3739 ttl=64 time=9.91 ms
64 bytes from 192.18.22.72: icmp_seq=3751 ttl=64 time=1.48 ms
64 bytes from 192.18.22.72: icmp_seq=4089 ttl=64 time=4.63 ms
64 bytes from 192.18.22.72: icmp_seq=4103 ttl=64 time=4.59 ms
64 bytes from 192.18.22.72: icmp_seq=4112 ttl=64 time=1.18 ms
64 bytes from 192.18.22.72: icmp_seq=4172 ttl=64 time=1.58 ms
64 bytes from 192.18.22.72: icmp_seq=4185 ttl=64 time=3.02 ms
64 bytes from 192.18.22.72: icmp_seq=4236 ttl=64 time=25.9 ms
64 bytes from 192.18.22.72: icmp_seq=4250 ttl=64 time=1.18 ms
64 bytes from 192.18.22.72: icmp_seq=5394 ttl=64 time=21.2 ms
64 bytes from 192.18.22.72: icmp_seq=5455 ttl=64 time=6.69 ms
64 bytes from 192.18.22.72: icmp_seq=5541 ttl=64 time=4.65 ms
64 bytes from 192.18.22.72: icmp_seq=5842 ttl=64 time=1.68 ms
64 bytes from 192.18.22.72: icmp_seq=5972 ttl=64 time=29.9 ms
64 bytes from 192.18.22.72: icmp_seq=5992 ttl=64 time=23.7 ms
64 bytes from 192.18.22.72: icmp_seq=6291 ttl=64 time=14.5 ms
64 bytes from 192.18.22.72: icmp_seq=6724 ttl=64 time=1.78 ms
64 bytes from 192.18.22.72: icmp_seq=6764 ttl=64 time=3.61 ms
64 bytes from 192.18.22.72: icmp_seq=7244 ttl=64 time=23.7 ms
64 bytes from 192.18.22.72: icmp_seq=7299 ttl=64 time=1.62 ms
64 bytes from 192.18.22.72: icmp_seq=7675 ttl=64 time=28.6 ms
64 bytes from 192.18.22.72: icmp_seq=7892 ttl=64 time=11.0 ms
64 bytes from 192.18.22.72: icmp_seq=7952 ttl=64 time=4.20 ms
64 bytes from 192.18.22.72: icmp_seq=7955 ttl=64 time=1.20 ms
64 bytes from 192.18.22.72: icmp_seq=9025 ttl=64 time=8.04 ms
64 bytes from 192.18.22.72: icmp_seq=9486 ttl=64 time=18.5 ms
64 bytes from 192.18.22.72: icmp_seq=9495 ttl=64 time=1.02 ms
64 bytes from 192.18.22.72: icmp_seq=9579 ttl=64 time=30.3 ms
64 bytes from 192.18.22.72: icmp_seq=9623 ttl=64 time=26.7 ms
64 bytes from 192.18.22.72: icmp_seq=9637 ttl=64 time=17.1 ms
64 bytes from 192.18.22.72: icmp_seq=9858 ttl=64 time=22.8 ms
64 bytes from 192.18.22.72: icmp_seq=9959 ttl=64 time=7.11 ms

--- 192.18.22.72 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 9999321ms
rtt min/avg/max/mdev = 0.081/0.292/30.300/1.036 ms

Can someone help me, or tell me something? How can I reduce the ping latency?

Best Regards!

2011-01-04
**************************************************
沈启龙 (Shen Qilong)
Department: 云快线 - Operations Support Center - R&D Center
Mobile: 18910286687
E-mail: shen.qilong@21vianet.com
Address: Building M5, No. 1 Jiuxianqiao East Road, Chaoyang District, Beijing
**************************************************
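A minimal sketch of how the spikes could be quantified from the same kind of run, assuming standard Linux ping output; the awk filter is an illustration, not part of the original test:

    # Count how many replies exceed 1 ms for the same target as above
    ping -i 1 -c 10000 192.18.22.72 | \
        awk -F'time=' '/time=/ { n++; if ($2 + 0 > 1.0) high++ }
                       END { printf "%d of %d replies above 1 ms\n", high, n }'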
Stefano Stabellini
2011-Jan-05 13:47 UTC
Re: [Xen-devel] How to reduce high latency on PV-on-HVM?
On Tue, 4 Jan 2011, shen.qilong wrote:
> About ping latency in PV-on-HVM
> I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridge network model.
>
> I expected the latency to be low most of the time (less than 1 ms).
> But I found that there are far too many high-latency packets (more than 1 ms) in the PV-on-HVM + bridge environment.
>
> The test environment and the test results are as follows:
>
> The server runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> The server and client are connected through the same network bridge.
>
> Can someone help me, or tell me something?

If you boot your guest kernel with loglevel=9, can you see the following line
among the boot messages?

Xen HVM callback vector for event delivery is enabled
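One way to act on this suggestion, sketched under the assumption of a pvops guest booted via GRUB (the file paths below are typical defaults, not taken from the thread):

    # 1. Append loglevel=9 to the guest kernel command line, e.g. in the
    #    guest's /boot/grub/menu.lst or /boot/grub/grub.cfg, and reboot.
    # 2. After the reboot, search the kernel log for the line quoted above:
    dmesg | grep -i "callback vector"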
Pasi Kärkkäinen
2011-Jan-05 13:51 UTC
Re: [Xen-devel] How to reduce high latency on PV-on-HVM?
On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> On Tue, 4 Jan 2011, shen.qilong wrote:
> > About ping latency in PV-on-HVM
> > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridge network model.
> >
> > I expected the latency to be low most of the time (less than 1 ms).
> > But I found that there are far too many high-latency packets (more than 1 ms) in the PV-on-HVM + bridge environment.
> >
> > The test environment and the test results are as follows:
> >
> > The server runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > The server and client are connected through the same network bridge.
> >
> > Can someone help me, or tell me something?
>
> If you boot your guest kernel with loglevel=9, can you see the following line
> among the boot messages?
>
> Xen HVM callback vector for event delivery is enabled

Hmm.. is this message for the optimization available in Xen 4.1?

-- Pasi
Stefano Stabellini
2011-Jan-05 13:59 UTC
Re: [Xen-devel] How to reduce high latency on PV-on-HVM?
On Wed, 5 Jan 2011, Pasi Kärkkäinen wrote:
> On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> > On Tue, 4 Jan 2011, shen.qilong wrote:
> > > About ping latency in PV-on-HVM
> > > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridge network model.
> > >
> > > I expected the latency to be low most of the time (less than 1 ms).
> > > But I found that there are far too many high-latency packets (more than 1 ms) in the PV-on-HVM + bridge environment.
> > >
> > > The test environment and the test results are as follows:
> > >
> > > The server runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > > The server and client are connected through the same network bridge.
> > >
> > > Can someone help me, or tell me something?
> >
> > If you boot your guest kernel with loglevel=9, can you see the following line
> > among the boot messages?
> >
> > Xen HVM callback vector for event delivery is enabled
>
> Hmm.. is this message for the optimization available in Xen 4.1?

Nope, it is for the basic optimization that is in the xen-4.0-testing tree too.
Looking more closely, it should be present in 4.0.1 but not in 4.0.0, so it is unlikely you have it.
It would be interesting to do the same test on a more recent 4.0.x hypervisor.
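A quick way to confirm which hypervisor release is actually running, assuming the xm toolstack of that era is in use (shown only as a sketch):

    # Run in dom0; xen_major/xen_minor/xen_extra together identify the
    # exact release (e.g. 4.0.1 rather than 4.0.0)
    xm info | grep -E "xen_major|xen_minor|xen_extra"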
Ian Campbell
2011-Jan-05 14:09 UTC
Re: [Xen-devel] How to reduce high latency on PV-on-HVM?
On Wed, 2011-01-05 at 13:59 +0000, Stefano Stabellini wrote:
> On Wed, 5 Jan 2011, Pasi Kärkkäinen wrote:
> > On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> > > On Tue, 4 Jan 2011, shen.qilong wrote:
> > > > About ping latency in PV-on-HVM
> > > > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridge network model.
> > > >
> > > > I expected the latency to be low most of the time (less than 1 ms).
> > > > But I found that there are far too many high-latency packets (more than 1 ms) in the PV-on-HVM + bridge environment.
> > > >
> > > > The test environment and the test results are as follows:
> > > >
> > > > The server runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > > > The server and client are connected through the same network bridge.
> > > >
> > > > Can someone help me, or tell me something?
> > >
> > > If you boot your guest kernel with loglevel=9, can you see the following line
> > > among the boot messages?
> > >
> > > Xen HVM callback vector for event delivery is enabled
> >
> > Hmm.. is this message for the optimization available in Xen 4.1?
>
> Nope, it is for the basic optimization that is in the xen-4.0-testing tree too.
> Looking more closely, it should be present in 4.0.1 but not in 4.0.0, so it is unlikely you have it.
> It would be interesting to do the same test on a more recent 4.0.x hypervisor.

It also depends on precisely which "kernel-2.6.x" is being used in the guest, doesn't it?

Ian.
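A sketch of how the guest side could be reported back; the kernel config path is distro-dependent and shown only as an assumption:

    # Inside the HVM guest:
    uname -r                                      # exact kernel version
    grep -i CONFIG_XEN /boot/config-$(uname -r)   # Xen support built into this
                                                  # kernel, if the distro ships
                                                  # its config under /boot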
Stefano Stabellini
2011-Jan-05 14:14 UTC
Re: [Xen-devel] How to reduce high latency on PV-on-HVM?
On Wed, 5 Jan 2011, Ian Campbell wrote:
> On Wed, 2011-01-05 at 13:59 +0000, Stefano Stabellini wrote:
> > On Wed, 5 Jan 2011, Pasi Kärkkäinen wrote:
> > > On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> > > > On Tue, 4 Jan 2011, shen.qilong wrote:
> > > > > About ping latency in PV-on-HVM
> > > > > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridge network model.
> > > > >
> > > > > I expected the latency to be low most of the time (less than 1 ms).
> > > > > But I found that there are far too many high-latency packets (more than 1 ms) in the PV-on-HVM + bridge environment.
> > > > >
> > > > > The test environment and the test results are as follows:
> > > > >
> > > > > The server runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > > > > The server and client are connected through the same network bridge.
> > > > >
> > > > > Can someone help me, or tell me something?
> > > >
> > > > If you boot your guest kernel with loglevel=9, can you see the following line
> > > > among the boot messages?
> > > >
> > > > Xen HVM callback vector for event delivery is enabled
> > >
> > > Hmm.. is this message for the optimization available in Xen 4.1?
> >
> > Nope, it is for the basic optimization that is in the xen-4.0-testing tree too.
> > Looking more closely, it should be present in 4.0.1 but not in 4.0.0, so it is unlikely you have it.
> > It would be interesting to do the same test on a more recent 4.0.x hypervisor.
>
> It also depends on precisely which "kernel-2.6.x" is being used in the guest, doesn't it?

Of course, kernel 2.6.x is very vague.
What kernel version are you actually using? Is it an upstream kernel?
If so, what exact version are you using?