Tim Wood
2006-Aug-29 17:31 UTC
[Xen-users] CPU intensive VM starves IO intensive VMs

Hi, I'm noticing very bad performance when one VM is running a CPU intensive job and another VM is doing a network intensive task.

For example: I run Iperf and measure the attained bandwidth with and without running a CPU hog application at the same time. The hog app just runs in an infinite loop performing calculations.

When I do this in Dom0 I get essentially the same bandwidth: 921 Mb/sec without the hog, 920 Mb/sec with. This is on a gigabit network, so that seems right. It makes sense that running the CPU hog doesn't really affect bandwidth, since the IO intensive job shouldn't require much real computation other than negotiating protocols.

If I do this inside a VM, without the hog I see 447 Mb/sec, and with the hog I see 109 Mb/sec. I can understand that there is a difference between Dom0 and the VM without the hog app running, due to Xen overhead, but it doesn't seem right that there should be such a drop when the hog application is running. If the hog app is running in a separate VM, performance is even worse - only 97 Mb/sec.

In all of these examples I am using the sedf scheduler with equal CPU weights for Dom0 and all VMs. Despite this, in the 2 VM scenario the scheduler ends up giving 99% of the CPU to the VM running the hog app, practically starving the IO intensive VM.

I am aware that the next version of Xen uses the new credit scheduler - does anyone know if that scheduler tries to deal with these kinds of issues? The changes I had heard about mostly concerned better SMP support.

-Tim
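For anyone who wants to reproduce the measurement described above: it amounts to an iperf server on one host and a timed client run on the machine under test, with and without the hog. A minimal sketch, assuming stock iperf; the server address is a placeholder for your own setup:

----------
#!/bin/bash
# On the receiving host, start an iperf server (default TCP port 5001):
#   iperf -s

# On the Dom0 or guest under test, run a timed TCP throughput test.
# The address 192.168.0.2 is a placeholder.
iperf -c 192.168.0.2 -t 30

# Repeat with the hog running in the background for the comparison:
#   ./hog.sh & iperf -c 192.168.0.2 -t 30; kill %1
----------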
Emmanuel Ackaouy
2006-Aug-29 17:45 UTC
Re: [Xen-users] CPU intensive VM starves IO intensive VMs
On Tue, Aug 29, 2006 at 01:31:33PM -0400, Tim Wood wrote:
> I am aware that the next version of Xen uses the new credit scheduler
> - does anyone know if that scheduler tries to deal with these kinds of
> issues? The changes I had heard about mostly concerned better SMP
> support.

Yeah, the credit scheduler tries to enforce some fairness between I/O intensive and CPU bound VMs. The I/O VM should end up preempting the CPU bound one as long as it doesn't hog the CPU itself. It should maintain good network throughput even competing against the spinner. This is another important attribute of the credit scheduler, in addition to SMP load balancing.

I haven't tried Iperf, but I have run similar scenarios with ttcp on the credit scheduler and have had good results.

There's also been some work done recently in xen-unstable to improve performance in the netfront/netback drivers. I'd be interested to know how moving to tip unstable affects your tests. I suspect it will help a lot.
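For anyone planning to try the credit scheduler once on xen-unstable: it is selected on the hypervisor boot line, and weights can then be tuned per domain with xm. A hedged sketch; the grub paths and the guest name "myvm" are placeholders:

----------
#!/bin/bash
# In grub's menu.lst, pass sched=credit to the hypervisor, e.g.:
#   kernel /boot/xen.gz sched=credit
#   module /boot/vmlinuz-2.6-xen root=/dev/sda1 ro

# Weights can then be adjusted per domain at runtime
# (256 is the default weight):
xm sched-credit -d Domain-0 -w 512   # give dom0 twice the default share
xm sched-credit -d myvm -w 256
----------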
Apparao, Padmashree K
2006-Aug-30 00:39 UTC
RE: [Xen-users] CPU intensive VM starves IO intensive VMs
Question: I think these numbers will depend very much on the physical CPUs assigned to the VMs. How many vcpus does each VM have, and which physical CPUs are the VMs on? I have run similar tests and seen different results from what you see. You can get the vcpu/pcpu mapping by doing an "xm vcpu-list".

Since Dom0 does the interrupt processing for the network intensive app, that app will starve if Dom0 does not get enough CPU from the scheduler. It is necessary to give Dom0 enough CPU to work on.

- Padma
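Concretely, the vcpu-to-pcpu mapping Padma asks about can be read off as below; the column layout shown in the comment is approximate and may differ across Xen versions:

----------
#!/bin/bash
# List each domain's vcpus, the physical CPU each is currently on,
# and its affinity mask:
xm vcpu-list

# Typical columns (sketch):
#   Name      ID  VCPU  CPU  State  Time(s)  CPU Affinity
#   Domain-0  0   0     0    r--    123.4    any cpu
----------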
Tim Wood
2006-Aug-30
Re: [Xen-users] CPU intensive VM starves IO intensive VMs

On 8/29/06, Apparao, Padmashree K <padmashree.k.apparao@intel.com> wrote:
> How many vcpus does each VM have, and which physical CPUs are
> the VMs on? I have run similar tests and seen different results from
> what you see.

For these experiments I am working on a single CPU machine. Having more CPUs would be nice, but I'm assuming that setting the sedf CPU weights should be sufficient to prevent any VM or Dom0 from being starved.

> I'd be interested to know how moving to tip unstable affects
> your tests. I suspect it will help a lot.

I have not tried setting up a system with xen-unstable, but hopefully that helps. If anyone else wants to give it a try for me, I was just using Iperf and this script to hog the CPU:

----------
#!/bin/bash

while true
do
    true
done
----------

-Tim
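For reference, the equal-weight sedf setup Tim describes would be configured roughly as follows. The positional parameter order is what some Xen 3.x releases use (later releases take option flags instead), and the domain names are examples, so treat this as a sketch rather than a recipe:

----------
#!/bin/bash
# Equal sedf weights for dom0 and both guests. The positional form is:
#   xm sched-sedf <dom> <period> <slice> <latency> <extratime> <weight>
# With a nonzero weight, sedf derives period/slice from the weights;
# extratime=1 keeps the scheduler work-conserving.
xm sched-sedf Domain-0 0 0 0 1 1
xm sched-sedf iovm     0 0 0 1 1
xm sched-sedf hogvm    0 0 0 1 1
----------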
Diwaker Gupta
2006-Sep-01 23:56 UTC
Re: [Xen-users] CPU intensive VM starves IO intensive VMs
> If anyone else wants to give it a try for me, I was just using Iperf
> and this script to hog the CPU:
> ----------
> #!/bin/bash
>
> while true
> do
>     true
> done
> ----------

This is not a particularly good "hog" program. You might want to search the archives for "slurp" -- it consumes CPU in a better fashion, and also prints out the amount of CPU it gets. Quite handy!

--
Web/Blog/Gallery: http://floatingsun.net/blog
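Since slurp itself has to be dug out of the archives, here is a rough stand-in with the same two properties Diwaker mentions: it burns CPU and periodically reports the share it actually received. This is not the real slurp program, just a sketch reading utime/stime from /proc:

----------
#!/bin/bash
# NOT the real "slurp" from the list archives -- a sketch that burns
# CPU and reports the share this shell got over each ~1s interval.
hz=$(getconf CLK_TCK)               # kernel clock ticks per second
read -r -a stat < /proc/$$/stat
prev=$(( stat[13] + stat[14] ))     # utime + stime, in ticks
last=$SECONDS
while true; do
    for ((i = 0; i < 100000; i++)); do :; done   # busy work
    if (( SECONDS - last >= 1 )); then
        read -r -a stat < /proc/$$/stat
        now=$(( stat[13] + stat[14] ))
        echo "CPU share: $(( (now - prev) * 100 / (hz * (SECONDS - last)) ))%"
        prev=$now
        last=$SECONDS
    fi
done
----------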
Diwaker Gupta
2006-Sep-01 23:58 UTC
Re: [Xen-users] CPU intensive VM starves IO intensive VMs
> In all of these examples I am using the sedf scheduler with equal CPU
> weights for dom0 and all VMs. Despite this, in the 2 VM scenario, the
> scheduler ends up giving 99% of the cpu to the VM running the hog app,
> practically starving the IO intensive VM.

Are you running in work-conserving mode or non-work-conserving mode? (The "extra" flag is set to 0 for the latter.) I have done similar experiments with good results in non-work-conserving mode.

--
Web/Blog/Gallery: http://floatingsun.net/blog
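For the record, non-work-conserving sedf means turning extratime off and giving each domain an explicit reservation, roughly like this. The positional form matches the earlier sketch, and the values and domain names are illustrative only:

----------
#!/bin/bash
# Non-work-conserving sedf: extratime=0, so each domain is capped at
# its slice/period reservation. Here each gets 30ms of every 100ms
# (the xm tool of this era takes times in ms; weight left at 0):
#   xm sched-sedf <dom> <period> <slice> <latency> <extratime> <weight>
xm sched-sedf iovm  100 30 0 0 0
xm sched-sedf hogvm 100 30 0 0 0
----------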
Diwaker Gupta
2006-Sep-02 00:00 UTC
Re: [Xen-users] CPU intensive VM starves IO intensive VMs
On 8/30/06, Tim Wood <twwood@gmail.com> wrote:
> On 8/29/06, Apparao, Padmashree K <padmashree.k.apparao@intel.com> wrote:
> > How many vcpus does each VM have, and which physical CPUs are
> > the VMs on? I have run similar tests and seen different results from
> > what you see.
>
> For these experiments I am working on a single CPU machine. Having
> more CPUs would be nice, but I'm assuming that setting the sedf CPU
> weights should be sufficient to prevent any VM or Dom0 from being
> starved.

Single CPU machines are typically bad for such experiments. In Xen, the execution of any I/O intensive domain gets coupled with the execution of Domain-0, since any traffic necessarily needs to go through Domain-0 first. This coupling makes scheduling inefficient, hurting performance.

If you have a single-processor P4, try enabling HT and putting Dom0 on a thread by itself. If you can get your hands on an SMP machine, that would be best.

HTH,
Diwaker

--
Web/Blog/Gallery: http://floatingsun.net/blog
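The pinning Diwaker suggests can be done at runtime with xm; a sketch for a two-thread (HT) box, with the guest names as placeholders:

----------
#!/bin/bash
# Pin dom0's single vcpu to logical CPU 0 and push the guests to
# CPU 1, so dom0's interrupt and netback work is never starved by
# a guest. Usage: xm vcpu-pin <Domain> <VCPU> <CPUs>
xm vcpu-pin Domain-0 0 0
xm vcpu-pin iovm 0 1
xm vcpu-pin hogvm 0 1
----------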
Tim Wood
Re: [Xen-users] CPU intensive VM starves IO intensive VMs

Thanks for your replies. Agreed that wasn't the best hog program; I'll take a look at slurp.

This was all using the work-conserving scheduler. I imagine using non-work-conserving mode would help in this particular case, but it seems self-defeating if the goal is to raise overall performance.

I'm sure multiple CPUs would change things, however I think it is significant that single-CPU Xen has such bad IO performance when either a) running a CPU bound process in Dom0 or a 2nd VM, or b) running both a CPU bound and an IO bound process in one VM. There are two possible explanations: either the scheduler does a poor job of allocating CPU to Dom0 and the IO VM when they need it, or the CPU overhead of doing IO in Xen is very high. I notice that when I have an IO bound VM, 100% of the CPU is being used according to XenMon (most in Dom0 and some in that VM), while in reality there should be plenty of spare cycles.

I'm getting kernel panics in xen-unstable, so I am putting off testing the new scheduler for a bit...
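For anyone reproducing the XenMon observation above, the per-domain CPU picture can be sampled with the monitoring tools shipped in the Xen tree; the xenmon.py install path may differ per system:

----------
#!/bin/bash
# Interactive per-domain CPU and network counters:
xentop

# XenMon's view of gotten/blocked/waited time per domain; the script
# ships with the Xen tools (often installed under /usr/sbin):
xenmon.py
----------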