Hey George,

I was wondering if you could explain in simple terms how the scheduler handles each physical CPU when there are, say, 16 guests (each guest using one VCPU), 32 physical CPUs, and dom0 is not restricted to any CPUs. Would the scheduler on each physical CPU schedule guest, dom0, guest, dom0, and so on, or would it be more random? (I assume that both the guests and dom0 would do a yield hypercall too.)

Thanks!
On Tue, Feb 7, 2012 at 8:12 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> Hey George,
>
> I was wondering if you could explain in simple terms how the scheduler
> handles each physical CPU when there are, say, 16 guests (each guest
> using one VCPU), 32 physical CPUs, and dom0 is not restricted to any
> CPUs. Would the scheduler on each physical CPU schedule guest, dom0,
> guest, dom0, and so on, or would it be more random? (I assume that
> both the guests and dom0 would do a yield hypercall too.)

The scheduling would be random. As far as I know, none of the schedulers (sedf, credit1, or credit2) treats domain 0 differently from any other domain. Even guests which are very busy end up blocking quite a bit, so the total runtime ends up being fairly random anyway. Also, the dom0 vcpus are not pinned unless you specify dom0_vcpus_pin on the Xen command line; so by default they will migrate freely around the various cores.

Does that answer your question?

 -George
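A quick way to observe this behaviour from dom0 is the xl vcpu listing; here is a minimal sketch using standard xl commands (the boot option mentioned above is shown only as an optional extra):

    # Show which physical CPU each vcpu is currently running on; a CPU
    # affinity of "all" means the scheduler may migrate it anywhere.
    xl vcpu-list

    # Optional: pin dom0's vcpus to matching physical CPUs at boot by
    # adding dom0_vcpus_pin to the Xen line in the bootloader config.

Running xl vcpu-list repeatedly under load should show guest and dom0 vcpus moving between physical CPUs with no fixed interleaving pattern, which matches the answer above.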
On Wed, Feb 08, 2012 at 12:18:39PM +0000, George Dunlap wrote:
> On Tue, Feb 7, 2012 at 8:12 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > Hey George,
> >
> > I was wondering if you could explain in simple terms how the
> > scheduler handles each physical CPU when there are, say, 16 guests
> > (each guest using one VCPU), 32 physical CPUs, and dom0 is not
> > restricted to any CPUs. Would the scheduler on each physical CPU
> > schedule guest, dom0, guest, dom0, and so on, or would it be more
> > random? (I assume that both the guests and dom0 would do a yield
> > hypercall too.)
>
> The scheduling would be random. As far as I know, none of the
> schedulers (sedf, credit1, or credit2) treats domain 0 differently
> from any other domain. Even guests which are very busy end up blocking
> quite a bit, so the total runtime ends up being fairly random anyway.
> Also, the dom0 vcpus are not pinned unless you specify dom0_vcpus_pin
> on the Xen command line; so by default they will migrate freely around
> the various cores.
>
> Does that answer your question?

I don't know if it's relevant to this discussion, but I often increase the credit scheduler weight of the dom0 vcpus to guarantee smooth operation of dom0:

http://wiki.xen.org/wiki/Xen_Best_Practices

-- Pasi
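For reference, the weight bump Pasi describes can be done at runtime through the credit scheduler interface; a minimal sketch (the value 512 is just an illustrative choice, twice the default of 256):

    # Query dom0's current credit-scheduler parameters.
    xl sched-credit -d Domain-0

    # Raise dom0's weight so it wins CPU contention against guests
    # that are left at the default weight of 256.
    xl sched-credit -d Domain-0 -w 512

On the older xm toolstack the equivalent is xm sched-credit -d 0 -w 512. The change takes effect immediately and does not persist across reboots.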
On Wed, Feb 8, 2012 at 2:22 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> I don't know if it's relevant to this discussion, but I often increase
> the credit scheduler weight of the dom0 vcpus to guarantee smooth
> operation of dom0.

In fact, XenServer has a patch that will automatically adjust dom0's weight based on the weight of the other VMs running on the system, in an attempt to make sure dom0 can get enough cpu time. I haven't submitted it to the list because I don't think it's really appropriate for the open-source tree. But I'll dig it out and post it as an RFC; even if it's not ultimately accepted, it might not be bad to have the patch floating around.

 -George
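The patch itself isn't shown in the thread, but the idea is easy to sketch from dom0. The following is a hypothetical illustration, not the XenServer patch: it keeps dom0's weight at least as high as the highest-weighted running guest, and it assumes xl's usual output layout (one header line from xl list, and Name/ID/Weight/Cap columns from xl sched-credit):

    #!/bin/sh
    # Hypothetical sketch, not the actual XenServer patch: keep dom0's
    # credit weight >= the highest weight of any running guest.
    max=256                        # the credit scheduler's default weight
    for dom in $(xl list | awk 'NR > 2 {print $1}'); do
        # NR > 2 skips the header line and Domain-0 itself; the weight
        # is the third column of the domain's sched-credit row.
        w=$(xl sched-credit -d "$dom" | awk 'NR == 2 {print $3}')
        [ "$w" -gt "$max" ] && max="$w"
    done
    xl sched-credit -d Domain-0 -w "$max"

A real implementation would presumably live in the toolstack and re-run whenever a VM is created, destroyed, or has its weight changed, rather than being polled from a script.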