The unstable tree now includes support for SMP guests, i.e. domains
which run on multiple cpus. SMP guests can use between 1 and 32
virtual cpus, even if the machine has fewer physical cpus. The code
is highly experimental and performance will improve over time.

To use SMP guests (a short worked sketch follows below):
- enable option CONFIG_SMP in the Linux 2.6 kernel config
- dom0 will boot with up to the number of physical cpus in the machine.
- domU will boot with as many cpus as have been configured by setting
  the XEN_VCPUS environment variable in xend's environment.
- the number of cpus used can be reduced by using the maxcpus= option
  on the Linux kernel command line.

    christian
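As a concrete illustration of the steps above, a minimal sketch for a
Xen 2.0-era setup. XEN_VCPUS and maxcpus= come from the announcement;
the grub stanza, config file path, cpu counts, and exact command
invocations are illustrative assumptions and should be checked against
the tools:

    # dom0: cap the number of cpus via the Linux kernel command line,
    # e.g. in the grub entry for the xen0 kernel (illustrative path):
    #   kernel /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro maxcpus=2

    # domU: export XEN_VCPUS in xend's environment, restart xend,
    # then create the domain as usual:
    $ export XEN_VCPUS=4
    $ xend restart
    $ xm create /etc/xen/mydomain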
> The unstable tree now includes support for SMP guests, i.e.
> domains which run on multiple cpus. SMP guests can use between
> 1 and 32 virtual cpus, even if the machine has fewer physical cpus.
> The code is highly experimental and performance will improve over
> time.

Cool. Can you elaborate on 'highly experimental'? I'm now running the
'stable' distribution but would consider using unstable for this
feature if it were vaguely reliable.

How does migration work between SMP & non-SMP servers? I'd assume
that running a domain with more cpus than physically exist would take
a performance hit, but do you know how much? If I migrate a domain
from a dual-cpu server to a single-cpu server, is there an easy way
of telling the domain to only use 1 cpu now please?

Thanks

James
Christian Limpach
2004-Dec-15 23:59 UTC
Re: [Xen-devel] SMP guest support in unstable tree.
On Thu, Dec 16, 2004 at 10:50:21AM +1100, James Harper wrote:
> Cool. Can you elaborate on 'highly experimental'? I'm now running
> the 'stable' distribution but would consider using unstable for this
> feature if it were vaguely reliable.

On one hand it seems quite stable (doing kernel compiles, for
example), but on the other hand it occasionally happens that dom0
doesn't boot -- once a domain has completed boot, it seems fine.
So far I've only seen one lockup of a 32-vcpu domain running on a
4-cpu machine, which is not a very practical setup anyway. Also, a
malicious SMP guest might be able to crash Xen.

> How does migration work between SMP & non-SMP servers? I'd assume
> that running a domain with more cpus than physically exist would
> take a performance hit, but do you know how much? If I migrate a
> domain from a dual-cpu server to a single-cpu server, is there an
> easy way of telling the domain to only use 1 cpu now please?

Migration doesn't work for SMP guests (at least it's not tested and
somewhat unlikely to work). We haven't really decided how to do
migration for SMP guests yet; one option would be to shut down all
cpus but one and then do a regular migration. We will definitely look
into adding cpu hotplug support so that one can add/remove cpus from
a running domain.

    christian
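A hypothetical sketch of the "shut down all cpus but one, then do a
regular migration" option mentioned above, assuming the cpu hotplug
support lands in the guest kernel (CONFIG_HOTPLUG_CPU). The sysfs path
is the standard Linux hotplug interface; the domain id, target host,
and exact xm migrate syntax are made-up illustrations:

    # inside the SMP guest (as root): take every vcpu except
    # cpu0 offline via the standard Linux cpu hotplug interface
    $ for c in /sys/devices/system/cpu/cpu[1-9]*; do
          echo 0 > $c/online
      done

    # from dom0: migrate the now effectively single-cpu domain
    $ xm migrate 3 otherhost --live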
> SMP guests can use between
> 1 and 32 virtual cpus, even if the machine has fewer physical cpus.

I note that the Xen-2.1 roadmap aims for the ability to transparently
& dynamically reassign a VM to a new real processor on the same node
("CPU load balancing"). Is this already in place?

What are the proposed models for assigning virtual processors to real
processors? In particular, is there an aim toward gang scheduling of
virtual processors?

Are there thoughts on how virtual-multiprocessor guests change
resource allocation? For example, if I have a two-virtual-processor
guest [A] and a four-virtual-processor guest [B], and I assign them
each a 50% share, is it more "correct" for:

1. Each virtual processor to get 1/6 of the total real processor
   availability;
2. Each virtual processor of A to get 1/4, each of B to get 1/8;
3. The sum of the virtual processors of A to get no more than 1/2,
   etc.;
4. Something else?

I realize all these are easily configurable & nothing needs to be set
in stone; am mainly interested if there's been any off-list discussion
of this sort of thing.

--
Dr. John Linwood Griffin
Research Staff Member, Secure Systems Department
IBM T.J. Watson Research Center, Hawthorne, New York, USA
JLG at us.ibm.com, http://www.research.ibm.com/people/j/jlg/
Christian Limpach wrote:

> The unstable tree now includes support for SMP guests, i.e.
> domains which run on multiple cpus. SMP guests can use between
> 1 and 32 virtual cpus, even if the machine has fewer physical cpus.
> The code is highly experimental and performance will improve over
> time.
>
> To use SMP guests:
> - enable option CONFIG_SMP in the Linux 2.6 kernel config
> - dom0 will boot with up to the number of physical cpus in the machine.
> - domU will boot with as many cpus as have been configured by setting
>   the XEN_VCPUS environment variable in xend's environment.
> - the number of cpus used can be reduced by using the maxcpus= option
>   on the Linux kernel command line.
>
>     christian

This is great news, and I hope to start experimenting with this asap.
I do have some questions:

Do you think there would be any room for "dedicated" cpus in a
domain, like a one-to-one mapping of physical cpu to domain cpu? I am
asking because I think there would be situations where (a) one would
want to discretely divide a large system, in particular one with numa
characteristics, where one could dedicate cpus and memory close to
each other, and (b) perhaps in this one-to-one mapping there might be
less overhead of managing cpus in a domain, vs (assuming) some sort
of timesharing of a physical cpu among many domains, or even more
than one virtual cpu in just one domain.

Anyway, I am mostly curious at this point. This is just what I have
seen in the ppc/power5 world: a choice of dedicated cpus (however, if
they are idle that cpu can be "shared" if desired) or virtual cpus
(up to 64 I think) backed by N physical cpus.

Andrew Theurer
Christian Limpach
2005-Jan-05 14:23 UTC
Re: [Xen-devel] SMP guest support in unstable tree.
On Tue, Jan 04, 2005 at 10:41:57AM -0600, Andrew Theurer wrote:
> Do you think there would be any room for "dedicated" cpus in a
> domain, like a one-to-one mapping of physical cpu to domain cpu? I
> am asking because I think there would be situations where (a) one
> would want to discretely divide a large system, in particular one
> with numa characteristics, where one could dedicate cpus and memory
> close to each other, and

We have a one-to-one mapping (pinning) of virtual cpus to physical
cpus -- if you don't allocate multiple virtual cpus to the same
physical cpu, then the physical cpu becomes implicitly dedicated to
that domain. This mapping can be changed dynamically, at least at the
Xen level -- the tools don't have support for changing the mapping of
SMP guests yet. We also don't support enforcing allocation policies
yet.

> (b) perhaps in this one-to-one mapping there might be less overhead
> of managing cpus in a domain, vs (assuming) some sort of timesharing
> of a physical cpu among many domains, or even more than one virtual
> cpu in just one domain.

I don't think there's significant overhead if there's only a single
virtual cpu pinned to one physical cpu, so I wouldn't expect a
noticeable performance advantage if we handled this case differently.

> Anyway, I am mostly curious at this point. This is just what I have
> seen in the ppc/power5 world: a choice of dedicated cpus (however,
> if they are idle that cpu can be "shared" if desired) or virtual
> cpus (up to 64 I think) backed by N physical cpus.

I think we need load balancing software, and we also need
measurements to see what the cost is of moving virtual cpus between
physical cpus (or hyperthreads) and what impact service domains have
on the scheduling and load balancing decisions.

    christian
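For the single-vcpu case, the pinning described above can already be
driven from dom0 -- a minimal sketch, assuming the xm pincpu
subcommand of the Xen 2.0-era tools (the domain id and cpu number are
made up; check the exact syntax against xm help):

    # pin domain 3's vcpu to physical cpu 1; as long as no other
    # domain's vcpu is placed on cpu 1, that cpu is implicitly
    # dedicated to domain 3
    $ xm pincpu 3 1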
Christian Limpach wrote:

> On Tue, Jan 04, 2005 at 10:41:57AM -0600, Andrew Theurer wrote:
>
>> Do you think there would be any room for "dedicated" cpus in a
>> domain, like a one-to-one mapping of physical cpu to domain cpu? I
>> am asking because I think there would be situations where (a) one
>> would want to discretely divide a large system, in particular one
>> with numa characteristics, where one could dedicate cpus and memory
>> close to each other, and
>
> We have a one-to-one mapping (pinning) of virtual cpus to physical
> cpus -- if you don't allocate multiple virtual cpus to the same
> physical cpu, then the physical cpu becomes implicitly dedicated to
> that domain.

OK, great, this is essentially the option I wanted, thanks!

> This mapping can be changed dynamically, at least at the Xen level
> -- the tools don't have support for changing the mapping of SMP
> guests yet. We also don't support enforcing allocation policies yet.
>
>> (b) perhaps in this one-to-one mapping there might be less overhead
>> of managing cpus in a domain, vs (assuming) some sort of
>> timesharing of a physical cpu among many domains, or even more than
>> one virtual cpu in just one domain.
>
> I don't think there's significant overhead if there's only a single
> virtual cpu pinned to one physical cpu, so I wouldn't expect a
> noticeable performance advantage if we handled this case
> differently.

Hopefully soon I can get some performance tests going and we can see
if there are any issues here. My other concern would be on larger
(multi numa-node) systems: even with one-to-one mapping, the hardware
topology (numa) information does not make it to the SMP guest. It
would be nice to take advantage of the numa work developed in the
linux kernel over the last 2 years. I am not sure exactly what impact
this could be.

>> Anyway, I am mostly curious at this point. This is just what I
>> have seen in the ppc/power5 world: a choice of dedicated cpus
>> (however, if they are idle that cpu can be "shared" if desired) or
>> virtual cpus (up to 64 I think) backed by N physical cpus.
>
> I think we need load balancing software, and we also need
> measurements to see what the cost is of moving virtual cpus between
> physical cpus (or hyperthreads) and what impact service domains have
> on the scheduling and load balancing decisions.

Agreed, thanks for the info.

-Andrew Theurer
Christian Limpach
2005-Jan-05 16:13 UTC
Re: [Xen-devel] SMP guest support in unstable tree.
On Wed, Jan 05, 2005 at 09:06:10AM -0600, Andrew Theurer wrote:
>> I don't think there's significant overhead if there's only a single
>> virtual cpu pinned to one physical cpu, so I wouldn't expect a
>> noticeable performance advantage if we handled this case
>> differently.
>
> Hopefully soon I can get some performance tests going and we can see
> if there are any issues here. My other concern would be on larger
> (multi numa-node) systems: even with one-to-one mapping, the
> hardware topology (numa) information does not make it to the SMP
> guest. It would be nice to take advantage of the numa work developed
> in the linux kernel over the last 2 years. I am not sure exactly
> what impact this could be.

Yes, this is probably even needed on 2-cpu systems with 2
hyperthreads per cpu. Right now, all virtual cpus are presented as
independent physical cpus to the domains, and the domains can't
easily tell if two virtual cpus run on different physical cpus, on
different hyperthreads on the same cpu, or on the same hyperthread.
If we export this information to the guest, we'll then probably also
have to have a way to inform the guest if a virtual cpu is moved to a
different hyperthread or physical cpu.

    christian
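The flattening described above is visible from inside a guest -- a
sketch (no output shown, since the exact /proc/cpuinfo fields vary by
kernel version):

    # inside the guest: each virtual cpu appears as an independent
    # processor; topology fields such as "physical id" and
    # "siblings", when present at all, do not reflect the real
    # machine's layout
    $ grep -E 'processor|physical id|siblings' /proc/cpuinfo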
On Wed, 5 Jan 2005, Christian Limpach wrote:

> On Wed, Jan 05, 2005 at 09:06:10AM -0600, Andrew Theurer wrote:
>>> I don't think there's significant overhead if there's only a
>>> single virtual cpu pinned to one physical cpu, so I wouldn't
>>> expect a noticeable performance advantage if we handled this case
>>> differently.
>>
>> Hopefully soon I can get some performance tests going and we can
>> see if there are any issues here. My other concern would be on
>> larger (multi numa-node) systems: even with one-to-one mapping, the
>> hardware topology (numa) information does not make it to the SMP
>> guest. It would be nice to take advantage of the numa work
>> developed in the linux kernel over the last 2 years. I am not sure
>> exactly what impact this could be.
>
> Yes, this is probably even needed on 2-cpu systems with 2
> hyperthreads per cpu. Right now, all virtual cpus are presented as
> independent physical cpus to the domains, and the domains can't
> easily tell if two virtual cpus run on different physical cpus, on
> different hyperthreads on the same cpu, or on the same hyperthread.
> If we export this information to the guest, we'll then probably also
> have to have a way to inform the guest if a virtual cpu is moved to
> a different hyperthread or physical cpu.

Also consider the NUMA equation. Search l-k for cpusets.