Sam Gill
2005-Apr-14 17:34 UTC
Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
Message: 6
Date: Thu, 14 Apr 2005 11:24:07 -0500
From: Ryan Harper <ryanh@us.ibm.com>
Subject: Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support vcpus, add vcpu to cpu map
To: Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk>
Cc: Ryan Harper <ryanh@us.ibm.com>, xen-devel@lists.xensource.com
Message-ID: <20050414162407.GG27571@us.ibm.com>
Content-Type: text/plain; charset=us-ascii

* Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk> [2005-04-14 10:50]:
>>> The following patch updates the dom0 pincpu operation to read the
>>> VCPU value from the xend interface rather than hard-coding the
>>> exec_domain to 0. This prevented pinning VCPUs other than 0 to a
>>> particular cpu. I added the number of VCPUs to the main xm list
>>> output and also included a new sub-option to xm list to display
>>> the VCPU-to-CPU mapping. While working on the pincpu code, I fixed
>>> an out-of-bounds indexing bug in the pincpu operation that wasn't
>>> previously exposed since the vcpu/exec_domain value was hard-coded
>>> to 0.
>>
>> Ryan, good progress, but I'd like to propose a couple of extensions:
>>
>> It would be useful if you could update it so that pincpu enabled you
>> to specify a set of physical CPUs for each VCPU, e.g.
>> "xm pincpu mydom 1 2,4-6", which would allow VCPU 1 of mydom to run
>> on CPUs 2, 4, 5 and 6 but no others. -1 would still mean "run
>> anywhere". Having this functionality is really important before we
>> can implement any kind of CPU load balancer.
>
> Interesting idea. I don't see anything in the schedulers that would
> take advantage of that sort of definition. AFAIK, exec_domains are
> never migrated unless told to do so via pincpu. Does the new
> scheduler do this? Or is this more a matter of setting up the rules
> that the load balancer would query to find out where it can migrate
> VCPUs?
>
>> Secondly, I think it would be really good if we could have some
>> hierarchy in CPU names. Imagine a 4-socket system with dual-core
>> hyperthreaded CPUs. It would be nice to be able to specify the 3rd
>> socket, 1st core, 2nd hyperthread as CPU "2.0.1".
>>
>> Where we're on a system without one of the levels of hierarchy, we
>> just miss it off. E.g. a current SMP Xeon box would be "x.y". This
>> would be much less confusing than the current scalar representation.
>
> I like the idea of being able to specify "where" the vcpu runs more
> explicitly than 'cpu 0', which does not give any indication of
> physical cpu characteristics. We would probably need to still provide
> a simple mapping, but allow the pincpu interface to support a more
> specific target as well as the more generic one.
>
> 2-way hyperthreaded box:
> CPU  SOCKET.CORE.THREAD
> 0    0.0.0
> 1    0.0.1
> 2    1.0.0
> 3    1.0.1

Just my opinion, but for end users, and the people who are going to
have to configure this whole system, it would have a far greater impact
to develop a simple tool that just shows you how many CPUs you have to
work with (also useful as a debugging tool, to see if your CPUs are
registering), such as "xm pincpu-show" and "xm pincpu-show-details" for
a more verbose listing. Once you had the function that returns those
values, you could use it to map different domains to different CPUs, or
different CPUs to different domains. The next step would be to create
some helper functions: "xm pincpu-add" so you could add a CPU to a
domain, or "xm pincpu-move" to move a CPU from one domain to another.
In addition you could have "xm pincpu-lock"/"xm pincpu-unlock", which
would only allow a single domain to access that CPU. I am just thinking
that maybe if you detail (if you have not already done so) what you
want the end result to be, then it might be easier to figure out how to
implement the lower level functions more efficiently.

Thanks,
Sam Gill
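For concreteness, here is a minimal sketch of how the CPU-set syntax
Ian proposes ("2,4-6", with -1 meaning "run anywhere") might be parsed
on the tools side. parse_cpuset() is a hypothetical helper, not part of
the posted patch, written in the same Python used by xm/xend:

def parse_cpuset(spec, ncpus):
    """Turn a string like '2,4-6' into a sorted list of physical CPUs.

    '-1' means the VCPU is allowed to run anywhere.
    """
    if spec.strip() == '-1':
        return list(range(ncpus))        # run anywhere
    cpus = set()
    for part in spec.split(','):
        if '-' in part:
            lo, hi = part.split('-')     # a range such as "4-6"
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))          # a single CPU such as "2"
    for cpu in cpus:
        if cpu < 0 or cpu >= ncpus:
            raise ValueError("cpu %d out of range" % cpu)
    return sorted(cpus)

# "xm pincpu mydom 1 2,4-6" on an 8-way box would pin VCPU 1 to:
print(parse_cpuset("2,4-6", 8))          # [2, 4, 5, 6]

The helper only validates and expands the set; the actual pinning call
into xend is left out because its final form is what this thread is
still deciding.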
Ryan Harper
2005-Apr-14 17:51 UTC
Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
* Sam Gill <samg@seven4sky.com> [2005-04-14 12:31]:
> tool that just shows
> you how many cpus you have to work with. (also a debugging tool, to see

Yeah, I think we should add something that better shows the available
resources. Currently the total number of Physical CPUs a system has
isn't really available in an obvious location.

> such as "xm pincpu-show" and "xm pincpu-show-details" for a more verbose
> listing

What would these look like?

> Then the next step would be creating some helper functions "xm
> pincpu-add" so you could add a cpu to a domain, or "xm pincpu-move" to
> move a cpu from one domain to another. In addition you could have
> "xm pincpu-lock"/"xm pincpu-unlock" which would only allow one single
> domain to access that cpu.

I think the mapping that Ian mentioned was needed for load-balancing
would achieve that, but we certainly could create an interface wrapper,
like lock/unlock, that was translated into the correct mapping command.

> I am just thinking that maybe if you detail (if you have already not
> done so) what you want the end result to be, then it might be easier
> to figure out how to implement the lower level functions more
> efficiently.

No, these are good things to be talking about. The goal of this patch
was to allow us to pin VCPUs, mainly so we can test space-sharing
versus time-sharing of VCPUs. That is, if we have a 4-way SMP box with
two domUs, each with four VCPUs, what is the performance difference
between each domU getting 2 physical CPUs to run its 4 VCPUs versus the
domUs having access to all 4 physical CPUs on which to run their 4
VCPUs?

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@us.ibm.com
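As a rough illustration of the experiment Ryan describes, the two
policies can be written down as VCPU-to-CPU-set plans. Everything below
is a sketch: pin_vcpu() is only a stand-in for whatever pincpu
interface the tools end up exposing, and the domain names are made up.

NCPUS = 4                                  # 4-way SMP box
DOMAINS = {'domU1': 4, 'domU2': 4}         # two domUs, four VCPUs each

def space_sharing_plan():
    """Each domU gets a disjoint pair of physical CPUs for its 4 VCPUs."""
    pairs = {'domU1': [0, 1], 'domU2': [2, 3]}
    plan = {}
    for dom, nvcpus in DOMAINS.items():
        for vcpu in range(nvcpus):
            plan[(dom, vcpu)] = pairs[dom]
    return plan

def time_sharing_plan():
    """Every VCPU of every domU may run on any of the physical CPUs."""
    plan = {}
    for dom, nvcpus in DOMAINS.items():
        for vcpu in range(nvcpus):
            plan[(dom, vcpu)] = list(range(NCPUS))
    return plan

def apply_plan(plan, pin_vcpu):
    # pin_vcpu(domain, vcpu, cpus) stands in for the real pincpu call.
    for (dom, vcpu), cpus in sorted(plan.items()):
        pin_vcpu(dom, vcpu, cpus)

def show(dom, vcpu, cpus):
    print("%s vcpu %d -> cpus %s" % (dom, vcpu, cpus))

# Dry run: print what each policy would pin where.
apply_plan(space_sharing_plan(), show)
apply_plan(time_sharing_plan(), show)

The benchmark comparison is then just the same workload run once under
each plan.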
Mark Williamson
2005-Apr-14 17:55 UTC
Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
> Yeah, I think we should add something that better shows the available
> resources. Currently the total number of Physical CPUs a system has
> isn't really available in an obvious location.

xm info lists this as "packages", I think.

If the enumeration is done in a standardised way then it's possible to
work out in userspace what CPU id is where, but it's not at all obvious
to the user right now. Would definitely be good for the management
tools to give more information to the user on this stuff.

Cheers,
Mark
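Assuming the enumeration really is standardised (threads grouped within
cores, cores grouped within sockets), the userspace calculation Mark
alludes to is just arithmetic on the flat CPU id. The sketch below
reproduces the SOCKET.CORE.THREAD table Ryan gave; the per-level counts
are treated as inputs that would have to come from something like
xm info, not from any existing interface:

def cpu_name(cpu, cores_per_socket, threads_per_core):
    """Map a flat CPU id to the proposed SOCKET.CORE.THREAD naming."""
    thread = cpu % threads_per_core
    core = (cpu // threads_per_core) % cores_per_socket
    socket = cpu // (threads_per_core * cores_per_socket)
    return "%d.%d.%d" % (socket, core, thread)

# Ryan's 2-way hyperthreaded box: 2 sockets, 1 core each, 2 threads.
for cpu in range(4):
    print("%d -> %s" % (cpu, cpu_name(cpu, cores_per_socket=1,
                                      threads_per_core=2)))
# 0 -> 0.0.0, 1 -> 0.0.1, 2 -> 1.0.0, 3 -> 1.0.1

If one level of the hierarchy is absent, the corresponding count is 1
and that component can simply be dropped from the printed name, giving
the "x.y" form Ian suggests for a plain SMP Xeon box.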
Sam Gill
2005-Apr-14 18:40 UTC
Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
Mark Williamson wrote:

>> Yeah, I think we should add something that better shows the available
>> resources. Currently the total number of Physical CPUs a system has
>> isn't really available in an obvious location.
>
> xm info lists this as "packages", I think.
>
> If the enumeration is done in a standardised way then it's possible to
> work out in userspace what CPU id is where, but it's not at all
> obvious to the user right now. Would definitely be good for the
> management tools to give more information to the user on this stuff.
>
> Cheers,
> Mark
>
>>> such as "xm pincpu-show" and "xm pincpu-show-details" for a more
>>> verbose listing
>>
>> What would these look like?

-- just general --

# xm pincpu-show
cpu configuration    id    status    assignment
registered cpu1      0     lock      vmid1
registered cpu2      1     unlock    none
registered cpu3      2     unlock    vmid1,vmid2
registered cpu4      3     lock      vmid3

Then you could go about "pincpu-add vmid4 1", which would assign cpu2
to vmid4.

# xm pincpu-show-details
would explain more about the sockets, which cpus are hyperthreading,
any core components, and other low-level hardware details about the
cpus you are using:

processor cpu0
    socket: 0
    hyperthreaded instance: N
    core: 0
processor cpu1
    socket: 0
    hyperthreaded instance: Y
    core: 0

So it's a much more detailed, very descriptive type of listing -- the
difference between "brctl show" and "brctl showstp xen-br0", along
those lines.

>> No, these are good things to be talking about. The goal of this patch
>> was to allow us to pin VCPUs, mainly so we can test space-sharing
>> versus time-sharing of VCPUs. That is, if we have a 4-way SMP box
>> with two domUs, each with four VCPUs, what is the performance
>> difference between each domU getting 2 physical CPUs to run its 4
>> VCPUs versus the domUs having access to all 4 physical CPUs on which
>> to run their 4 VCPUs?

Yeah, sorry about that; I was just commenting based on past experience,
which says that if you wait too long then it's too late. It's better to
get more ideas out sooner than to wait until things are already going
in a direction and it's too hard to change.

Thanks,
-Sam
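Just to make the proposed listing concrete, here is one possible
rendering of the "xm pincpu-show" output Sam mocks up, driven from a
per-CPU record of lock status and domain assignments. The subcommand,
the data structure and the vmid names are all hypothetical; only the
output shape follows the mock-up above:

cpus = [
    # (cpu id, locked?, assigned domains)
    (0, True,  ['vmid1']),
    (1, False, []),
    (2, False, ['vmid1', 'vmid2']),
    (3, True,  ['vmid3']),
]

def pincpu_show(cpus):
    print("%-20s %-4s %-8s %s"
          % ("cpu configuration", "id", "status", "assignment"))
    for cpu, locked, doms in cpus:
        print("%-20s %-4d %-8s %s"
              % ("registered cpu%d" % (cpu + 1),
                 cpu,
                 "lock" if locked else "unlock",
                 ",".join(doms) or "none"))

pincpu_show(cpus)

The more verbose pincpu-show-details listing would presumably be the
same loop over a richer record that also carries socket, core and
hyperthread information.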