In order to prefer node local memory for a domain, the numa node locality
info must be built according to the cpus belonging to the cpupool of the
domain.

Changes since v1:
- switch to dynamically allocated cpumasks in domain_update_node_affinity()
- introduce and use common macros for selecting cpupool based cpumasks

Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

 xen/common/cpupool.c       |    9 +++++++++
 xen/common/domain.c        |   24 ++++++++++++++++++++----
 xen/common/domctl.c        |    2 +-
 xen/common/sched_credit.c  |    6 ++----
 xen/common/sched_credit2.c |    2 --
 xen/common/sched_sedf.c    |    8 +++-----
 xen/common/schedule.c      |   16 +++++-----------
 xen/include/xen/sched-if.h |    4 ++++
 8 files changed, 44 insertions(+), 27 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
>>> On 23.01.12 at 13:12, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> - introduce and use common macros for selecting cpupool based cpumasks

I had hoped that you would do this as a separate prerequisite patch,
but now that it's in here I think there's no need to break it up.

>@@ -333,23 +334,38 @@ struct domain *domain_create(
> 
> void domain_update_node_affinity(struct domain *d)
> {
>-    cpumask_t cpumask;
>+    cpumask_var_t cpumask;
>+    cpumask_var_t online_affinity;
>+    const cpumask_t *online;
>     nodemask_t nodemask = NODE_MASK_NONE;
>     struct vcpu *v;
>     unsigned int node;
> 
>-    cpumask_clear(&cpumask);
>+    if (!zalloc_cpumask_var(&cpumask))
>+        return;
>+    if (!alloc_cpumask_var(&online_affinity))

This doesn't get freed at the end of the function. Also, with the rest
of the function being formatted properly, you ought to insert spaces
after the initial and before the final parentheses of the if-s.

Jan

>+    {
>+        free_cpumask_var(cpumask);
>+        return;
>+    }
>+
>+    online = cpupool_online_cpumask(d->cpupool);
>     spin_lock(&d->node_affinity_lock);
> 
>     for_each_vcpu ( d, v )
>-        cpumask_or(&cpumask, &cpumask, v->cpu_affinity);
>+    {
>+        cpumask_and(online_affinity, v->cpu_affinity, online);
>+        cpumask_or(cpumask, cpumask, online_affinity);
>+    }
> 
>     for_each_online_node ( node )
>-        if ( cpumask_intersects(&node_to_cpumask(node), &cpumask) )
>+        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
>             node_set(node, nodemask);
> 
>     d->node_affinity = nodemask;
>     spin_unlock(&d->node_affinity_lock);
>+
>+    free_cpumask_var(cpumask);
> }
George Dunlap
2012-Jan-23 16:08 UTC
Re: [PATCH] Reflect cpupool in numa node affinity (v2)
On Mon, Jan 23, 2012 at 12:55 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 23.01.12 at 13:12, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>> - introduce and use common macros for selecting cpupool based cpumasks
>
> I had hoped that you would do this as a separate prerequisite patch,
> but now that it's in here I think there's no need to break it up.

But it would be a lot easier to check the logic if the two patches were
separate; both now, and when someone in the future is doing archaeology
to figure out what's going on. I won't NACK a patch that has them
together, but I would strongly encourage you to make them two patches. :-)

 -George

>
>>@@ -333,23 +334,38 @@ struct domain *domain_create(
>> 
>> void domain_update_node_affinity(struct domain *d)
>> {
>>-    cpumask_t cpumask;
>>+    cpumask_var_t cpumask;
>>+    cpumask_var_t online_affinity;
>>+    const cpumask_t *online;
>>     nodemask_t nodemask = NODE_MASK_NONE;
>>     struct vcpu *v;
>>     unsigned int node;
>> 
>>-    cpumask_clear(&cpumask);
>>+    if (!zalloc_cpumask_var(&cpumask))
>>+        return;
>>+    if (!alloc_cpumask_var(&online_affinity))
>
> This doesn't get freed at the end of the function. Also, with the rest
> of the function being formatted properly, you ought to insert spaces
> after the initial and before the final parentheses of the if-s.
>
> Jan
>
>>+    {
>>+        free_cpumask_var(cpumask);
>>+        return;
>>+    }
>>+
>>+    online = cpupool_online_cpumask(d->cpupool);
>>     spin_lock(&d->node_affinity_lock);
>> 
>>     for_each_vcpu ( d, v )
>>-        cpumask_or(&cpumask, &cpumask, v->cpu_affinity);
>>+    {
>>+        cpumask_and(online_affinity, v->cpu_affinity, online);
>>+        cpumask_or(cpumask, cpumask, online_affinity);
>>+    }
>> 
>>     for_each_online_node ( node )
>>-        if ( cpumask_intersects(&node_to_cpumask(node), &cpumask) )
>>+        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
>>             node_set(node, nodemask);
>> 
>>     d->node_affinity = nodemask;
>>     spin_unlock(&d->node_affinity_lock);
>>+
>>+    free_cpumask_var(cpumask);
>> }