Justin Weaver
2013-Dec-14 18:15 UTC
[PATCH v2] xen: sched: introduce hard and soft affinity in credit 2 scheduler
Modified function runq_candidate in the credit 2 scheduler so that it
considers hard and soft affinity when choosing the next vCPU from the run
queue to run on the given pCPU.

The function now chooses the vCPU with the most credit that has hard
affinity (and possibly soft affinity) for the given pCPU.  If that vCPU
does not have soft affinity for the pCPU and another vCPU on the run
queue prefers to run there, the latter is chosen instead, as long as it
has at least a certain amount of credit (currently defined as half of
CSCHED_CREDIT_INIT, but more testing is needed to determine the best
value).

This patch depends on Dario Faggioli's patch set "Implement vcpu soft
affinity for credit1", currently at version 5, for vcpu->cpu_hard_affinity
and vcpu->cpu_soft_affinity defined in sched.h.
---
Changes since v1:
 * fixed the check for soft affinity by adding a check for a full
   cpu_soft_affinity mask
---
 xen/common/sched_credit2.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4e68375..d337cdd 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -116,6 +116,10 @@
  * ATM, set so that highest-weight VMs can only run for 10ms
  * before a reset event. */
 #define CSCHED_CREDIT_INIT          MILLISECS(10)
+/* Minimum amount of credit needed for a vcpu with soft
+   affinity for a given cpu to be picked from the run queue
+   over a vcpu with more credit but only hard affinity. */
+#define CSCHED_MIN_CREDIT_PREFER_SA MILLISECS(5)
 /* Carryover: How much "extra" credit may be carried over after
  * a reset. */
 #define CSCHED_CARRYOVER_MAX        CSCHED_MIN_TIMER
@@ -1615,6 +1619,7 @@ runq_candidate(struct csched_runqueue_data *rqd,
 {
     struct list_head *iter;
     struct csched_vcpu *snext = NULL;
+    bool_t found_snext_w_hard_affinity = 0;
 
     /* Default to current if runnable, idle otherwise */
     if ( vcpu_runnable(scurr->vcpu) )
@@ -1626,6 +1631,11 @@ runq_candidate(struct csched_runqueue_data *rqd,
     {
         struct csched_vcpu * svc = list_entry(iter, struct csched_vcpu, runq_elem);
 
+        /* If this is not allowed to run on this processor based on its
+         * hard affinity mask, continue to the next vcpu on the run queue */
+        if ( !cpumask_test_cpu(cpu, &svc->cpu_hard_affinity) )
+            continue;
+
         /* If this is on a different processor, don't pull it unless
          * its credit is at least CSCHED_MIGRATE_RESIST higher. */
         if ( svc->vcpu->processor != cpu
@@ -1633,13 +1643,29 @@ runq_candidate(struct csched_runqueue_data *rqd,
             continue;
 
         /* If the next one on the list has more credit than current
-         * (or idle, if current is not runnable), choose it. */
-        if ( svc->credit > snext->credit )
+         * (or idle, if current is not runnable), choose it. Only need
+         * to do this once since run queue is in credit order. */
+        if ( !found_snext_w_hard_affinity
+             && svc->credit > snext->credit )
+        {
+            snext = svc;
+            found_snext_w_hard_affinity = 1;
+        }
+
+        /* Is there enough credit left in this vcpu to continue
+         * considering soft affinity? */
+        if ( svc->credit < CSCHED_MIN_CREDIT_PREFER_SA )
+            break;
+
+        /* Does this vcpu prefer to run on this cpu? */
+        if ( !cpumask_full(svc->cpu_soft_affinity)
+             && cpumask_test_cpu(cpu, &svc->cpu_soft_affinity) )
             snext = svc;
+        else
+            continue;
 
         /* In any case, if we got this far, break. */
         break;
-
     }
 
     return snext;
--
1.7.10.4
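
(Not part of the patch -- a stand-alone illustrative sketch of the selection
policy described in the commit message. The toy struct, names and credit
values below are made up for the example; the migrate-resistance check and
the cpumask_full() test are left out, and snext simply starts empty instead
of defaulting to the currently running vcpu.)

/* Toy model of the runq_candidate() policy above: the "queue" is an
 * array sorted by descending credit, hard affinity is mandatory, and a
 * vcpu with soft affinity for the target cpu is preferred over a
 * higher-credit hard-only vcpu as long as it still has at least
 * MIN_CREDIT_PREFER_SA credit. */
#include <stdio.h>
#include <stdbool.h>

#define MIN_CREDIT_PREFER_SA 5   /* stands in for CSCHED_MIN_CREDIT_PREFER_SA */

struct toy_vcpu {
    const char *name;
    int credit;          /* queue is sorted by descending credit */
    bool hard_affine;    /* may this vcpu run on the target cpu at all? */
    bool soft_affine;    /* does it prefer the target cpu? */
};

static const struct toy_vcpu *
pick(const struct toy_vcpu *q, int n)
{
    const struct toy_vcpu *snext = NULL;

    for ( int i = 0; i < n; i++ )
    {
        const struct toy_vcpu *svc = &q[i];

        if ( !svc->hard_affine )
            continue;            /* hard affinity is a prerequisite */

        if ( snext == NULL )
            snext = svc;         /* highest-credit hard-affine fallback */

        if ( svc->credit < MIN_CREDIT_PREFER_SA )
            break;               /* too little credit left to keep looking */

        if ( svc->soft_affine )
        {
            snext = svc;         /* prefers this cpu: pick it instead */
            break;
        }
    }
    return snext;
}

int main(void)
{
    /* v2 has soft affinity and enough credit, so it is chosen over the
     * higher-credit, hard-only v1; v0 is skipped outright. */
    const struct toy_vcpu q[] = {
        { "v0", 10, false, false },
        { "v1",  9, true,  false },
        { "v2",  7, true,  true  },
        { "v3",  2, true,  true  },
    };
    const struct toy_vcpu *s = pick(q, 4);

    printf("picked %s\n", s ? s->name : "(none)");  /* prints "picked v2" */
    return 0;
}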