Displaying 6 results from an estimated 6 matches for "vcpu_set_affinity".
2007 Jun 27
10
[PATCH 6/10] Allow vcpu to pause self
Add a self-pause ability, which is required by vcpu0/dom0 when
running on an AP. This can't be satisfied by the existing interface,
since the new flag also serves as a sync point.
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
diff -r d5315422dbc8 xen/common/domain.c
--- a/xen/common/domain.c Mon May 14 18:35:31 2007 -0400
+++ b/xen/common/domain.c Mon May 14 20:21:04 2007 -0400
@@
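The hunk itself is truncated in this result. A rough sketch of the idea, with the helper and flag names being assumptions rather than the patch's actual identifiers: a vcpu cannot vcpu_pause() itself, because vcpu_sleep_sync() would wait forever for the caller to be descheduled, so a flag that doubles as the sync point is raised and the scheduler is invoked instead.

void vcpu_pause_self(struct vcpu *v)
{
    ASSERT(v == current);
    atomic_inc(&v->pause_count);               /* keep the vcpu paused */
    set_bit(_VPF_pause_sync, &v->pause_flags); /* flag doubles as sync point */
    raise_softirq(SCHEDULE_SOFTIRQ);           /* deschedule ourselves */
}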
2012 Sep 18
6
[PATCH 2/5] Xen/MCE: vMCE injection
...{
- cpumask_copy(d->vcpu[0]->cpu_affinity_tmp,
- d->vcpu[0]->cpu_affinity);
- mce_printk(MCE_VERBOSE, "MCE: CPU%d set affinity, old %d\n",
- cpu, d->vcpu[0]->processor);
- vcpu_set_affinity(d->vcpu[0], cpumask_of(cpu));
- vcpu_kick(d->vcpu[0]);
- }
- else
- {
- mce_printk(MCE_VERBOSE,
- "MCE: Kill PV guest with No MCE handler\n");
- domain_crash(d);
- }
+...
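The branch elided above temporarily pins the guest's vcpu0 to the CPU that observed the machine check before injecting the vMCE. A simplified sketch of that save/pin/kick pattern (the helper name is made up here; error handling elided):

static void vmce_pin_vcpu0(struct domain *d, unsigned int cpu)
{
    struct vcpu *v = d->vcpu[0];

    cpumask_copy(v->cpu_affinity_tmp, v->cpu_affinity); /* remember old mask */
    vcpu_set_affinity(v, cpumask_of(cpu));              /* pin to the #MC CPU */
    vcpu_kick(v);                                       /* get it scheduled there */
}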
2008 Jun 16
8
Vcpu allocation for a newly created domU
Hi all,
I am confused about the way a newly created domain is
allocated vcpus.
Initially, during dom0 creation, alloc_vcpu is called to create vcpu
structs for all the available CPUs, and these are assigned to dom0. But that
is not the case for domU creation.
1. So how will dom0 relinquish/share vcpus to/with a newly created domU?
Does this happen as part of the shared_info page mapping?
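For comparison, dom0's vcpus are allocated while the hypervisor builds dom0, roughly as sketched below (variable names are assumptions; the exact code differs across releases). A domU's vcpus are allocated by the same alloc_vcpu() path, but only when the toolstack issues XEN_DOMCTL_max_vcpus for the new domain, so dom0 never relinquishes or shares its own vcpu structs.

    /* sketch: one vcpu per configured dom0 vcpu, spread over online CPUs */
    for ( i = 1; i < opt_dom0_max_vcpus; i++ )
        alloc_vcpu(dom0, i, i % num_online_cpus());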
2007 Aug 30
0
[PATCH][Retry 1] 1/4: cpufreq/PowerNow! in Xen: Xen timer changes
...|| ((d->domain_id == 0) && opt_dom0_vcpus_pin) )
+ if ( is_idle_domain(d) || ((d->domain_id == 0) && ( opt_cpufreq || opt_dom0_vcpus_pin) ) )
v->cpu_affinity = cpumask_of_cpu(processor);
else
cpus_setall(v->cpu_affinity);
@@ -254,7 +256,7 @@ int vcpu_set_affinity(struct vcpu *v, cp
{
cpumask_t online_affinity;
- if ( (v->domain->domain_id == 0) && opt_dom0_vcpus_pin )
+ if ( (v->domain->domain_id == 0) && ( opt_dom0_vcpus_pin || opt_cpufreq ) )
return -EINVAL;
cpus_and(online_affinity, *affinity, cpu...
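The excerpt cuts off mid-check; the remainder of that test rejects any mask that intersects no online CPU. A plausible reconstruction of the function as patched (the tail that actually updates the affinity is still elided in this result):

int vcpu_set_affinity(struct vcpu *v, cpumask_t *affinity)
{
    cpumask_t online_affinity;

    /* dom0 vcpus stay pinned when dom0_vcpus_pin or cpufreq is in effect */
    if ( (v->domain->domain_id == 0) && (opt_dom0_vcpus_pin || opt_cpufreq) )
        return -EINVAL;

    /* refuse an affinity mask with no online CPU in it */
    cpus_and(online_affinity, *affinity, cpu_online_map);
    if ( cpus_empty(online_affinity) )
        return -EINVAL;

    /* ... update v->cpu_affinity and force a reschedule ... */
    return 0;
}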
2013 Mar 27
2
[PATCH] x86/S3: Restore broken vcpu affinity on resume (v3)
.../* Bitmask of CPUs which are holding onto this VCPU's state. */
cpumask_var_t vcpu_dirty_cpumask;
@@ -697,6 +702,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c);
void vcpu_force_reschedule(struct vcpu *v);
int cpu_disable_scheduler(unsigned int cpu);
int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity);
+void restore_vcpu_affinity(struct domain *d);
void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
uint64_t get_cpu_idle_time(unsigned int cpu);
--
1.7.9.5
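Only the header change is visible in this result. A plausible sketch of what restore_vcpu_affinity() does on S3 resume, assuming saved-mask fields along the lines of this series (cpu_affinity_saved and affinity_broken are assumptions here):

void restore_vcpu_affinity(struct domain *d)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
    {
        vcpu_pause(v);
        if ( v->affinity_broken )
        {
            /* undo the pinning applied when CPUs went down for S3 */
            cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
            v->affinity_broken = 0;
        }
        vcpu_unpause(v);
    }
}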
2011 Nov 08
48
Need help with fixing the Xen waitqueue feature
The patch 'mem_event: use wait queue when ring is full' I just sent out
makes use of the waitqueue feature. There are two issues I hit with the
change applied:
I think I got the logic right, and in my testing vcpu->pause_count drops
to zero in p2m_mem_paging_resume(). But for some reason the vcpu does
not make progress after the first wakeup. In my debugging there is one
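For context, vcpu pausing is reference counted, and a paused vcpu can only run again once pause_count has dropped back to zero and a wake is issued. The core pattern, simplified from xen/common/domain.c:

void vcpu_pause_nosync(struct vcpu *v)
{
    atomic_inc(&v->pause_count);   /* one more reason to stay paused */
    vcpu_sleep_nosync(v);
}

void vcpu_unpause(struct vcpu *v)
{
    if ( atomic_dec_and_test(&v->pause_count) )
        vcpu_wake(v);              /* last unpause wakes the vcpu */
}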