Ryan Grimm
2006-Feb-08 21:43 UTC
[Xen-devel] [PATCH] make x86_64 vcpu hotplug work like i386
Hi, i386 vcpu hotplug seems to work reliably but x86_64 does not, and I think I have discovered why. On x86_64, a cpu within a domU can be removed with vcpu-set, but subsequent calls do nothing. After xenwatch_thread grabs the event triggered by the write to the store, it calls the registered handler and never comes back. Eventually, __cpu_die in drivers/xen/core/smpboot.c spins while waiting for the hypervisor to report that the vcpu is down:

        while (HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL)) {
                current->state = TASK_UNINTERRUPTIBLE;
                schedule_timeout(HZ/10);
        }

The critical difference is that play_dead differs between arch/i386/process-xen.c and arch/x86_64/process-xen.c: the i386 version makes a VCPUOP_down call to the hypervisor, while the x86_64 version schedules a SCHEDOP_yield, among other things.

Plopping the i386 version (patch below) into x86_64/process-xen.c makes hotplugging on x86_64 behave like i386. Does anyone know why the x86_64 play_dead function is in its current state?

Thanks,
Ryan

Signed-off-by: Ryan Grimm <grimm@us.ibm.com>

diff -r 974ed9f73641 linux-2.6-xen-sparse/arch/x86_64/kernel/process-xen.c
--- a/linux-2.6-xen-sparse/arch/x86_64/kernel/process-xen.c    Wed Feb 8 16:27:32 2006
+++ b/linux-2.6-xen-sparse/arch/x86_64/kernel/process-xen.c    Wed Feb 8 09:32:46 2006
@@ -53,6 +53,7 @@
 #include <asm/kdebug.h>
 #include <xen/interface/dom0_ops.h>
 #include <xen/interface/physdev.h>
+#include <xen/interface/vcpu.h>
 #include <asm/desc.h>
 #include <asm/proto.h>
 #include <asm/hardirq.h>
@@ -143,22 +144,7 @@
 /* We halt the CPU with physical CPU hotplug */
 static inline void play_dead(void)
 {
-        idle_task_exit();
-        wbinvd();
-        mb();
-        /* Ack it */
-        __get_cpu_var(cpu_state) = CPU_DEAD;
-
-        /* We shouldn't have to disable interrupts while dead, but
-         * some interrupts just don't seem to go away, and this makes
-         * it "work" for testing purposes. */
-        /* Death loop */
-        while (__get_cpu_var(cpu_state) != CPU_UP_PREPARE)
-                HYPERVISOR_sched_op(SCHEDOP_yield, 0);
-
-        local_irq_disable();
-        __flush_tlb_all();
-        cpu_set(smp_processor_id(), cpu_online_map);
+        HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
         local_irq_enable();
 }
 #else
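For reference, the x86_64 play_dead() that results from the patch above would look roughly like this, assembled from the diff's retained context and its added line rather than copied verbatim from the tree:

        #include <xen/interface/vcpu.h>

        /* We halt the CPU with physical CPU hotplug */
        static inline void play_dead(void)
        {
                /* Ask the hypervisor to take this vcpu down, as i386 does.
                 * Once the vcpu is actually down, the VCPUOP_is_up poll in
                 * __cpu_die() (drivers/xen/core/smpboot.c) sees it as offline
                 * instead of spinning forever. */
                HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
                local_irq_enable();
        }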
Keir Fraser
2006-Feb-08 23:29 UTC
Re: [Xen-devel] [PATCH] make x86_64 vcpu hotplug work like i386
On 8 Feb 2006, at 21:43, Ryan Grimm wrote:

> The critical difference is that play_dead differs between
> arch/i386/process-xen.c and arch/x86_64/process-xen.c: the i386 version
> makes a VCPUOP_down call to the hypervisor, while the x86_64 version
> schedules a SCHEDOP_yield, among other things.
>
> Plopping the i386 version (patch below) into x86_64/process-xen.c makes
> hotplugging on x86_64 behave like i386. Does anyone know why the
> x86_64 play_dead function is in its current state?

No one bothered to keep it in sync with the i386 version (and the 'common' hotplug changes in drivers/xen/core/smpboot.c). That would probably be my fault. :-)

I've checked in a fixed-up patch that still calls idle_task_exit(), adds a call to it in i386's play_dead function, and also enables HOTPLUG_CPU in our x86_64 defconfigs.

 -- Keir
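A minimal sketch of the shape Keir describes, with idle_task_exit() retained ahead of the VCPUOP_down hypercall. This is a hedged reconstruction from the mail, not the committed changeset, so the real patch may order things differently or do extra cleanup:

        static inline void play_dead(void)
        {
                idle_task_exit();       /* release the mm the idle task borrowed */
                HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
                local_irq_enable();
        }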
Ryan Grimm
2006-Feb-17 20:05 UTC
Re: [Xen-devel] [PATCH] make x86_64 vcpu hotplug work like i386
On Thu, Feb 16, 2006 at 11:49:00AM +0000, Keir Fraser wrote:

> On 15 Feb 2006, at 23:04, Ryan Grimm wrote:
>
> > This patch allows a domain's vcpus to increase beyond the max (up to
> > CONFIG_NR_CPUS) set at creation time by making 3 changes:
>
> I'd prefer to keep the current Xen mechanisms, but extend xend and/or
> config file formats so that we can distinguish max_vcpus from
> initial_vcpus. Currently the two values are conflated. Then you can set
> max_vcpus as high as you like, but xenstore will tell the guest how
> many CPUs to bring up during boot.

One drawback of this is that the store is not up for dom0's creation. So I guess the two values could not apply to dom0?

> If we want a 'hard limit' check in Xen (kind of like we have a
> per-domain memory limit) to ensure that guests do not sneakily bring up
> CPUs that we didn't ask them to, then we can add that, but it's an
> orthogonal change (i.e., a different patch) to what you are trying to do
> here.

So you're saying that the config file could specify max_vcpus as, say, 8, and initial as, say, 2. Then there would need to be another value inside Xen that would be the hard limit. This could be enforced via another dom0_op, which sets the hard limit, and a vcpu_op, which would tell the domain whether it was allowed to bring up another cpu. How does this approach sound to you?

I think the benefits of the approach I submitted are that it makes a very small change in Xen and brings smpboot.c closer to the way Linux does hot add, in terms of the mappings. Is it the use of DOM0_MAX_VCPUS after domain creation that you find particularly ugly?

Thanks,
Ryan

Sorry for the resend; forgot to CC xen-devel earlier.

>
> -- Keir
>
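A rough sketch of the 'hard limit' idea under discussion: a per-domain vcpu cap set from dom0 and consulted when the guest asks to bring another vcpu up. All names here (the struct, its fields, the helper) are illustrative assumptions, not actual Xen dom0_op/vcpu_op interfaces:

        /* Hypothetical per-domain bookkeeping; not real Xen structures. */
        struct vcpu_limits {
                unsigned int initial_vcpus; /* e.g. 2: brought up at boot, per xenstore */
                unsigned int max_vcpus;     /* e.g. 8: ceiling from the config file     */
                unsigned int hard_limit;    /* set after creation via a dom0_op         */
        };

        /* Would back a vcpu_op telling the guest whether vcpu 'next' may come up. */
        static int vcpu_bringup_allowed(const struct vcpu_limits *l, unsigned int next)
        {
                if (next >= l->hard_limit)
                        return 0;   /* guest may not sneak past what dom0 asked for */
                return next < l->max_vcpus;
        }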