search for: play_dead

Displaying 16 results from an estimated 35 matches for "play_dead".

2006 Feb 08
2
[PATCH] make x86_64 vcpu hotplug work like i386
...mes back. eventually, __cpu_die in drivers/xen/core/smpboot.c spins while waiting for the hypervisor to report that the vcpu is down:

    while (HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL)) {
        current->state = TASK_UNINTERRUPTIBLE;
        schedule_timeout(HZ/10);
    }

the critical difference is that the play_dead implementations in arch/i386/process-xen.c and arch/x86_64/process-xen.c differ: the i386 version makes a VCPUOP_down call to the hypervisor, while the x86_64 version schedules a SCHEDOP_yield among other things. plopping the i386 version (patch below) into x86_64/process-xen.c makes hotplugging in x86_64 behavio...
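For reference, a minimal sketch of the contrast being described; the VCPUOP_down and SCHEDOP_yield names come from the mail, but the function bodies are assumptions, not the literal 2006 patch:

    /* Hedged sketch of the difference described above.
     *
     * x86_64 before the patch: the dying VCPU merely yields forever, so
     * the hypervisor still reports it as up and __cpu_die() spins:
     *
     *     while (1)
     *         HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
     *
     * i386 (and x86_64 with the patch): actually take the VCPU down, so
     * the VCPUOP_is_up poll quoted above terminates.
     */
    static inline void play_dead(void)
    {
        local_irq_disable();
        HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
    }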
2020 Apr 28
0
[PATCH v3 73/75] x86/sev-es: Support CPU offline/online
From: Joerg Roedel <jroedel at suse.de> Add a play_dead handler when running under SEV-ES. This is needed because the hypervisor can't deliver a SIPI request to restart the AP. Instead the kernel has to issue a VMGEXIT to halt the VCPU. When the hypervisor would deliver a SIPI, it wakes up the VCPU instead. Signed-off-by: Joerg Roedel <jroedel...
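A handler like that plausibly takes the following shape; play_dead_common(), sev_es_ap_hlt_loop() and start_cpu0() are names from the surrounding series, but the body is an illustrative sketch rather than the exact patch:

    /* Hedged sketch of a play_dead handler for an SEV-ES guest. */
    static void sev_es_play_dead(void)
    {
        play_dead_common();     /* generic offline bookkeeping */
        /* VMGEXIT to the hypervisor: park this VCPU in an AP HLT loop
         * instead of waiting for a SIPI it can never receive. */
        sev_es_ap_hlt_loop();
        /* Control returns here when the hypervisor wakes the VCPU;
         * re-enter via the secondary-CPU entry point (start_cpu0, the
         * symbol renamed in the patches below). */
        start_cpu0();
    }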
2011 Feb 23
0
[PATCH] Fixing mwait usage when doing cpu offline
Hi, Keir, In debugging the issue "system hang when doing cpu offline", I identified a situation that could cause a deadlock. The scenario is: mwait_idle_with_hint inside play_dead will access a per-cpu variable, which causes a #PF. The #PF handler will use printk, which will schedule a tasklet. Scheduling a tasklet needs per-cpu variables again, which raises another #PF. This series of #PFs produces a deadlock around tasklet_lock. This patch avoids using per_cpu...
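A minimal sketch of the avoidance, assuming a hypothetical this_cpu_mwait_hint() helper: copy everything the idle loop needs into stack variables while per-cpu access is still safe, so the dying CPU never faults:

    /* Hedged sketch, not the actual Xen patch: keep the mwait arguments
     * on the stack so the dying CPU performs no per-cpu reads (and thus
     * takes no #PF) in its final idle loop. */
    void play_dead(void)
    {
        unsigned int eax = this_cpu_mwait_hint();  /* hypothetical helper */

        local_irq_disable();
        for ( ; ; )
            mwait_idle_with_hints(eax, 0);  /* stack-only arguments */
    }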
2020 Jul 24
0
[PATCH v5 71/75] x86/head/64: Rename start_cpu0
...t a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a708107688a2..352311c5d8d1 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -309,15 +309,15 @@ SYM_CODE_END(secondary_startup_64)
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
+ * CPU entry point. It's called from play_dead(). Everything has been set
  * up already except stack. We just set up stack here. Then call
  * start_secondary() via .Ljump_to_C_code.
  */
-SYM_CODE_START(start_cpu0)
+SYM_CODE_START(start_cpu)
 	UNWIND_HINT_EMPTY
 	m...
2020 Aug 24
0
[PATCH v6 72/76] x86/head/64: Rename start_cpu0
...t a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a708107688a2..352311c5d8d1 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -309,15 +309,15 @@ SYM_CODE_END(secondary_startup_64)
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
+ * CPU entry point. It's called from play_dead(). Everything has been set
  * up already except stack. We just set up stack here. Then call
  * start_secondary() via .Ljump_to_C_code.
  */
-SYM_CODE_START(start_cpu0)
+SYM_CODE_START(start_cpu)
 	UNWIND_HINT_EMPTY
 	m...
2007 Apr 18
2
[RFC, PATCH 14/24] i386 Vmi reboot fixes
...86/kernel/process.c
===================================================================
--- linux-2.6.16-rc5.orig/arch/i386/kernel/process.c	2006-03-08 11:38:06.000000000 -0800
+++ linux-2.6.16-rc5/arch/i386/kernel/process.c	2006-03-08 11:38:09.000000000 -0800
@@ -156,7 +156,7 @@ static inline void play_dead(void)
 	 */
 	local_irq_disable();
 	while (1)
-		halt();
+		shutdown_halt();
 }
 #else
 static inline void play_dead(void)
Index: linux-2.6.16-rc5/arch/i386/kernel/reboot.c
===================================================================
--- linux-2.6.16-rc5.orig/arch/i386/kernel/reboot.c 2006-...
2008 Sep 12
0
[PATCH] FLush pending softirqs when cpu offline
...rq before entering stop_machine softirq. So this softirq_A should be handled by the dying cpu after it exits the stop_machine context. However, scheduling to idle leaves no chance for softirqs other than the schedule softirq itself to execute. This patch flushes these softirqs at the beginning of play_dead. Best Regards Haitao Shan
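In other words, something along these lines at the top of play_dead; local_softirq_pending() and do_softirq() are the generic kernel helpers, and the actual patch may use Xen-specific equivalents:

    /* Hedged sketch: drain softirqs stranded by the stop_machine window
     * before this CPU stops scheduling for good. */
    void play_dead(void)
    {
        while (local_softirq_pending())
            do_softirq();

        /* ... existing offline path: disable irqs, park the CPU ... */
    }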
2013 Jun 03
0
[PATCH] xen/smp: Fixup NOHZ per cpu data when onlining an offline CPU.
xen_play_dead is an undead function. When the vCPU is told to go offline, it ends up calling xen_play_dead, wherein it makes the VCPUOP_down hypercall, which offlines the vCPU. However, when the vCPU is onlined again, it resumes execution right after the VCPUOP_down hypercall. That was OK (albeit the API for play_dead assu...
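The control flow being described looks roughly like this (names follow arch/x86/xen/smp.c; the body is a sketch, and the NOHZ fixup itself is only indicated in the comment):

    /* Hedged sketch of the "undead" control flow. */
    static void xen_play_dead(void)
    {
        play_dead_common();
        HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
        /* The hypercall returns only when the vCPU is onlined again, so
         * everything below runs on the resurrected CPU, which must fix
         * up stale per-cpu state (e.g. the NOHZ data) before rejoining
         * the scheduler. */
        cpu_bringup();
    }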
2008 Sep 09
29
[PATCH 1/4] CPU online/offline support in Xen
This patch implements the cpu offline feature. Best Regards Haitao Shan
2012 Mar 09
10
[PATCH 0 of 9] (v2) arm: SMP boot
This patch series implements SMP boot for arch/arm, as far as getting all CPUs up and running the idle loop. Changes from v1:
- moved barriers out of loop in udelay()
- dropped broken GIC change in favour of explanatory comment
- made the increment of ready_cpus atomic (I couldn't move the increment to before signalling the next CPU because the PT switch has to happen between
2020 Jul 24
86
[PATCH v5 00/75] x86: SEV-ES Guest Support
From: Joerg Roedel <jroedel at suse.de> Hi, here is a rebased version of the latest SEV-ES patches. They are now based on latest tip/master instead of upstream Linux and include the necessary changes. Changes to v4 are in particular:
- Moved early IDT setup code to idt.c, because the idt_descr and the idt_table are now static
- This required making the stack protector work early (or
2020 Jul 14
92
[PATCH v4 00/75] x86: SEV-ES Guest Support
From: Joerg Roedel <jroedel at suse.de> Hi, here is the fourth version of the SEV-ES Guest Support patches. I addressed the review comments sent to me for the previous version and rebased the code to v5.8-rc5. The biggest change in this version is the IST handling code for the #VC handler. I adapted the entry code for the #VC handler to the big pile of entry code changes merged into
2020 Feb 11
83
[RFC PATCH 00/62] Linux as SEV-ES Guest Support
Hi, here is the first public post of the patch-set to enable Linux to run under SEV-ES enabled hypervisors. The code is mostly feature-complete, but there are still a couple of bugs to fix. Nevertheless, given the size of the patch-set, I think it is about time to ask for initial feedback on the changes that come with it. To better understand the code, here is a quick explanation of SEV-ES first.
2020 Aug 24
96
[PATCH v6 00/76] x86: SEV-ES Guest Support
From: Joerg Roedel <jroedel at suse.de> Hi, here is the new version of the SEV-ES guest enabling patch-set. It is based on the latest tip/master branch and contains the necessary changes. In particular those are:
- Enabling CR4.FSGSBASE early on supported processors so that early #VC exceptions on APs can be handled.
- Add another patch (patch 1) to fix a KVM frame-size build
2020 Apr 28
116
[PATCH v3 00/75] x86: SEV-ES Guest Support
Hi, here is the next version of changes to enable Linux to run as an SEV-ES guest. The code was rebased to v5.7-rc3 and got a fair number of changes since the last version.

What is SEV-ES
==============

SEV-ES is an acronym for 'Secure Encrypted Virtualization - Encrypted State' and denotes a hardware feature of AMD processors which hides the register state of VCPUs from the hypervisor by
2020 Sep 07
84
[PATCH v7 00/72] x86: SEV-ES Guest Support
From: Joerg Roedel <jroedel at suse.de> Hi, here is a new version of the SEV-ES Guest Support patches for x86. The previous versions can be found as a linked list starting here: https://lore.kernel.org/lkml/20200824085511.7553-1-joro at 8bytes.org/ I updated the patch-set based on the review comments I got and the discussions around it. Another important change is that the early IDT