Jeremy/Keir,

I'm trying to add vMCA injection to pv_ops dom0. Since we currently have no virtual IST stack support, I plan to use the kernel stack for vMCE. But Andi told me that this method has an issue if the MCE is injected before the syscall handler has switched to the kernel stack. After checking the code, this seems to apply to pv_ops dom0, since undo_xen_syscall switches to the user-space stack first (see the code below). I'm not sure whether we really need to switch to the user-space stack, or whether we can simply place the user stack pointer in oldrsp and not switch the stack at all, since the Xen hypervisor has already put us on the kernel stack.

Another option is to add a vIST stack, but that requires changes to the dom0/Xen interface and is a bit complex.

I checked the 2.6.18 kernel and it seems to have no such issue, because the syscall entry in arch/x86_64/kernel/entry-xen.S uses the kernel stack directly. (But vMCE injection may still have an issue there, because it uses zeroentry.)

BTW, Jeremy, it seems vNMI support is not included in pvops dom0; will it be supported in the future?

Thanks
Yunhong Jiang

.macro undo_xen_syscall
	mov 0*8(%rsp), %rcx
	mov 1*8(%rsp), %r11
	mov 5*8(%rsp), %rsp
.endm

/* Normal 64-bit system call target */
ENTRY(xen_syscall_target)
	undo_xen_syscall
	jmp system_call_after_swapgs
ENDPROC(xen_syscall_target)
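For reference, native x86-64 avoids this whole class of problem by giving #MC its own IST stack, so the handler always lands on a known-good stack even if the machine check arrives in the syscall entry path before the kernel-stack switch. A minimal sketch of the native wiring, which is essentially what the vIST option above would have to emulate (the wrapper name is illustrative; the real call sits in trap_init()):

#include <asm/desc.h>
#include <asm/traps.h>

/*
 * Sketch: on native x86-64, vector 18 (#MC) is routed through its own
 * IST stack, so the handler never runs on a half-switched stack.
 * A paravirtual "vIST" would need an equivalent guarantee from Xen.
 */
static void __init wire_up_mce_ist(void)
{
	set_intr_gate_ist(18, &machine_check, MCE_STACK);
}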
Jeremy Fitzhardinge
2009-Dec-18 21:21 UTC
[Xen-devel] Re: One question to IST stack for PV guest
On 12/18/2009 01:05 AM, Jiang, Yunhong wrote:
> Jeremy/Keir, I'm trying to add vMCA injection to pv_ops dom0. Since we
> currently have no virtual IST stack support, I plan to use the kernel
> stack for vMCE. But Andi told me that this method has an issue if the
> MCE is injected before the syscall handler has switched to the kernel
> stack. After checking the code, this seems to apply to pv_ops dom0,
> since undo_xen_syscall switches to the user-space stack first (see the
> code below).

What are the requirements here? Are these events delivered to dom0 to indicate that something needs attention on the machine, or are they delivered synchronously to whatever domain is currently running to say that something bad needs immediate attention?

> I'm not sure whether we really need to switch to the user-space stack,
> or whether we can simply place the user stack pointer in oldrsp and not
> switch the stack at all, since the Xen hypervisor has already put us on
> the kernel stack.
>
> Another option is to add a vIST stack, but that requires changes to the
> dom0/Xen interface and is a bit complex.

What about making the call a bit like the failsafe callback, which always uses the kernel stack, to deliver these exceptions? That could reshape the kernel stack to conform to the normal stack frame and then call the usual arch/x86 handlers.

> I checked the 2.6.18 kernel and it seems to have no such issue, because
> the syscall entry in arch/x86_64/kernel/entry-xen.S uses the kernel
> stack directly. (But vMCE injection may still have an issue there,
> because it uses zeroentry.)
>
> BTW, Jeremy, it seems vNMI support is not included in pvops dom0; will
> it be supported in the future?

There's been no call for it so far, so I hadn't worried about it much. I was thinking it might be useful as a debug tool, but I don't know what it gets used for normally.

    J
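A rough sketch of the dom0 side of that suggestion: assume Xen has already put the vcpu on the kernel stack, failsafe-callback style, and reshaped the frame into a normal struct pt_regs, so the callback can hand straight off to the standard handler. The callback name is hypothetical; do_machine_check() is the real arch/x86 entry point (declared in asm/mce.h):

#include <asm/mce.h>
#include <asm/ptrace.h>

/*
 * Hypothetical Xen MCE callback.  Assumes the hypervisor delivered the
 * event on the kernel stack with a normal exception frame already
 * built, so we can reuse the usual arch/x86 machine-check handler.
 */
void xen_mce_callback(struct pt_regs *regs)
{
	do_machine_check(regs, 0);
}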
> What are the requirements here? Are these events delivered to dom0 to
> indicate that something needs attention on the machine, or are they
> delivered synchronously to whatever domain is currently running to say
> that something bad needs immediate attention?

Both can happen (and also some more, like "something happened somewhere, just FYI"). They are all distinguished by different status bits.

-Andi
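A simplified sketch of the kind of test those status bits drive. MCG_STATUS.RIPV roughly means "it is safe to restart at the interrupted instruction", so a clear bit marks the synchronous, must-act-now classes; real handlers decode considerably more state than this:

#include <linux/types.h>
#include <asm/mce.h>
#include <asm/msr.h>

/*
 * Simplified: if MCG_STATUS.RIPV is clear, execution cannot simply
 * resume at the interrupted RIP, i.e. this is one of the "synchronous"
 * classes; otherwise it may just be an FYI-style notification.
 */
static bool mce_is_synchronous(void)
{
	u64 mcgstatus;

	rdmsrl(MSR_IA32_MCG_STATUS, mcgstatus);
	return !(mcgstatus & MCG_STATUS_RIPV);
}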
Jeremy Fitzhardinge
2009-Dec-18 21:50 UTC
[Xen-devel] Re: One question to IST stack for PV guest
On 12/18/2009 01:43 PM, Kleen, Andi wrote:
>> What are the requirements here? Are these events delivered to dom0 to
>> indicate that something needs attention on the machine, or are they
>> delivered synchronously to whatever domain is currently running to say
>> that something bad needs immediate attention?
>
> Both can happen (and also some more, like "something happened
> somewhere, just FYI"). They are all distinguished by different status
> bits.

If they're not synchronous, then why not use a normal virq event channel?

    J
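For the asynchronous classes, a virq binding might look roughly like the following. bind_virq_to_irqhandler() is the real pvops API (xen/events.h); the handler is hypothetical, and VIRQ_MCA here stands for Xen's MCE telemetry virq, which may not be present in every tree's copy of the interface headers:

#include <linux/interrupt.h>
#include <xen/events.h>
#include <xen/interface/xen.h>
#include <asm/xen/hypervisor.h>

/* Hypothetical handler: fetch MCE telemetry from the hypervisor
 * (e.g. via a mc hypercall) and log it; details omitted. */
static irqreturn_t xen_mce_interrupt(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int __init xen_mce_virq_init(void)
{
	int irq;

	if (!xen_initial_domain())
		return -ENODEV;

	/* Bind the per-domain MCE virq on vcpu 0. */
	irq = bind_virq_to_irqhandler(VIRQ_MCA, 0, xen_mce_interrupt,
				      0, "mce", NULL);
	return irq < 0 ? irq : 0;
}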
>> Both can happen (and also some more, like "something happened
>> somewhere, just FYI"). They are all distinguished by different status
>> bits.
>
> If they're not synchronous, then why not use a normal virq
> event channel?

As I wrote, some classes of MCEs are "synchronous".

-Andi
Jeremy Fitzhardinge
2009-Dec-18 22:26 UTC
[Xen-devel] Re: One question to IST stack for PV guest
On 12/18/2009 02:07 PM, Kleen, Andi wrote:
>>> Both can happen (and also some more, like "something happened
>>> somewhere, just FYI"). They are all distinguished by different
>>> status bits.
>>
>> If they're not synchronous, then why not use a normal virq
>> event channel?
>
> As I wrote, some classes of MCEs are "synchronous".

Are they things a guest domain can do anything useful with? Or should Xen just handle them internally (and then perhaps tell dom0 about it later if it makes sense)?

    J
> Are they things a guest domain can do anything useful with?

Yes they are.

> Or should Xen just handle them internally (and then perhaps tell dom0
> about it later if it makes sense)?

When it's corrupted memory and the memory is owned by a particular domain, that domain has to know about it, unless you want to kill it outright. If you let it know, it can do better than just committing suicide. The current mainline kernels don't handle all situations yet, but will soonish (.34ish, hopefully).

-Andi
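The recovery path Andi refers to is the hwpoison work: given the pfn of a corrupted page, the kernel can isolate it instead of panicking. A sketch of the hand-off, assuming the machine address reported by Xen has already been translated to the right guest pfn; memory_failure() is the real entry point in mm/memory-failure.c (as of roughly 2.6.32), and the wrapper is hypothetical:

#include <linux/mm.h>

/*
 * Sketch: once the hypervisor tells a domain which of its pages is
 * poisoned, the domain can try to recover via hwpoison rather than
 * die.  Translating the reported machine address to this pfn is
 * hand-waved here; 18 is the #MC trap number.
 */
static void handle_poisoned_page(unsigned long pfn)
{
	memory_failure(pfn, 18);
}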
Ian Campbell
2009-Dec-19 09:24 UTC
Re: [Xen-devel] Re: One question to IST stack for PV guest
On Fri, 2009-12-18 at 21:21 +0000, Jeremy Fitzhardinge wrote:
>> BTW, Jeremy, it seems vNMI support is not included in pvops dom0;
>> will it be supported in the future?
>
> There's been no call for it so far, so I hadn't worried about it much.
> I was thinking it might be useful as a debug tool, but I don't know
> what it gets used for normally.

SysRq-L (show all CPUs) uses it via arch_trigger_all_cpu_backtrace(), which is a bit of a problem even in a domU, because it goes through apic->send_IPI_all(NMI_VECTOR) and ends up with a "BUG: unable to handle kernel paging request" in default_send_IPI_mask_logical.

I started adding a new smp_op yesterday to allow this function to be overridden (WIP appended), but having some sort of NMI support would be useful to reduce the differences from native on the receiving end, instead of using smp_call_function.

Ian.


diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index 1e79678..00ef5f7 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -60,6 +60,8 @@ struct smp_ops {
 
 	void (*send_call_func_ipi)(const struct cpumask *mask);
 	void (*send_call_func_single_ipi)(int cpu);
+
+	void (*send_nmi_ipi)(void);
 };
 
 /* Globals due to paravirt */
@@ -126,6 +128,11 @@ static inline void arch_send_call_function_ipi_mask(const struct cpumask *mask)
 	smp_ops.send_call_func_ipi(mask);
 }
 
+static inline void smp_send_nmi_ipi(void)
+{
+	smp_ops.send_nmi_ipi();
+}
+
 void cpu_disable_common(void);
 void native_smp_prepare_boot_cpu(void);
 void native_smp_prepare_cpus(unsigned int max_cpus);
@@ -139,6 +146,8 @@ void play_dead_common(void);
 void native_send_call_func_ipi(const struct cpumask *mask);
 void native_send_call_func_single_ipi(int cpu);
 
+void native_send_nmi_ipi(void);
+
 void smp_store_cpu_info(int id);
 #define cpu_physical_id(cpu)	per_cpu(x86_cpu_to_apicid, cpu)
 
diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
index 7ff61d6..40c1414 100644
--- a/arch/x86/kernel/apic/nmi.c
+++ b/arch/x86/kernel/apic/nmi.c
@@ -561,7 +561,7 @@ void arch_trigger_all_cpu_backtrace(void)
 	cpumask_copy(&backtrace_mask, cpu_online_mask);
 
 	printk(KERN_INFO "sending NMI to all CPUs:\n");
-	apic->send_IPI_all(NMI_VECTOR);
+	smp_send_nmi_ipi();
 
 	/* Wait for up to 10 seconds for all CPUs to do the backtrace */
 	for (i = 0; i < 10 * 1000; i++) {
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index ec1de97..f53437f 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -146,6 +146,11 @@ void native_send_call_func_ipi(const struct cpumask *mask)
 	free_cpumask_var(allbutself);
 }
 
+void native_send_nmi_ipi(void)
+{
+	apic->send_IPI_all(NMI_VECTOR);
+}
+
 /*
  * this function calls the 'stop' function on all other CPUs in the system.
  */
@@ -236,5 +241,7 @@ struct smp_ops smp_ops = {
 
 	.send_call_func_ipi = native_send_call_func_ipi,
 	.send_call_func_single_ipi = native_send_call_func_single_ipi,
+
+	.send_nmi_ipi = native_send_nmi_ipi,
 };
 EXPORT_SYMBOL_GPL(smp_ops);
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 360f8d8..986f372 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -20,6 +20,7 @@
 #include <asm/desc.h>
 #include <asm/pgtable.h>
 #include <asm/cpu.h>
+#include <asm/nmi.h>
 
 #include <xen/interface/xen.h>
 #include <xen/interface/vcpu.h>
@@ -456,6 +457,16 @@ static irqreturn_t xen_call_function_single_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void xen_nmi_ipi_func(void *info)
+{
+	nmi_watchdog_tick(task_pt_regs(current), 0/*reason*/);
+}
+
+static void xen_send_nmi_ipi(void)
+{
+	smp_call_function(xen_nmi_ipi_func, NULL, 0);
+}
+
 static const struct smp_ops xen_smp_ops __initdata = {
 	.smp_prepare_boot_cpu = xen_smp_prepare_boot_cpu,
 	.smp_prepare_cpus = xen_smp_prepare_cpus,
@@ -471,6 +482,8 @@ static const struct smp_ops xen_smp_ops __initdata = {
 
 	.send_call_func_ipi = xen_smp_send_call_function_ipi,
 	.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi,
+
+	.send_nmi_ipi = xen_send_nmi_ipi,
 };
 
 void __init xen_smp_init(void)
Jiang, Yunhong
2009-Dec-19 14:24 UTC
[Xen-devel] RE: One question to IST stack for PV guest
>-----Original Message-----
>From: Jeremy Fitzhardinge [mailto:jeremy@goop.org]
>Sent: Saturday, December 19, 2009 5:22 AM
>To: Jiang, Yunhong
>Cc: Keir Fraser; Jan Beulich; xen-devel@lists.xensource.com; Kleen, Andi
>Subject: Re: One question to IST stack for PV guest
>
>What are the requirements here? Are these events delivered to dom0 to
>indicate that something needs attention on the machine, or are they
>delivered synchronously to whatever domain is currently running to say
>that something bad needs immediate attention?

Whatever domain is impacted, as Andi Kleen pointed out, and it can be a synchronous event, depending on the error type.

>What about making the call a bit like the failsafe callback, which
>always uses the kernel stack, to deliver these exceptions? That could
>reshape the kernel stack to conform to the normal stack frame and then
>call the usual arch/x86 handlers.

The issue comes from the syscall path, not from the vMCE/vNMI exception itself. The vMCE can be injected into the guest at any time; that means it may be injected while the guest is at the syscall entry point but before the stack has been switched to the kernel stack. Consider the following situation:

1) A syscall happens from a dom0 application into the dom0 kernel (in a 64-bit environment).

2) The syscall is first trapped by the hypervisor, which creates a bounce frame to re-inject the syscall into the kernel (note that this frame is on the kernel stack) and marks the guest as being in kernel mode.

3) In the current dom0, the syscall entry (i.e. xen_syscall_target) first runs undo_xen_syscall, which switches from the kernel stack to the user stack; later, system_call_after_swapgs switches back to the kernel stack again.

4) An MCE happens in hardware before system_call_after_swapgs, and the hypervisor is invoked. After the hypervisor handles the MCE, it decides it needs to inject a virtual MCE into the guest immediately. (As said, sometimes the vMCE must be injected synchronously.)

5) The hypervisor checks the guest state, finds it is in kernel mode, and therefore uses the guest's current stack to inject the vMCE. However, at this point the current stack is in fact the user stack. That means the MCE handler in dom0 would run on the user stack, which causes a lot of problems.

>There's been no call for it so far, so I hadn't worried about it much.
>I was thinking it might be useful as a debug tool, but I don't know what
>it gets used for normally.

I remember Jan stated that "Dom0 can get hardware generated NMIs, and any domain can get software injected ones", but I don't have much background on it. (See http://lists.xensource.com/archives/html/xen-devel/2009-11/msg01203.html please.)

--jyh
Jiang, Yunhong
2009-Dec-19 14:41 UTC
RE: [Xen-devel] Re: One question to IST stack for PV guest
Can SysRq-L be used to check a dead-locked CPU's state? And if we have no NMI support, we may lose that part.

--jyh

>-----Original Message-----
>From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
>Sent: Saturday, December 19, 2009 5:25 PM
>To: Jeremy Fitzhardinge
>Cc: Jiang, Yunhong; Kleen, Andi; xen-devel@lists.xensource.com; Keir Fraser; Jan Beulich
>Subject: Re: [Xen-devel] Re: One question to IST stack for PV guest
>
>On Fri, 2009-12-18 at 21:21 +0000, Jeremy Fitzhardinge wrote:
>> There's been no call for it so far, so I hadn't worried about it much.
>> I was thinking it might be useful as a debug tool, but I don't know
>> what it gets used for normally.
>
>SysRq-L (show all CPUs) uses it via arch_trigger_all_cpu_backtrace(),
>which is a bit of a problem even in a domU, because it goes through
>apic->send_IPI_all(NMI_VECTOR) and ends up with a "BUG: unable to handle
>kernel paging request" in default_send_IPI_mask_logical.
>
>I started adding a new smp_op yesterday to allow this function to be
>overridden (WIP appended), but having some sort of NMI support would be
>useful to reduce the differences from native on the receiving end,
>instead of using smp_call_function.
>
>Ian.
>
>[WIP patch snipped; see Ian's earlier message above]
Ian Campbell
2009-Dec-23 11:42 UTC
RE: [Xen-devel] Re: One question to IST stack for PV guest
On Sat, 2009-12-19 at 14:41 +0000, Jiang, Yunhong wrote:
> Can SysRq-L be used to check a dead-locked CPU's state?

As I understand it, yes: it can be used to debug CPUs hung with interrupts disabled.

> And if we have no NMI support, we may lose that part.

I think so. It's still useful for other classes of hang (i.e. those where interrupts are enabled).

Ian.

> --jyh
>
> [quoted thread and WIP patch snipped]