This patch series can be applied on top of Ian's patch series "arm: host SMP
support". The first patch fixes an issue on SMP boards that shows up once
smp_call_function is implemented; the second patch implements the function.

Julien Grall (2):
  xen/arm: save/restore irq flags in gic_disable_cpu
  xen/arm: implement smp_call_function

 xen/arch/arm/gic.c        |  9 ++++--
 xen/arch/arm/smp.c        | 73 ++++++++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/gic.h |  1 +
 xen/include/asm-arm/smp.h |  2 ++
 4 files changed, 82 insertions(+), 3 deletions(-)

--
1.7.10.4
Julien Grall
2013-Apr-15 16:50 UTC
[PATCH 1/2] xen/arm: save/restore irq flags in gic_disable_cpu
On SMP boards, gic_disable_cpu is called by an SGI call function when Xen is
halting. In this specific case, interrupts are disabled.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/arm/gic.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index aa179a9..c9f64f1 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -419,10 +419,12 @@ void __cpuinit gic_init_secondary_cpu(void)
 /* Shut down the per-CPU GIC interface */
 void gic_disable_cpu(void)
 {
-    spin_lock_irq(&gic.lock);
+    unsigned long flags;
+
+    spin_lock_irqsave(&gic.lock, flags);
     gic_cpu_disable();
     gic_hyp_disable();
-    spin_unlock_irq(&gic.lock);
+    spin_unlock_irqrestore(&gic.lock, flags);
 }

 void gic_route_ppis(void)
--
1.7.10.4
Julien Grall
2013-Apr-15 16:50 UTC
[PATCH 2/2] xen/arm: implement smp_call_function
Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/arm/gic.c        |  3 ++
 xen/arch/arm/smp.c        | 73 ++++++++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/gic.h |  1 +
 xen/include/asm-arm/smp.h |  2 ++
 4 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index c9f64f1..9fab92b 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -658,6 +658,9 @@ static void do_sgi(struct cpu_user_regs *regs, int othercpu, enum gic_sgi sgi)
     case GIC_SGI_DUMP_STATE:
         dump_execstate(regs);
         break;
+    case GIC_SGI_CALL_FUNCTION:
+        smp_call_function_interrupt();
+        break;
     default:
         panic("Unhandled SGI %d on CPU%d\n", sgi, smp_processor_id());
         break;
diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
index a902d84..a6d2476 100644
--- a/xen/arch/arm/smp.c
+++ b/xen/arch/arm/smp.c
@@ -4,6 +4,8 @@
 #include <asm/cpregs.h>
 #include <asm/page.h>
 #include <asm/gic.h>
+#include <xen/spinlock.h>
+#include <xen/smp.h>

 void flush_tlb_mask(const cpumask_t *mask)
 {
@@ -16,7 +18,76 @@ void smp_call_function(
     void *info,
     int wait)
 {
-    panic("%s not implmented\n", __func__);
+    cpumask_t allbutself;
+
+    cpumask_andnot(&allbutself, &cpu_online_map,
+                   cpumask_of(smp_processor_id()));
+    on_selected_cpus(&allbutself, func, info, wait);
+}
+
+/*
+ * Structure and data for smp_call_function()/on_selected_cpus().
+ */
+static DEFINE_SPINLOCK(call_lock);
+static struct call_data_struct {
+    void (*func) (void *info);
+    void *info;
+    int wait;
+    cpumask_t selected;
+} call_data;
+
+void on_selected_cpus(
+    const cpumask_t *selected,
+    void (*func) (void *info),
+    void *info,
+    int wait)
+{
+    unsigned int nr_cpus;
+
+    ASSERT(local_irq_is_enabled());
+
+    spin_lock(&call_lock);
+
+    cpumask_copy(&call_data.selected, selected);
+
+    nr_cpus = cpumask_weight(&call_data.selected);
+    if ( nr_cpus == 0 )
+        goto out;
+
+    call_data.func = func;
+    call_data.info = info;
+    call_data.wait = wait;
+
+    send_SGI_mask(&call_data.selected, GIC_SGI_CALL_FUNCTION);
+
+    while ( !cpumask_empty(&call_data.selected) )
+        cpu_relax();
+
+out:
+    spin_unlock(&call_lock);
+}
+
+void smp_call_function_interrupt(void)
+{
+    void (*func)(void *info) = call_data.func;
+    void *info = call_data.info;
+    unsigned int cpu = smp_processor_id();
+
+    if ( !cpumask_test_cpu(cpu, &call_data.selected) )
+        return;
+
+    if ( call_data.wait )
+    {
+        (*func)(info);
+        mb();
+        cpumask_clear_cpu(cpu, &call_data.selected);
+    }
+    else
+    {
+        mb();
+        cpumask_clear_cpu(cpu, &call_data.selected);
+        (*func)(info);
+    }
 }

 void smp_send_event_check_mask(const cpumask_t *mask)
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 4f2c8b8..47354dd 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -173,6 +173,7 @@ extern void gic_restore_state(struct vcpu *v);
 enum gic_sgi {
     GIC_SGI_EVENT_CHECK = 0,
     GIC_SGI_DUMP_STATE = 1,
+    GIC_SGI_CALL_FUNCTION = 2,
 };
 extern void send_SGI_mask(const cpumask_t *cpumask, enum gic_sgi sgi);
 extern void send_SGI_one(unsigned int cpu, enum gic_sgi sgi);
diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
index 1c2746b..866c2f7 100644
--- a/xen/include/asm-arm/smp.h
+++ b/xen/include/asm-arm/smp.h
@@ -24,6 +24,8 @@ make_cpus_ready(unsigned int max_cpus, unsigned long boot_phys_offset);

 extern void smp_clear_cpu_maps (void);
 extern int smp_get_max_cpus (void);
+extern void smp_call_function_interrupt(void);
+
 #endif
 /*
  * Local variables:
--
1.7.10.4
Ian Campbell
2013-Apr-16 09:12 UTC
Re: [PATCH 1/2] xen/arm: save/restore irq flags in gic_disable_cpu
On Mon, 2013-04-15 at 17:50 +0100, Julien Grall wrote:
> On SMP boards, gic_disable_cpu is called by an SGI call function when Xen is
> halting. In this specific case, interrupts are disabled.

In the only other existing caller (__cpu_disable) interrupts are also
disabled, and I'd argue that it would be invalid to call this function
with interrupts enabled due to its nature. I think it should start with
an assert to that effect.

In principle I think that would make spin_lock() OK to use rather than
the _irq or _irqsave variants? But the lockdep analyser (and/or reality)
may not agree? Or is the issue here not so much that IRQs are disabled
but that this new caller is in interrupt context?

> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  xen/arch/arm/gic.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index aa179a9..c9f64f1 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -419,10 +419,12 @@ void __cpuinit gic_init_secondary_cpu(void)
>  /* Shut down the per-CPU GIC interface */
>  void gic_disable_cpu(void)
>  {
> -    spin_lock_irq(&gic.lock);
> +    unsigned long flags;
> +
> +    spin_lock_irqsave(&gic.lock, flags);
>      gic_cpu_disable();

So we apparently have gic_cpu_disable and gic_disable_cpu, nice!

Since the only caller of gic_cpu_disable is gic_disable_cpu I think we
should just inline it. Or call it gic_(cpu_)disable_gicc.

>      gic_hyp_disable();

Likewise this is the only caller of this function, so either inline it
or call it gic_(cpu_)disable_gich?

> -    spin_unlock_irq(&gic.lock);
> +    spin_unlock_irqrestore(&gic.lock, flags);
>  }
>
>  void gic_route_ppis(void)
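To make the suggestion above concrete, a minimal sketch of gic_disable_cpu()
along those lines (an assert that interrupts are already off, plus a plain
spin_lock) could look like the following; this is illustrative only, not the
patch as eventually committed:

    /* Sketch only: assumes every caller (__cpu_disable, the halting SGI
     * handler) runs with interrupts already disabled, as argued above. */
    void gic_disable_cpu(void)
    {
        ASSERT(!local_irq_is_enabled());

        spin_lock(&gic.lock);
        gic_cpu_disable();   /* shut down the CPU interface (GICC) */
        gic_hyp_disable();   /* shut down the hypervisor interface (GICH) */
        spin_unlock(&gic.lock);
    }

With the assertion in place, the plain lock documents the calling convention
instead of papering over it.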
On Mon, 2013-04-15 at 17:50 +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@citrix.com>

This code is mostly copied from x86, perhaps we can just make it common
code?

If we are going to duplicate it, can we at least make on_selected_cpus
behave semantically the same? The x86 version handles masks which
include the local processor explicitly; since interrupts are disabled I
don't think we can rely on taking the interrupt locally, so as it
stands the code will deadlock if you pass in the current cpu?

If we fix that (with an explicit call, as on x86) then we need to watch
out for the local CPU taking the interrupt when we reenable them, which
is outside the locking etc...

> [...]
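For illustration, the x86-like semantics Ian asks about (servicing a local
CPU in the mask with a direct call rather than relying on the SGI) could be
sketched as below. This is a hypothetical variant built on the names from
the patch above (call_data, call_lock, send_SGI_mask), not the actual x86
implementation:

    /* Hypothetical variant of on_selected_cpus() that handles a mask
     * containing the local CPU with a direct call, in the spirit of the
     * x86 behaviour described above.  Illustrative sketch only. */
    void on_selected_cpus(
        const cpumask_t *selected,
        void (*func) (void *info),
        void *info,
        int wait)
    {
        unsigned int cpu = smp_processor_id();
        cpumask_t others;

        spin_lock(&call_lock);

        cpumask_copy(&call_data.selected, selected);
        if ( cpumask_empty(&call_data.selected) )
            goto out;

        call_data.func = func;
        call_data.info = info;
        call_data.wait = wait;

        /* Kick every selected CPU except ourselves via SGI. */
        cpumask_andnot(&others, &call_data.selected, cpumask_of(cpu));
        if ( !cpumask_empty(&others) )
            send_SGI_mask(&others, GIC_SGI_CALL_FUNCTION);

        /* Service the local CPU directly: no reliance on taking our own
         * interrupt, so the caller's local IRQ state no longer matters
         * for this particular deadlock concern. */
        if ( cpumask_test_cpu(cpu, &call_data.selected) )
            smp_call_function_interrupt();

        while ( !cpumask_empty(&call_data.selected) )
            cpu_relax();

    out:
        spin_unlock(&call_lock);
    }

Whether the direct local call runs before or after kicking the other CPUs is
a design choice; doing it after lets the remote CPUs start working while the
local function runs.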
Julien Grall
2013-Apr-16 10:39 UTC
Re: [PATCH 1/2] xen/arm: save/restore irq flags in gic_disable_cpu
On 04/16/2013 10:12 AM, Ian Campbell wrote:

> On Mon, 2013-04-15 at 17:50 +0100, Julien Grall wrote:
>> On SMP boards, gic_disable_cpu is called by an SGI call function when Xen is
>> halting. In this specific case, interrupts are disabled.
>
> In the only other existing caller (__cpu_disable) interrupts are also
> disabled, and I'd argue that it would be invalid to call this function
> with interrupts enabled due to its nature. I think it should start with
> an assert to that effect.

Indeed, I didn't pay attention that interrupts are disabled in
__cpu_disable. I will use spin_lock and add an assert.

>> Signed-off-by: Julien Grall <julien.grall@citrix.com>
>> ---
>>  xen/arch/arm/gic.c | 6 ++++--
>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index aa179a9..c9f64f1 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -419,10 +419,12 @@ void __cpuinit gic_init_secondary_cpu(void)
>>  /* Shut down the per-CPU GIC interface */
>>  void gic_disable_cpu(void)
>>  {
>> -    spin_lock_irq(&gic.lock);
>> +    unsigned long flags;
>> +
>> +    spin_lock_irqsave(&gic.lock, flags);
>>      gic_cpu_disable();
>
> So we apparently have gic_cpu_disable and gic_disable_cpu, nice!
>
> Since the only caller of gic_cpu_disable is gic_disable_cpu I think we
> should just inline it. Or call it gic_(cpu_)disable_gicc.
>
>>      gic_hyp_disable();
>
> Likewise this is the only caller of this function, so either inline it
> or call it gic_(cpu_)disable_gich?

If we modify the name, we also need to modify gic_{hyp,cpu,dist}_init.
I would prefer to leave it as is or rename all the functions. It's
clearer to have separate functions (init and disable) for each part of
the GIC.

Julien
On 04/16/2013 10:24 AM, Ian Campbell wrote:

> On Mon, 2013-04-15 at 17:50 +0100, Julien Grall wrote:
>> Signed-off-by: Julien Grall <julien.grall@citrix.com>
>
> This code is mostly copied from x86, perhaps we can just make it common
> code?

Indeed.

> If we are going to duplicate it, can we at least make on_selected_cpus
> behave semantically the same? The x86 version handles masks which
> include the local processor explicitly; since interrupts are disabled I
> don't think we can rely on taking the interrupt locally, so as it
> stands the code will deadlock if you pass in the current cpu?

At the beginning of on_selected_cpus, we check that interrupts are
enabled. Even when the SGI is sent, interrupts are not disabled, so the
code shouldn't deadlock.

> If we fix that (with an explicit call, as on x86) then we need to watch
> out for the local CPU taking the interrupt when we reenable them, which
> is outside the locking etc...
>
>> [...]
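As a usage note, the caller this series cares about is the halt path
mentioned in patch 1: the cross-CPU call is issued while local interrupts
are still enabled (so the ASSERT in on_selected_cpus() holds), and only
afterwards does the initiating CPU disable its own interrupts. A
hypothetical caller, with halt_this_cpu as a stand-in name rather than the
tree's actual code, might look like:

    /* Hypothetical halt path; halt_this_cpu() and this machine_halt()
     * body are stand-ins to show the intended ordering only. */
    static void halt_this_cpu(void *arg)
    {
        local_irq_disable();
        for ( ;; )
            asm volatile("wfi" ::: "memory");   /* park this CPU */
    }

    void machine_halt(void)
    {
        /* IRQs still enabled here, so on_selected_cpus()'s ASSERT holds. */
        smp_call_function(halt_this_cpu, NULL, 0);  /* all other online CPUs */
        local_irq_disable();
        halt_this_cpu(NULL);                        /* finally this CPU too */
    }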
Ian Campbell
2013-Apr-16 11:02 UTC
Re: [PATCH 1/2] xen/arm: save/restore irq flags in gic_disable_cpu
On Tue, 2013-04-16 at 11:39 +0100, Julien Grall wrote:
> On 04/16/2013 10:12 AM, Ian Campbell wrote:
>
> > On Mon, 2013-04-15 at 17:50 +0100, Julien Grall wrote:
> >> On SMP boards, gic_disable_cpu is called by an SGI call function when Xen is
> >> halting. In this specific case, interrupts are disabled.
> >
> > In the only other existing caller (__cpu_disable) interrupts are also
> > disabled, and I'd argue that it would be invalid to call this function
> > with interrupts enabled due to its nature. I think it should start with
> > an assert to that effect.
>
> Indeed, I didn't pay attention that interrupts are disabled in
> __cpu_disable. I will use spin_lock and add an assert.

Thanks.

> >> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> >> ---
> >>  xen/arch/arm/gic.c | 6 ++++--
> >>  1 file changed, 4 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> >> index aa179a9..c9f64f1 100644
> >> --- a/xen/arch/arm/gic.c
> >> +++ b/xen/arch/arm/gic.c
> >> @@ -419,10 +419,12 @@ void __cpuinit gic_init_secondary_cpu(void)
> >>  /* Shut down the per-CPU GIC interface */
> >>  void gic_disable_cpu(void)
> >>  {
> >> -    spin_lock_irq(&gic.lock);
> >> +    unsigned long flags;
> >> +
> >> +    spin_lock_irqsave(&gic.lock, flags);
> >>      gic_cpu_disable();
> >
> > So we apparently have gic_cpu_disable and gic_disable_cpu, nice!
> >
> > Since the only caller of gic_cpu_disable is gic_disable_cpu I think we
> > should just inline it. Or call it gic_(cpu_)disable_gicc.
> >
> >>      gic_hyp_disable();
> >
> > Likewise this is the only caller of this function, so either inline it
> > or call it gic_(cpu_)disable_gich?
>
> If we modify the name, we also need to modify gic_{hyp,cpu,dist}_init.
> I would prefer to leave it as is or rename all the functions. It's
> clearer to have separate functions (init and disable) for each part of
> the GIC.

Agreed, but only if the names are actually clear, which at least for the
disable case they aren't really. I suppose at least one of them is
static, which mitigates a bit...

Ian
On Tue, 2013-04-16 at 11:54 +0100, Julien Grall wrote:
> On 04/16/2013 10:24 AM, Ian Campbell wrote:
>
> > On Mon, 2013-04-15 at 17:50 +0100, Julien Grall wrote:
> >> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> >
> > This code is mostly copied from x86, perhaps we can just make it common
> > code?
>
> Indeed.
>
> > If we are going to duplicate it, can we at least make on_selected_cpus
> > behave semantically the same? The x86 version handles masks which
> > include the local processor explicitly; since interrupts are disabled I
> > don't think we can rely on taking the interrupt locally, so as it
> > stands the code will deadlock if you pass in the current cpu?
>
> At the beginning of on_selected_cpus, we check that interrupts are
> enabled. Even when the SGI is sent, interrupts are not disabled, so
> the code shouldn't deadlock.

Ooops, right, I read the ASSERT wrongly and/or had the previous patch
still in my head...

Ian.