A detailed description follows in the commit message of each patch.

Stefano Stabellini (2):
  xen: introduce vcpu_block
  xen/arm: trap guest WFI

 xen/arch/arm/gic.c          |    8 +++++++-
 xen/arch/arm/traps.c        |   17 ++++++++++++++++-
 xen/arch/arm/vgic.c         |    1 +
 xen/common/schedule.c       |   13 ++++++++-----
 xen/include/asm-arm/event.h |   41 ++++++++++++++++++++++++++++++++---------
 xen/include/asm-arm/gic.h   |    1 +
 xen/include/xen/sched.h     |    1 +
 7 files changed, 66 insertions(+), 16 deletions(-)

-- 
1.7.2.5
Rename do_block to vcpu_block.

Move the call to local_event_delivery_enable out of vcpu_block, to a new
static function called vcpu_block_enable_events.

Use vcpu_block_enable_events instead of do_block throughout schedule.c.


Changes in v9:
- return void from vcpu_block and vcpu_block_enable_events;
- make vcpu_block_enable_events static.

Changes in v8:
- rename do_block to vcpu_block;
- move local_event_delivery_enable out of vcpu_block to
  vcpu_block_enable_events.

Changes in v7:
- introduce an event_delivery_enable parameter to make the call to
  local_event_delivery_enable conditional;
- export do_block as is.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/schedule.c   |   13 ++++++++-----
 xen/include/xen/sched.h |    1 +
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c1cd3d0..e526602 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -677,11 +677,10 @@ int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
 }
 
 /* Block the currently-executing domain until a pertinent event occurs. */
-static long do_block(void)
+void vcpu_block(void)
 {
     struct vcpu *v = current;
 
-    local_event_delivery_enable();
     set_bit(_VPF_blocked, &v->pause_flags);
 
     /* Check for events /after/ blocking: avoids wakeup waiting race. */
@@ -694,8 +693,12 @@ static long do_block(void)
         TRACE_2D(TRC_SCHED_BLOCK, v->domain->domain_id, v->vcpu_id);
         raise_softirq(SCHEDULE_SOFTIRQ);
     }
+}
 
-    return 0;
+static void vcpu_block_enable_events(void)
+{
+    local_event_delivery_enable();
+    vcpu_block();
 }
 
 static long do_poll(struct sched_poll *sched_poll)
@@ -870,7 +873,7 @@ long do_sched_op_compat(int cmd, unsigned long arg)
 
     case SCHEDOP_block:
     {
-        ret = do_block();
+        vcpu_block_enable_events();
         break;
     }
 
@@ -907,7 +910,7 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case SCHEDOP_block:
     {
-        ret = do_block();
+        vcpu_block_enable_events();
         break;
     }
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ad971d2..beadc42 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -690,6 +690,7 @@ static inline int vcpu_runnable(struct vcpu *v)
              atomic_read(&v->domain->pause_count));
 }
 
+void vcpu_block(void);
 void vcpu_unblock(struct vcpu *v);
 void vcpu_pause(struct vcpu *v);
 void vcpu_pause_nosync(struct vcpu *v);
-- 
1.7.2.5
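A note for context: after this refactoring, vcpu_block() no longer touches
event delivery, so arch code that must preserve the guest's interrupt mask
can call it directly, while SCHEDOP_block keeps its old unmask-then-block
semantics via vcpu_block_enable_events(). The sketch below is illustrative
only; the handler name is hypothetical, and local_events_need_delivery_nomask()
is introduced by the next patch, which contains the real ARM user.

    /* Hypothetical arch-side caller of the newly exported vcpu_block():
     * block the vcpu on a trapped idle instruction without enabling
     * event delivery. */
    static void handle_trapped_idle_insn(void)
    {
        /* Sets _VPF_blocked and raises SCHEDULE_SOFTIRQ; internally
         * re-checks maskable events to avoid the wakeup waiting race. */
        vcpu_block();

        /* An idle instruction like WFI must also be woken by interrupts
         * the guest has masked, so re-check ignoring the mask and undo
         * the block if anything is pending. */
        if ( local_events_need_delivery_nomask() )
            vcpu_unblock(current);
    }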
Trap guest WFI, block the guest VCPU unless it has pending interrupts
(WFI should return if any interrupts arrive, even if interrupts are
disabled).
Wake the guest vcpu when a new interrupt for it arrives.

Introduce gic_events_need_delivery: it checks whether the current vcpu
has any interrupts that need to be delivered, either in the LRs or in
lr_pending.

Properly implement local_events_need_delivery: check whether the guest
disabled interrupts; if they aren't disabled, return positive if
gic_events_need_delivery returns positive. Otherwise we still need to
check whether evtchn_upcall_pending is set but no
VGIC_IRQ_EVTCHN_CALLBACK irqs are in flight: it could be the race
described by commit db453468d92369e7182663fb13e14d83ec4ce456 "arm:
vgic: fix race between evtchn upcall and evtchnop_send". If that is the
case it means that an event needs to be injected.
If all these tests are negative, no events need to be delivered.

Implement local_event_delivery_enable by clearing PSR_IRQ_MASK.


Changes in v9:
- rename _local_events_need_delivery to
  local_events_need_delivery_nomask;
- remove evtchn_upcall_mask checks and comments.

Changes in v8:
- remove the mask check from _local_events_need_delivery, add an
  unconditional mask check in local_events_need_delivery.

Changes in v7:
- clear PSR_IRQ_MASK in the implementation of
  local_event_delivery_enable.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c          |    8 +++++++-
 xen/arch/arm/traps.c        |   17 ++++++++++++++++-
 xen/arch/arm/vgic.c         |    1 +
 xen/include/asm-arm/event.h |   41 ++++++++++++++++++++++++++++++++---------
 xen/include/asm-arm/gic.h   |    1 +
 5 files changed, 57 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0d1ab5a..5e83c50 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -589,13 +589,19 @@ static void gic_inject_irq_stop(void)
     }
 }
 
+int gic_events_need_delivery(void)
+{
+    return (!list_empty(&current->arch.vgic.lr_pending) ||
+            this_cpu(lr_mask));
+}
+
 void gic_inject(void)
 {
     if ( vcpu_info(current, evtchn_upcall_pending) )
         vgic_vcpu_inject_irq(current, VGIC_IRQ_EVTCHN_CALLBACK, 1);
 
     gic_restore_pending_irqs(current);
-    if (!this_cpu(lr_mask))
+    if (!gic_events_need_delivery())
         gic_inject_irq_stop();
     else
         gic_inject_irq_start();
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b7487b7..da53675 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,7 +29,9 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
+#include <public/sched.h>
 #include <public/xen.h>
+#include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 
@@ -59,7 +61,7 @@ void __cpuinit init_traps(void)
     WRITE_SYSREG((vaddr_t)hyp_traps_vector, VBAR_EL2);
 
     /* Setup hypervisor traps */
-    WRITE_SYSREG(HCR_PTW|HCR_BSU_OUTER|HCR_AMO|HCR_IMO|HCR_VM, HCR_EL2);
+    WRITE_SYSREG(HCR_PTW|HCR_BSU_OUTER|HCR_AMO|HCR_IMO|HCR_VM|HCR_TWI, HCR_EL2);
     isb();
 }
 
@@ -931,6 +933,19 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
     union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
 
     switch (hsr.ec) {
+    case HSR_EC_WFI_WFE:
+        /* at the moment we only trap WFI */
+        vcpu_block();
+        /* The ARM spec declares that even if local irqs are masked in
+         * the CPSR register, an irq should wake up a cpu from WFI anyway.
+         * For this reason we need to check for irqs that need delivery,
+         * ignoring the CPSR register, *after* calling SCHEDOP_block to
+         * avoid races with vgic_vcpu_inject_irq.
+         */
+        if ( local_events_need_delivery_nomask() )
+            vcpu_unblock(current);
+        regs->pc += hsr.len ? 4 : 2;
+        break;
     case HSR_EC_CP15_32:
         if ( ! is_pv32_domain(current->domain) )
             goto bad_trap;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 4d8da02..b30da78 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -618,6 +618,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
 out:
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     /* we have a new higher priority irq, inject it into the guest */
+    vcpu_unblock(v);
 }
 
 /*
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index 10f58af..276fef5 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -1,27 +1,50 @@
 #ifndef __ASM_EVENT_H__
 #define __ASM_EVENT_H__
 
+#include <asm/gic.h>
+#include <asm/domain.h>
+
 void vcpu_kick(struct vcpu *v);
 void vcpu_mark_events_pending(struct vcpu *v);
 
+static inline int local_events_need_delivery_nomask(void)
+{
+    struct pending_irq *p = irq_to_pending(current, VGIC_IRQ_EVTCHN_CALLBACK);
+
+    /* XXX: if the first interrupt has already been delivered, we should
+     * check whether any higher priority interrupts are in the
+     * lr_pending queue or in the LR registers and return 1 only in that
+     * case.
+     * In practice the guest interrupt handler should run with
+     * interrupts disabled so this shouldn't be a problem in the general
+     * case.
+     */
+    if ( gic_events_need_delivery() )
+        return 1;
+
+    if ( vcpu_info(current, evtchn_upcall_pending) &&
+         list_empty(&p->inflight) )
+        return 1;
+
+    return 0;
+}
+
 static inline int local_events_need_delivery(void)
 {
-    /* TODO
-     * return (vcpu_info(v, evtchn_upcall_pending) &&
-               !vcpu_info(v, evtchn_upcall_mask)); */
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+    /* guest IRQs are masked */
+    if ( (regs->cpsr & PSR_IRQ_MASK) )
         return 0;
+
+    return local_events_need_delivery_nomask();
 }
 
 int local_event_delivery_is_enabled(void);
 
-static inline void local_event_delivery_disable(void)
-{
-    /* TODO current->vcpu_info->evtchn_upcall_mask = 1; */
-}
-
 static inline void local_event_delivery_enable(void)
 {
-    /* TODO current->vcpu_info->evtchn_upcall_mask = 0; */
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    regs->cpsr &= ~PSR_IRQ_MASK;
 }
 
 /* No arch specific virq definition now. Default to global. */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 24c0d5c..92711d5 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -143,6 +143,7 @@ extern void gic_route_ppis(void);
 extern void gic_route_spis(void);
 
 extern void gic_inject(void);
+extern int gic_events_need_delivery(void);
 
 extern void __cpuinit init_maintenance_interrupt(void);
 extern void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
-- 
1.7.2.5
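A note for context: the ordering requirement in the new HSR_EC_WFI_WFE case
is subtle, so here is the interleaving the in-code comment guards against,
sketched as a C comment. This is illustrative only and not part of the patch.

    /*
     * Why the mask-ignoring check must come *after* vcpu_block():
     *
     *   pCPU running the vcpu                 another pCPU
     *   ---------------------                 ------------
     *   traps guest WFI
     *                                         vgic_vcpu_inject_irq(v, irq, ...)
     *                                           -> vcpu_unblock(v)
     *                                              (no-op: v not blocked yet)
     *   vcpu_block()          (sets _VPF_blocked)
     *   local_events_need_delivery_nomask()
     *     -> sees the freshly injected irq
     *   vcpu_unblock(current) (undoes the block; the irq is not lost)
     *
     * If the check ran before vcpu_block(), the injection could land in
     * the window between the check and the block, and the vcpu would go
     * to sleep with an interrupt pending.
     */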
ping?

On Tue, 23 Apr 2013, Stefano Stabellini wrote:
> A detailed description follows in the commit message of each patch.
> 
> Stefano Stabellini (2):
>   xen: introduce vcpu_block
>   xen/arm: trap guest WFI
> 
>  xen/arch/arm/gic.c          |    8 +++++++-
>  xen/arch/arm/traps.c        |   17 ++++++++++++++++-
>  xen/arch/arm/vgic.c         |    1 +
>  xen/common/schedule.c       |   13 ++++++++-----
>  xen/include/asm-arm/event.h |   41 ++++++++++++++++++++++++++++++++---------
>  xen/include/asm-arm/gic.h   |    1 +
>  xen/include/xen/sched.h     |    1 +
>  7 files changed, 66 insertions(+), 16 deletions(-)
> 
> -- 
> 1.7.2.5
> 
On Sun, 2013-04-28 at 15:32 +0100, Stefano Stabellini wrote:
> ping?

You should CC Keir and Jan, since their input is required on the first
patch.

> On Tue, 23 Apr 2013, Stefano Stabellini wrote:
> > A detailed description follows in the commit message of each patch.
> > 
> > Stefano Stabellini (2):
> >   xen: introduce vcpu_block
> >   xen/arm: trap guest WFI
> > 
> >  xen/arch/arm/gic.c          |    8 +++++++-
> >  xen/arch/arm/traps.c        |   17 ++++++++++++++++-
> >  xen/arch/arm/vgic.c         |    1 +
> >  xen/common/schedule.c       |   13 ++++++++-----
> >  xen/include/asm-arm/event.h |   41 ++++++++++++++++++++++++++++++++---------
> >  xen/include/asm-arm/gic.h   |    1 +
> >  xen/include/xen/sched.h     |    1 +
> >  7 files changed, 66 insertions(+), 16 deletions(-)
> > 
> > -- 
> > 1.7.2.5
> > 
On Tue, 2013-04-23 at 12:19 +0100, Stefano Stabellini wrote:
> diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> index 10f58af..276fef5 100644
> --- a/xen/include/asm-arm/event.h
> +++ b/xen/include/asm-arm/event.h
> @@ -1,27 +1,50 @@
>  #ifndef __ASM_EVENT_H__
>  #define __ASM_EVENT_H__
>  
> +#include <asm/gic.h>
> +#include <asm/domain.h>
> +
>  void vcpu_kick(struct vcpu *v);
>  void vcpu_mark_events_pending(struct vcpu *v);
>  
> +static inline int local_events_need_delivery_nomask(void)
> +{
> +    struct pending_irq *p = irq_to_pending(current, VGIC_IRQ_EVTCHN_CALLBACK);
> +
> +    /* XXX: if the first interrupt has already been delivered, we should
> +     * check whether any higher priority interrupts are in the
> +     * lr_pending queue or in the LR registers and return 1 only in that
> +     * case.
> +     * In practice the guest interrupt handler should run with
> +     * interrupts disabled so this shouldn't be a problem in the general
> +     * case.

I think in practice this would be the problem of the code injecting the
actual upcall, rather than the code here, which determines whether such
an upcall is required. Perhaps this comment needs to be in whichever
function handles the actual injection of interrupts?

Also, rather than requiring the guest interrupt handler to run with
interrupts disabled, I think the code as it stands simply requires that
the guest not mind that the upcall doesn't get preempted by a higher
priority interrupt. IOW the upcall will complete and then the higher
priority interrupt will be injected. To the guest this just looks like
the interrupt arrived a bit later, so while it is something to fix
eventually it's not really a big problem until people start wanting to
do more real-timeish stuff?

Ian.
forgot to CC the Xen maintainers on this one

On Tue, 23 Apr 2013, Stefano Stabellini wrote:
> Rename do_block to vcpu_block.
> 
> Move the call to local_event_delivery_enable out of vcpu_block, to a new
> static function called vcpu_block_enable_events.
> 
> Use vcpu_block_enable_events instead of do_block throughout schedule.c.
> 
> 
> Changes in v9:
> - return void from vcpu_block and vcpu_block_enable_events;
> - make vcpu_block_enable_events static.
> 
> Changes in v8:
> - rename do_block to vcpu_block;
> - move local_event_delivery_enable out of vcpu_block to
>   vcpu_block_enable_events.
> 
> Changes in v7:
> - introduce an event_delivery_enable parameter to make the call to
>   local_event_delivery_enable conditional;
> - export do_block as is.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/common/schedule.c   |   13 ++++++++-----
>  xen/include/xen/sched.h |    1 +
>  2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index c1cd3d0..e526602 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -677,11 +677,10 @@ int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
>  }
>  
>  /* Block the currently-executing domain until a pertinent event occurs. */
> -static long do_block(void)
> +void vcpu_block(void)
>  {
>      struct vcpu *v = current;
>  
> -    local_event_delivery_enable();
>      set_bit(_VPF_blocked, &v->pause_flags);
>  
>      /* Check for events /after/ blocking: avoids wakeup waiting race. */
> @@ -694,8 +693,12 @@ static long do_block(void)
>          TRACE_2D(TRC_SCHED_BLOCK, v->domain->domain_id, v->vcpu_id);
>          raise_softirq(SCHEDULE_SOFTIRQ);
>      }
> +}
>  
> -    return 0;
> +static void vcpu_block_enable_events(void)
> +{
> +    local_event_delivery_enable();
> +    vcpu_block();
>  }
>  
>  static long do_poll(struct sched_poll *sched_poll)
> @@ -870,7 +873,7 @@ long do_sched_op_compat(int cmd, unsigned long arg)
>  
>      case SCHEDOP_block:
>      {
> -        ret = do_block();
> +        vcpu_block_enable_events();
>          break;
>      }
>  
> @@ -907,7 +910,7 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>      case SCHEDOP_block:
>      {
> -        ret = do_block();
> +        vcpu_block_enable_events();
>          break;
>      }
>  
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index ad971d2..beadc42 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -690,6 +690,7 @@ static inline int vcpu_runnable(struct vcpu *v)
>               atomic_read(&v->domain->pause_count));
>  }
>  
> +void vcpu_block(void);
>  void vcpu_unblock(struct vcpu *v);
>  void vcpu_pause(struct vcpu *v);
>  void vcpu_pause_nosync(struct vcpu *v);
> -- 
> 1.7.2.5
> 
On 29/04/2013 11:15, "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
wrote:

> forgot to CC the Xen maintainers on this one

Acked-by: Keir Fraser <keir@xen.org>

> On Tue, 23 Apr 2013, Stefano Stabellini wrote:
>> Rename do_block to vcpu_block.
>> 
>> Move the call to local_event_delivery_enable out of vcpu_block, to a new
>> static function called vcpu_block_enable_events.
>> 
>> Use vcpu_block_enable_events instead of do_block throughout schedule.c.
>> 
>> 
>> Changes in v9:
>> - return void from vcpu_block and vcpu_block_enable_events;
>> - make vcpu_block_enable_events static.
>> 
>> Changes in v8:
>> - rename do_block to vcpu_block;
>> - move local_event_delivery_enable out of vcpu_block to
>>   vcpu_block_enable_events.
>> 
>> Changes in v7:
>> - introduce an event_delivery_enable parameter to make the call to
>>   local_event_delivery_enable conditional;
>> - export do_block as is.
>> 
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> ---
>>  xen/common/schedule.c   |   13 ++++++++-----
>>  xen/include/xen/sched.h |    1 +
>>  2 files changed, 9 insertions(+), 5 deletions(-)
>> 
>> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
>> index c1cd3d0..e526602 100644
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -677,11 +677,10 @@ int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
>>  }
>>  
>>  /* Block the currently-executing domain until a pertinent event occurs. */
>> -static long do_block(void)
>> +void vcpu_block(void)
>>  {
>>      struct vcpu *v = current;
>>  
>> -    local_event_delivery_enable();
>>      set_bit(_VPF_blocked, &v->pause_flags);
>>  
>>      /* Check for events /after/ blocking: avoids wakeup waiting race. */
>> @@ -694,8 +693,12 @@ static long do_block(void)
>>          TRACE_2D(TRC_SCHED_BLOCK, v->domain->domain_id, v->vcpu_id);
>>          raise_softirq(SCHEDULE_SOFTIRQ);
>>      }
>> +}
>>  
>> -    return 0;
>> +static void vcpu_block_enable_events(void)
>> +{
>> +    local_event_delivery_enable();
>> +    vcpu_block();
>>  }
>>  
>>  static long do_poll(struct sched_poll *sched_poll)
>> @@ -870,7 +873,7 @@ long do_sched_op_compat(int cmd, unsigned long arg)
>>  
>>      case SCHEDOP_block:
>>      {
>> -        ret = do_block();
>> +        vcpu_block_enable_events();
>>          break;
>>      }
>>  
>> @@ -907,7 +910,7 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>  
>>      case SCHEDOP_block:
>>      {
>> -        ret = do_block();
>> +        vcpu_block_enable_events();
>>          break;
>>      }
>>  
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index ad971d2..beadc42 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -690,6 +690,7 @@ static inline int vcpu_runnable(struct vcpu *v)
>>               atomic_read(&v->domain->pause_count));
>>  }
>>  
>> +void vcpu_block(void);
>>  void vcpu_unblock(struct vcpu *v);
>>  void vcpu_pause(struct vcpu *v);
>>  void vcpu_pause_nosync(struct vcpu *v);
>> -- 
>> 1.7.2.5
>> 
On Mon, 29 Apr 2013, Ian Campbell wrote:
> On Tue, 2013-04-23 at 12:19 +0100, Stefano Stabellini wrote:
> > diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> > index 10f58af..276fef5 100644
> > --- a/xen/include/asm-arm/event.h
> > +++ b/xen/include/asm-arm/event.h
> > @@ -1,27 +1,50 @@
> >  #ifndef __ASM_EVENT_H__
> >  #define __ASM_EVENT_H__
> >  
> > +#include <asm/gic.h>
> > +#include <asm/domain.h>
> > +
> >  void vcpu_kick(struct vcpu *v);
> >  void vcpu_mark_events_pending(struct vcpu *v);
> >  
> > +static inline int local_events_need_delivery_nomask(void)
> > +{
> > +    struct pending_irq *p = irq_to_pending(current, VGIC_IRQ_EVTCHN_CALLBACK);
> > +
> > +    /* XXX: if the first interrupt has already been delivered, we should
> > +     * check whether any higher priority interrupts are in the
> > +     * lr_pending queue or in the LR registers and return 1 only in that
> > +     * case.
> > +     * In practice the guest interrupt handler should run with
> > +     * interrupts disabled so this shouldn't be a problem in the general
> > +     * case.
> 
> I think in practice this would be the problem of the code injecting the
> actual upcall, rather than the code here, which determines whether such
> an upcall is required. Perhaps this comment needs to be in whichever
> function handles the actual injection of interrupts?
> 
> Also, rather than requiring the guest interrupt handler to run with
> interrupts disabled, I think the code as it stands simply requires that
> the guest not mind that the upcall doesn't get preempted by a higher
> priority interrupt. IOW the upcall will complete and then the higher
> priority interrupt will be injected. To the guest this just looks like
> the interrupt arrived a bit later, so while it is something to fix
> eventually it's not really a big problem until people start wanting to
> do more real-timeish stuff?

Keep in mind that local_events_need_delivery is called by common code to
make all sorts of decisions (including hypercall_preempt_check): it is
important that local_events_need_delivery* returns an accurate answer.
If an interrupt has already been injected and it is currently inflight
(no EOI yet), local_events_need_delivery_nomask shouldn't return
positive unless a higher priority interrupt is waiting to be delivered.
Doing something different would be a mistake and can cause hypercalls
to be unnecessarily preempted, the vcpu to be woken up more often than
necessary, and SCHEDOP_poll to return a wrong result.
That's why I think the comment should stay in this function.
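A note for context on the common-code call sites mentioned above:
hypercall_preempt_check() boils down to roughly the following. This is a
paraphrase for illustration, not the verbatim Xen macro, and the _sketch
name is invented here.

    /* Paraphrase of the common-code preemption check: a long-running
     * hypercall voluntarily preempts itself when this evaluates true,
     * so local_events_need_delivery() returning 1 for an interrupt
     * that is already inflight would cause spurious restarts. */
    #define hypercall_preempt_check_sketch() (unlikely(       \
            softirq_pending(smp_processor_id()) ||            \
            local_events_need_delivery()                      \
        ))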
On Mon, 2013-04-29 at 19:09 +0100, Stefano Stabellini wrote:
> On Mon, 29 Apr 2013, Ian Campbell wrote:
> > On Tue, 2013-04-23 at 12:19 +0100, Stefano Stabellini wrote:
> > > diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> > > index 10f58af..276fef5 100644
> > > --- a/xen/include/asm-arm/event.h
> > > +++ b/xen/include/asm-arm/event.h
> > > @@ -1,27 +1,50 @@
> > >  #ifndef __ASM_EVENT_H__
> > >  #define __ASM_EVENT_H__
> > >  
> > > +#include <asm/gic.h>
> > > +#include <asm/domain.h>
> > > +
> > >  void vcpu_kick(struct vcpu *v);
> > >  void vcpu_mark_events_pending(struct vcpu *v);
> > >  
> > > +static inline int local_events_need_delivery_nomask(void)
> > > +{
> > > +    struct pending_irq *p = irq_to_pending(current, VGIC_IRQ_EVTCHN_CALLBACK);
> > > +
> > > +    /* XXX: if the first interrupt has already been delivered, we should
> > > +     * check whether any higher priority interrupts are in the
> > > +     * lr_pending queue or in the LR registers and return 1 only in that
> > > +     * case.
> > > +     * In practice the guest interrupt handler should run with
> > > +     * interrupts disabled so this shouldn't be a problem in the general
> > > +     * case.
> > 
> > I think in practice this would be the problem of the code injecting the
> > actual upcall, rather than the code here, which determines whether such
> > an upcall is required. Perhaps this comment needs to be in whichever
> > function handles the actual injection of interrupts?
> > 
> > Also, rather than requiring the guest interrupt handler to run with
> > interrupts disabled, I think the code as it stands simply requires that
> > the guest not mind that the upcall doesn't get preempted by a higher
> > priority interrupt. IOW the upcall will complete and then the higher
> > priority interrupt will be injected. To the guest this just looks like
> > the interrupt arrived a bit later, so while it is something to fix
> > eventually it's not really a big problem until people start wanting to
> > do more real-timeish stuff?
> 
> Keep in mind that local_events_need_delivery is called by common code to
> make all sorts of decisions (including hypercall_preempt_check): it is
> important that local_events_need_delivery* returns an accurate answer.
> If an interrupt has already been injected and it is currently inflight
> (no EOI yet), local_events_need_delivery_nomask shouldn't return
> positive unless a higher priority interrupt is waiting to be delivered.
> Doing something different would be a mistake and can cause hypercalls
> to be unnecessarily preempted, the vcpu to be woken up more often than
> necessary, and SCHEDOP_poll to return a wrong result.
> That's why I think the comment should stay in this function.

This description makes me more concerned about the "should run with
interrupts disabled" requirement than I was previously. How hard would
it be to actually implement the behaviour the comment describes?
Checking the lr_pending queue for higher priorities seems trivial, since
you can just look at the head of the list; I'm not sure about checking
the LRs -- is that the lr_inflight queue? Or do you need to check the
pending/active bits for every one?

BTW, "any higher priority" -- higher than what? Higher than the
EVTCHN_CALLBACK IRQ priority or the current IAR/PMR value for the GICC?

Ian.
On Mon, 2013-04-29 at 11:29 +0100, Keir Fraser wrote:
> On 29/04/2013 11:15, "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
> wrote:
> 
> > forgot to CC the Xen maintainers on this one
> 
> Acked-by: Keir Fraser <keir@xen.org>

Thanks, I have applied.
On Tue, 30 Apr 2013, Ian Campbell wrote:
> On Mon, 2013-04-29 at 19:09 +0100, Stefano Stabellini wrote:
> > On Mon, 29 Apr 2013, Ian Campbell wrote:
> > > On Tue, 2013-04-23 at 12:19 +0100, Stefano Stabellini wrote:
> > > > diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> > > > index 10f58af..276fef5 100644
> > > > --- a/xen/include/asm-arm/event.h
> > > > +++ b/xen/include/asm-arm/event.h
> > > > @@ -1,27 +1,50 @@
> > > >  #ifndef __ASM_EVENT_H__
> > > >  #define __ASM_EVENT_H__
> > > >  
> > > > +#include <asm/gic.h>
> > > > +#include <asm/domain.h>
> > > > +
> > > >  void vcpu_kick(struct vcpu *v);
> > > >  void vcpu_mark_events_pending(struct vcpu *v);
> > > >  
> > > > +static inline int local_events_need_delivery_nomask(void)
> > > > +{
> > > > +    struct pending_irq *p = irq_to_pending(current, VGIC_IRQ_EVTCHN_CALLBACK);
> > > > +
> > > > +    /* XXX: if the first interrupt has already been delivered, we should
> > > > +     * check whether any higher priority interrupts are in the
> > > > +     * lr_pending queue or in the LR registers and return 1 only in that
> > > > +     * case.
> > > > +     * In practice the guest interrupt handler should run with
> > > > +     * interrupts disabled so this shouldn't be a problem in the general
> > > > +     * case.
> > > 
> > > I think in practice this would be the problem of the code injecting the
> > > actual upcall, rather than the code here, which determines whether such
> > > an upcall is required. Perhaps this comment needs to be in whichever
> > > function handles the actual injection of interrupts?
> > > 
> > > Also, rather than requiring the guest interrupt handler to run with
> > > interrupts disabled, I think the code as it stands simply requires that
> > > the guest not mind that the upcall doesn't get preempted by a higher
> > > priority interrupt. IOW the upcall will complete and then the higher
> > > priority interrupt will be injected. To the guest this just looks like
> > > the interrupt arrived a bit later, so while it is something to fix
> > > eventually it's not really a big problem until people start wanting to
> > > do more real-timeish stuff?
> > 
> > Keep in mind that local_events_need_delivery is called by common code to
> > make all sorts of decisions (including hypercall_preempt_check): it is
> > important that local_events_need_delivery* returns an accurate answer.
> > If an interrupt has already been injected and it is currently inflight
> > (no EOI yet), local_events_need_delivery_nomask shouldn't return
> > positive unless a higher priority interrupt is waiting to be delivered.
> > Doing something different would be a mistake and can cause hypercalls
> > to be unnecessarily preempted, the vcpu to be woken up more often than
> > necessary, and SCHEDOP_poll to return a wrong result.
> > That's why I think the comment should stay in this function.
> 
> This description makes me more concerned about the "should run with
> interrupts disabled" requirement than I was previously.

I wouldn't worry too much about it: the guest we support today, Linux,
always runs its interrupt handlers with interrupts disabled.
If it suddenly changed behaviour it wouldn't stop working, but our vgic
"emulation" wouldn't be as accurate as it should be.
In practice we have a vgic that doesn't support priorities, but neither
does Linux today.

> How hard would it be to actually implement the behaviour the comment
> describes?
> Checking the lr_pending queue for higher priorities seems trivial, since
> you can just look at the head of the list; I'm not sure about checking
> the LRs -- is that the lr_inflight queue? Or do you need to check the
> pending/active bits for every one?

The latter (the pending/active bits in every one); that's why it is
difficult.

> BTW, "any higher priority" -- higher than what? Higher than the
> EVTCHN_CALLBACK IRQ priority or the current IAR/PMR value for the GICC?

Higher than the current IAR.
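A note for context: the LR scan being discussed could look roughly like
the sketch below. It assumes the GICv2 GICH_LR field layout (state in
bits [29:28], priority in bits [27:23]) and a hypothetical gich_read_lr()
accessor; none of these names come from the patches.

    /* Rough sketch only: return 1 if any in-use LR holds a pending
     * (not yet active) interrupt of strictly higher priority
     * (numerically lower) than the one the guest is handling now. */
    #define LR_STATE_MASK    (0x3U << 28)   /* 01 = pending, 10 = active */
    #define LR_STATE_ACTIVE  (0x2U << 28)
    #define LR_PRIO(lr)      (((lr) >> 23) & 0x1f)

    static int higher_prio_irq_in_lrs(int nr_lrs, unsigned int cur_prio)
    {
        int i;

        for ( i = 0; i < nr_lrs; i++ )
        {
            uint32_t lr = gich_read_lr(i);  /* hypothetical LR accessor */

            if ( !(lr & LR_STATE_MASK) )    /* empty LR */
                continue;
            if ( (lr & LR_STATE_MASK) != LR_STATE_ACTIVE &&
                 LR_PRIO(lr) < cur_prio )
                return 1;
        }
        return 0;
    }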
On Tue, 2013-04-30 at 18:48 +0100, Stefano Stabellini wrote:
> On Tue, 30 Apr 2013, Ian Campbell wrote:
> > On Mon, 2013-04-29 at 19:09 +0100, Stefano Stabellini wrote:
> > > On Mon, 29 Apr 2013, Ian Campbell wrote:
> > > > On Tue, 2013-04-23 at 12:19 +0100, Stefano Stabellini wrote:
> > > > > diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> > > > > index 10f58af..276fef5 100644
> > > > > --- a/xen/include/asm-arm/event.h
> > > > > +++ b/xen/include/asm-arm/event.h
> > > > > @@ -1,27 +1,50 @@
> > > > >  #ifndef __ASM_EVENT_H__
> > > > >  #define __ASM_EVENT_H__
> > > > >  
> > > > > +#include <asm/gic.h>
> > > > > +#include <asm/domain.h>
> > > > > +
> > > > >  void vcpu_kick(struct vcpu *v);
> > > > >  void vcpu_mark_events_pending(struct vcpu *v);
> > > > >  
> > > > > +static inline int local_events_need_delivery_nomask(void)
> > > > > +{
> > > > > +    struct pending_irq *p = irq_to_pending(current, VGIC_IRQ_EVTCHN_CALLBACK);
> > > > > +
> > > > > +    /* XXX: if the first interrupt has already been delivered, we should
> > > > > +     * check whether any higher priority interrupts are in the
> > > > > +     * lr_pending queue or in the LR registers and return 1 only in that
> > > > > +     * case.
> > > > > +     * In practice the guest interrupt handler should run with
> > > > > +     * interrupts disabled so this shouldn't be a problem in the general
> > > > > +     * case.
> > > > 
> > > > I think in practice this would be the problem of the code injecting the
> > > > actual upcall, rather than the code here, which determines whether such
> > > > an upcall is required. Perhaps this comment needs to be in whichever
> > > > function handles the actual injection of interrupts?
> > > > 
> > > > Also, rather than requiring the guest interrupt handler to run with
> > > > interrupts disabled, I think the code as it stands simply requires that
> > > > the guest not mind that the upcall doesn't get preempted by a higher
> > > > priority interrupt. IOW the upcall will complete and then the higher
> > > > priority interrupt will be injected. To the guest this just looks like
> > > > the interrupt arrived a bit later, so while it is something to fix
> > > > eventually it's not really a big problem until people start wanting to
> > > > do more real-timeish stuff?
> > > 
> > > Keep in mind that local_events_need_delivery is called by common code to
> > > make all sorts of decisions (including hypercall_preempt_check): it is
> > > important that local_events_need_delivery* returns an accurate answer.
> > > If an interrupt has already been injected and it is currently inflight
> > > (no EOI yet), local_events_need_delivery_nomask shouldn't return
> > > positive unless a higher priority interrupt is waiting to be delivered.
> > > Doing something different would be a mistake and can cause hypercalls
> > > to be unnecessarily preempted, the vcpu to be woken up more often than
> > > necessary, and SCHEDOP_poll to return a wrong result.
> > > That's why I think the comment should stay in this function.
> > 
> > This description makes me more concerned about the "should run with
> > interrupts disabled" requirement than I was previously.
> 
> I wouldn't worry too much about it: the guest we support today, Linux,
> always runs its interrupt handlers with interrupts disabled.
> If it suddenly changed behaviour it wouldn't stop working, but our vgic
> "emulation" wouldn't be as accurate as it should be.
> In practice we have a vgic that doesn't support priorities, but neither
> does Linux today.

I worry quite a bit that these sorts of shortcomings will become our
supported ABI if we are not careful. This needs to go on the blocker
list for declaring the ABI stable, IMHO. I've added an entry to
http://wiki.xen.org/wiki/Xen_ARM_TODO

> > How hard would it be to actually implement the behaviour the comment
> > describes?
> > Checking the lr_pending queue for higher priorities seems trivial, since
> > you can just look at the head of the list; I'm not sure about checking
> > the LRs -- is that the lr_inflight queue? Or do you need to check the
> > pending/active bits for every one?
> 
> The latter (the pending/active bits in every one); that's why it is
> difficult.

Right.

> > BTW, "any higher priority" -- higher than what? Higher than the
> > EVTCHN_CALLBACK IRQ priority or the current IAR/PMR value for the GICC?
> 
> Higher than the current IAR.

Good, can you mention that?
On Wed, 1 May 2013, Ian Campbell wrote:
> > > BTW, "any higher priority" -- higher than what? Higher than the
> > > EVTCHN_CALLBACK IRQ priority or the current IAR/PMR value for the GICC?
> > 
> > Higher than the current IAR.
> 
> Good, can you mention that?

OK