This patch tries to make sure the virtio interrupt handler for INTX
won't be called after a reset and before virtio_device_ready(). We
can't use IRQF_NO_AUTOEN since we're using a shared interrupt
(IRQF_SHARED). So this patch tracks the INTX enabling status in a new
intx_soft_enabled variable and toggles it in
vp_disable/enable_vectors(). The INTX interrupt handler will check
intx_soft_enabled before processing the actual interrupt.

Signed-off-by: Jason Wang <jasowang at redhat.com>
---
 drivers/virtio/virtio_pci_common.c | 18 ++++++++++++++++--
 drivers/virtio/virtio_pci_common.h |  1 +
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index 0b9523e6dd39..835197151dc1 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -30,8 +30,12 @@ void vp_disable_vectors(struct virtio_device *vdev)
 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
 	int i;
 
-	if (vp_dev->intx_enabled)
+	if (vp_dev->intx_enabled) {
+		vp_dev->intx_soft_enabled = false;
+		/* ensure vp_interrupt() sees this intx_soft_enabled value */
+		smp_wmb();
 		synchronize_irq(vp_dev->pci_dev->irq);
+	}
 
 	for (i = 0; i < vp_dev->msix_vectors; ++i)
 		disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
@@ -43,8 +47,12 @@ void vp_enable_vectors(struct virtio_device *vdev)
 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
 	int i;
 
-	if (vp_dev->intx_enabled)
+	if (vp_dev->intx_enabled) {
+		vp_dev->intx_soft_enabled = true;
+		/* ensure vp_interrupt() sees this intx_soft_enabled value */
+		smp_wmb();
 		return;
+	}
 
 	for (i = 0; i < vp_dev->msix_vectors; ++i)
 		enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
@@ -97,6 +105,12 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
 	struct virtio_pci_device *vp_dev = opaque;
 	u8 isr;
 
+	if (!vp_dev->intx_soft_enabled)
+		return IRQ_NONE;
+
+	/* read intx_soft_enabled before reading anything else */
+	smp_rmb();
+
 	/* reading the ISR has the effect of also clearing it so it's very
 	 * important to save off the value. */
 	isr = ioread8(vp_dev->isr);
diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index a235ce9ff6a5..3c06e0f92ee4 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -64,6 +64,7 @@ struct virtio_pci_device {
 	/* MSI-X support */
 	int msix_enabled;
 	int intx_enabled;
+	bool intx_soft_enabled;
 	cpumask_var_t *msix_affinity_masks;
 	/* Name strings for interrupts. This size should be enough,
 	 * and I'm too lazy to allocate each name separately. */
-- 
2.25.1
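For reference, the barrier pair above is aiming at the classic
publish/consume pattern, sketched below with a hypothetical
prepare_vqs_and_handlers() standing in for whatever setup stores the
handler depends on (whether the patch actually places the write barrier
on the correct side of the store is debated in the replies that follow):

	/* publisher, conceptually vp_enable_vectors() */
	prepare_vqs_and_handlers();		/* hypothetical setup stores */
	smp_wmb();				/* order setup before the flag */
	vp_dev->intx_soft_enabled = true;	/* publish the flag */

	/* consumer, conceptually vp_interrupt() */
	if (!vp_dev->intx_soft_enabled)		/* observe the flag... */
		return IRQ_NONE;
	smp_rmb();				/* ...before any dependent loads */
	isr = ioread8(vp_dev->isr);		/* e.g. this dependent load */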
On Mon, Sep 13, 2021 at 01:53:51PM +0800, Jason Wang wrote:
> This patch tries to make sure the virtio interrupt handler for INTX
> won't be called after a reset and before virtio_device_ready(). We
> can't use IRQF_NO_AUTOEN since we're using a shared interrupt
> (IRQF_SHARED). So this patch tracks the INTX enabling status in a new
> intx_soft_enabled variable and toggles it in
> vp_disable/enable_vectors(). The INTX interrupt handler will check
> intx_soft_enabled before processing the actual interrupt.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>

Not all that excited about all the memory barriers for something that
should be an extremely rare event (for most kernels - literally once
per boot). Can't we do better?
Jason,

On Mon, Sep 13 2021 at 13:53, Jason Wang wrote:
> This patch tries to make sure the virtio interrupt handler for INTX
> won't be called after a reset and before virtio_device_ready(). We
> can't use IRQF_NO_AUTOEN since we're using a shared interrupt
> (IRQF_SHARED). So this patch tracks the INTX enabling status in a new
> intx_soft_enabled variable and toggles it in
> vp_disable/enable_vectors(). The INTX interrupt handler will check
> intx_soft_enabled before processing the actual interrupt.

Ah, there it is :)

Cc'ed our memory ordering wizards as I might be wrong as usual.

> -	if (vp_dev->intx_enabled)
> +	if (vp_dev->intx_enabled) {
> +		vp_dev->intx_soft_enabled = false;
> +		/* ensure vp_interrupt() sees this intx_soft_enabled value */
> +		smp_wmb();
>  		synchronize_irq(vp_dev->pci_dev->irq);

As you are synchronizing the interrupt here anyway, what is the value
of the barrier?

   vp_dev->intx_soft_enabled = false;
   synchronize_irq(vp_dev->pci_dev->irq);

is sufficient because of:

  synchronize_irq()
    do {
       raw_spin_lock(desc->lock);
       in_progress = check_inprogress(desc);
       raw_spin_unlock(desc->lock);
    } while (in_progress);

raw_spin_lock() has ACQUIRE semantics, so the store to intx_soft_enabled
can complete after the lock has been acquired, which is uninteresting.

raw_spin_unlock() has RELEASE semantics, so the store to
intx_soft_enabled has to be completed before the unlock completes.

So if the interrupt is in flight, then it might or might not see
intx_soft_enabled == false. But that's true for your barrier construct
as well.

The important part is that any interrupt for this line arriving after
synchronize_irq() has completed is guaranteed to see
intx_soft_enabled == false.

That is what you want to achieve, right?

>  	for (i = 0; i < vp_dev->msix_vectors; ++i)
>  		disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
> @@ -43,8 +47,12 @@ void vp_enable_vectors(struct virtio_device *vdev)
>  	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
>  	int i;
>
> -	if (vp_dev->intx_enabled)
> +	if (vp_dev->intx_enabled) {
> +		vp_dev->intx_soft_enabled = true;
> +		/* ensure vp_interrupt() sees this intx_soft_enabled value */
> +		smp_wmb();

For the enable case the barrier is pointless vs. intx_soft_enabled:

  CPU 0                                 CPU 1
                                        interrupt
  vp_enable_vectors()                     vp_interrupt()
                                            if (!vp_dev->intx_soft_enabled)
                                               return IRQ_NONE;
    vp_dev->intx_soft_enabled = true;

IOW, the concurrent interrupt might or might not see the store. That's
not a problem for legacy PCI interrupts. If it did not see the store,
and the interrupt originated from that device, then it will be accounted
as one spurious interrupt which will get raised again, because those
interrupts are level triggered and nothing acknowledged it at the
device level.

Now, what's more interesting is that it has to be guaranteed that the
interrupt which observes vp_dev->intx_soft_enabled == true also
observes all preceding stores, i.e. those which make the interrupt
handler capable of handling the interrupt.

That's the real problem, and for that your barrier is at the wrong
place because you want to make sure that those stores are visible
before the store to intx_soft_enabled becomes visible, i.e. this
should be:

  /* Ensure that all preceding stores are visible before intx_soft_enabled */
  smp_wmb();
  vp_dev->intx_soft_enabled = true;

Now Michael is not really enthusiastic about the barrier in the
interrupt handler hotpath, which is understandable.
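For concreteness, tglx's corrected barrier placement applied to the
whole enable path would read roughly as below (a sketch against the
patch's structure, not a tested replacement); the barrier-free
alternative follows in the next message.

void vp_enable_vectors(struct virtio_device *vdev)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
	int i;

	if (vp_dev->intx_enabled) {
		/*
		 * Make every store that readies the handler visible
		 * before the flag that allows the handler to run.
		 */
		smp_wmb();
		vp_dev->intx_soft_enabled = true;
		return;
	}

	for (i = 0; i < vp_dev->msix_vectors; ++i)
		enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
}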
As device startup does not really happen often, it's sensible to do
the following:

  disable_irq();
  vp_dev->intx_soft_enabled = true;
  enable_irq();

because:

  disable_irq()
    synchronize_irq()

acts as a barrier for the preceding stores:

  disable_irq()
    raw_spin_lock(desc->lock);
    __disable_irq(desc);
    raw_spin_unlock(desc->lock);

    synchronize_irq()
      do {
         raw_spin_lock(desc->lock);
         in_progress = check_inprogress(desc);
         raw_spin_unlock(desc->lock);
      } while (in_progress);

  intx_soft_enabled = true;
  enable_irq();

In this case synchronize_irq() prevents the subsequent store to
intx_soft_enabled from leaking into the __disable_irq(desc) section,
which in turn makes it impossible for an interrupt handler to observe
intx_soft_enabled == true before the prerequisites which precede the
call to disable_irq() are visible.

Of course the memory ordering wizards might disagree, but if they do,
then we have a massive chase of ordering problems vs. similar
constructs all over the tree ahead of us.

From the interrupt perspective the sequence:

  disable_irq();
  vp_dev->intx_soft_enabled = true;
  enable_irq();

is perfectly fine as well. Any interrupt arriving during the disabled
section will be reraised on enable_irq() in hardware because it's a
level interrupt. Any resulting failure is either a hardware or a
hypervisor bug.

Thanks,

        tglx
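Spelled out against the patch's structure, tglx's barrier-free variant
would look roughly like this (a sketch assuming intx_soft_enabled lives
where the patch puts it; not necessarily the code that was eventually
merged):

void vp_disable_vectors(struct virtio_device *vdev)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
	int i;

	if (vp_dev->intx_enabled) {
		vp_dev->intx_soft_enabled = false;
		/*
		 * Any handler invocation arriving after this call has
		 * returned is guaranteed to see the store above.
		 */
		synchronize_irq(vp_dev->pci_dev->irq);
	}

	for (i = 0; i < vp_dev->msix_vectors; ++i)
		disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
}

void vp_enable_vectors(struct virtio_device *vdev)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
	int i;

	if (vp_dev->intx_enabled) {
		disable_irq(vp_dev->pci_dev->irq);
		/*
		 * The synchronize_irq() inside disable_irq() keeps this
		 * store from leaking into the disabled section, so a
		 * handler that observes intx_soft_enabled == true also
		 * observes the setup stores that preceded disable_irq().
		 */
		vp_dev->intx_soft_enabled = true;
		enable_irq(vp_dev->pci_dev->irq);
		return;
	}

	for (i = 0; i < vp_dev->msix_vectors; ++i)
		enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
}

With this scheme the smp_rmb() in vp_interrupt() should no longer be
needed; the plain intx_soft_enabled test would suffice.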