On 12/28/2014 03:52 PM, Michael S. Tsirkin wrote:
> On Fri, Dec 26, 2014 at 10:53:42AM +0800, Jason Wang wrote:
>> Hi all:
>>
>> This series tries to share an MSIX irq for each tx/rx queue pair. This
>> is done through:
>>
>> - introducing a virtio pci channel, which is a group of virtqueues
>>   sharing a single MSIX irq (Patch 1)
>> - exposing the channel setting through the virtio core API (Patch 2)
>> - using the channel setting in virtio-net (Patch 3)
>>
>> For transports that do not support channels, the channel parameters
>> are simply ignored. Devices that do not use channels can simply pass
>> NULL or zero to the virtio core.
>>
>> With the patches, 1 MSIX irq is saved for each TX/RX queue pair.
>>
>> Please review.
>
> How does this sharing affect performance?

Patch 3 only checks more_used() for the tx ring, which in fact reduces the
effectiveness of the event index and may introduce more tx interrupts. After
fixing this issue, I tested with 1 vcpu and 1 queue; no obvious changes in
performance were noticed.

Thanks
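As a rough sketch of the idea in Patch 1: a tx/rx pair grouped into a channel
could register one MSI-X vector whose handler simply forwards the interrupt to
both virtqueues. The struct and function names below (vp_channel,
vp_channel_interrupt) are hypothetical, not taken from the patches;
vring_interrupt() is the existing virtio_ring helper that checks a ring for
used buffers, runs its callback, and returns IRQ_NONE when there is nothing
to do, which is what lets a single shared vector service either ring of the
pair.

#include <linux/interrupt.h>
#include <linux/virtio.h>
#include <linux/virtio_ring.h>

/*
 * Hypothetical grouping of one rx and one tx virtqueue behind a single
 * MSI-X vector; the name and layout are illustrative only.
 */
struct vp_channel {
	struct virtqueue *rx_vq;
	struct virtqueue *tx_vq;
};

/*
 * Handler registered (e.g. via request_irq()) for the channel's vector.
 * Both rings are checked on every interrupt, and IRQ_HANDLED is reported
 * if either one made progress.
 */
static irqreturn_t vp_channel_interrupt(int irq, void *opaque)
{
	struct vp_channel *chan = opaque;
	irqreturn_t ret = IRQ_NONE;

	if (vring_interrupt(irq, chan->rx_vq) == IRQ_HANDLED)
		ret = IRQ_HANDLED;
	if (vring_interrupt(irq, chan->tx_vq) == IRQ_HANDLED)
		ret = IRQ_HANDLED;

	return ret;
}

Returning IRQ_NONE when neither ring has work keeps the kernel's spurious
interrupt accounting meaningful even though the vector is private to the
device.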
Michael S. Tsirkin
2015-Jan-04 11:36 UTC
[RFC PATCH 0/3] Sharing MSIX irq for tx/rx queue pairs
On Sun, Jan 04, 2015 at 04:38:17PM +0800, Jason Wang wrote:
> On 12/28/2014 03:52 PM, Michael S. Tsirkin wrote:
> > On Fri, Dec 26, 2014 at 10:53:42AM +0800, Jason Wang wrote:
> >> Hi all:
> >>
> >> This series tries to share an MSIX irq for each tx/rx queue pair.
> >> [...]
> > How does this sharing affect performance?
>
> Patch 3 only checks more_used() for the tx ring, which in fact reduces the
> effectiveness of the event index and may introduce more tx interrupts.
> After fixing this issue, I tested with 1 vcpu and 1 queue; no obvious
> changes in performance were noticed.
>
> Thanks

Is this with or without MQ?

With MQ, it seems easy to believe, as interrupts are distributed between
CPUs.

Without MQ, it should be possible to create UDP workloads where processing
incoming and outgoing interrupts on separate CPUs is a win.

-- 
MST
On 01/04/2015 07:36 PM, Michael S. Tsirkin wrote:
> On Sun, Jan 04, 2015 at 04:38:17PM +0800, Jason Wang wrote:
>> On 12/28/2014 03:52 PM, Michael S. Tsirkin wrote:
>>> On Fri, Dec 26, 2014 at 10:53:42AM +0800, Jason Wang wrote:
>>>> Hi all:
>>>>
>>>> This series tries to share an MSIX irq for each tx/rx queue pair.
>>>> [...]
>>> How does this sharing affect performance?
>>>
>> Patch 3 only checks more_used() for the tx ring, which in fact reduces
>> the effectiveness of the event index and may introduce more tx
>> interrupts. After fixing this issue, I tested with 1 vcpu and 1 queue;
>> no obvious changes in performance were noticed.
>>
>> Thanks
> Is this with or without MQ?

Without MQ. 1 vcpu and 1 queue were used.

> With MQ, it seems easy to believe, as interrupts are distributed between
> CPUs.
>
> Without MQ, it should be possible to create UDP workloads where
> processing incoming and outgoing interrupts on separate CPUs is a win.

Not sure. Processing on separate CPUs may only win when the system is not
busy. But if we process a single flow on two cpus, it may lead to extra
lock contention and bad cache utilization. And if we really want to
distribute the load, RPS/RFS could be used.
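RPS/RFS here refers to the existing receive packet steering and receive flow
steering knobs. A minimal sketch of how that steering could be configured is
below; the device name eth0, queue rx-0, CPU mask "e" (CPUs 1-3) and the
32768 table sizes are placeholder values, and the writes need root.

#include <stdio.h>

/* Write a string to a sysfs/procfs file; returns 0 on success. */
static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* RPS: spread rx-0 receive processing across CPUs 1-3 (mask 0xe). */
	write_str("/sys/class/net/eth0/queues/rx-0/rps_cpus", "e");

	/* RFS: size the global flow table and the per-queue flow count so
	 * packet processing follows the CPU where the consuming socket runs. */
	write_str("/proc/sys/net/core/rps_sock_flow_entries", "32768");
	write_str("/sys/class/net/eth0/queues/rx-0/rps_flow_cnt", "32768");

	return 0;
}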