Christoph Hellwig
2021-May-24 14:59 UTC
[PATCH 3/3] virtio_blk: implement blk_mq_ops->poll()
On Thu, May 20, 2021 at 03:13:05PM +0100, Stefan Hajnoczi wrote:
> Possible drawbacks of this approach:
>
> - Hardware virtio_blk implementations may find virtqueue_disable_cb()
>   expensive since it requires DMA. If such devices become popular then
>   the virtio_blk driver could use a similar approach to NVMe when
>   VIRTIO_F_ACCESS_PLATFORM is detected in the future.
>
> - If a blk_poll() thread is descheduled it not only hurts polling
>   performance but also delays completion of non-REQ_HIPRI requests on
>   that virtqueue since vq notifications are disabled.

Yes, I think this is a dangerous configuration. What argument exists
against just using dedicated poll queues?
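For context, the dynamic polling approach under discussion boils down to
roughly the following ->poll() shape. This is a minimal sketch, not the
actual patch: the virtblk_poll name, the hctx->driver_data layout, and
the elided vq locking are assumptions, with types from
drivers/block/virtio_blk.c and the 5.13-era
int (*poll)(struct blk_mq_hw_ctx *) hook.

/* Sketch only: reap completions directly instead of waiting for an irq. */
static int virtblk_poll(struct blk_mq_hw_ctx *hctx)
{
	struct virtio_blk_vq *vq = hctx->driver_data;	/* assumed layout */
	struct virtblk_req *vbr;
	unsigned int len;
	int found = 0;

	/*
	 * Suppress vq notifications while polling. On hardware virtio-blk
	 * this is the operation that may require DMA, the cost noted above.
	 * Notifications stay off while polling continues, which is exactly
	 * why a descheduled poller delays non-REQ_HIPRI completions.
	 */
	virtqueue_disable_cb(vq->vq);

	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
		blk_mq_complete_request(blk_mq_rq_from_pdu(vbr));
		found++;
	}

	return found;
}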
On 24/05/21 16:59, Christoph Hellwig wrote:
> On Thu, May 20, 2021 at 03:13:05PM +0100, Stefan Hajnoczi wrote:
>> Possible drawbacks of this approach:
>>
>> - Hardware virtio_blk implementations may find virtqueue_disable_cb()
>>   expensive since it requires DMA. If such devices become popular then
>>   the virtio_blk driver could use a similar approach to NVMe when
>>   VIRTIO_F_ACCESS_PLATFORM is detected in the future.
>>
>> - If a blk_poll() thread is descheduled it not only hurts polling
>>   performance but also delays completion of non-REQ_HIPRI requests on
>>   that virtqueue since vq notifications are disabled.
>
> Yes, I think this is a dangerous configuration. What argument exists
> against just using dedicated poll queues?

There isn't an equivalent of the admin queue in virtio-blk, which would
allow the guest to configure the desired number of poll queues. The
number of queues is fixed.

Could the blk_poll() thread use preempt notifiers to enable/disable
callbacks, for example via two new .poll_start and .end_poll callbacks
in struct blk_mq_ops?

Paolo
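To make Paolo's idea concrete, here is one possible shape for it. All of
it is hypothetical -- neither the .poll_start/.end_poll callbacks nor
this wiring exist in mainline. It only illustrates how preempt notifiers
(the mechanism KVM uses to notice vCPU task switches) could re-enable vq
notifications when the polling task is scheduled out:

#include <linux/preempt.h>
#include <linux/blk-mq.h>

/* Hypothetical per-poll context, set up on the stack by blk_poll(). */
struct blk_poll_ctx {
	struct preempt_notifier	notifier;
	struct blk_mq_hw_ctx	*hctx;
};

static void blk_poll_sched_out(struct preempt_notifier *pn,
			       struct task_struct *next)
{
	struct blk_poll_ctx *ctx =
		container_of(pn, struct blk_poll_ctx, notifier);

	/* The poller is being descheduled: re-enable vq notifications so
	 * non-polled completions are not delayed (hypothetical op). */
	ctx->hctx->queue->mq_ops->end_poll(ctx->hctx);
}

static void blk_poll_sched_in(struct preempt_notifier *pn, int cpu)
{
	struct blk_poll_ctx *ctx =
		container_of(pn, struct blk_poll_ctx, notifier);

	/* The poller is running again: return to irqs-off polled mode
	 * (hypothetical op). */
	ctx->hctx->queue->mq_ops->poll_start(ctx->hctx);
}

static struct preempt_ops blk_poll_preempt_ops = {
	.sched_in	= blk_poll_sched_in,
	.sched_out	= blk_poll_sched_out,
};

blk_poll() would preempt_notifier_init() and preempt_notifier_register()
this around its polling loop (bumping the notifier static key with
preempt_notifier_inc()) and unregister it on exit.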
Stefan Hajnoczi
2021-May-25 13:19 UTC
[PATCH 3/3] virtio_blk: implement blk_mq_ops->poll()
On Mon, May 24, 2021 at 04:59:28PM +0200, Christoph Hellwig wrote:
> On Thu, May 20, 2021 at 03:13:05PM +0100, Stefan Hajnoczi wrote:
> > Possible drawbacks of this approach:
> >
> > - Hardware virtio_blk implementations may find virtqueue_disable_cb()
> >   expensive since it requires DMA. If such devices become popular then
> >   the virtio_blk driver could use a similar approach to NVMe when
> >   VIRTIO_F_ACCESS_PLATFORM is detected in the future.
> >
> > - If a blk_poll() thread is descheduled it not only hurts polling
> >   performance but also delays completion of non-REQ_HIPRI requests on
> >   that virtqueue since vq notifications are disabled.
>
> Yes, I think this is a dangerous configuration. What argument exists
> against just using dedicated poll queues?

Aside from what Paolo described (the lack of a hardware interface to
designate polling queues), the poll_queues=N parameter would need to be
plumbed through the full guest and host software stack, and users would
need to learn about it so they can enable it everywhere it is required.
This is much more hassle for users to configure. The dynamic polling
mode approach requires no configuration.

Paolo's suggestion is to notify the driver when irqs need to be
re-enabled if the polling thread is descheduled. I actually have a
prototype that achieves something similar here:
https://gitlab.com/stefanha/linux/-/commits/cpuidle-haltpoll-virtqueue

It takes a different approach from this patch series: the cpuidle driver
provides poll_start/stop() callbacks and virtio_blk participates in
cpuidle-haltpoll. That means all virtio-blk devices temporarily use
polling mode while the vCPU is doing cpuidle-haltpoll polling.

The neat thing about the cpuidle approach is:

1. Applications don't need to set the RWF_HIPRI flag!
2. Other drivers besides virtio_blk can participate in polling too.

Maybe we should go with cpuidle polling instead?

Stefan
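A sketch of what "virtio_blk participates in cpuidle-haltpoll" could
look like on the driver side. The start/stop hooks and how they get
registered with the cpuidle driver are hypothetical (the branch above
may define a different interface); virtqueue_disable_cb(),
virtqueue_enable_cb(), virtblk_done() and the struct virtio_blk fields
are real code in drivers/block/virtio_blk.c, where this would live:

#include <linux/virtio.h>

/* vCPU entered haltpoll polling: switch every vq to polled completion
 * so the guest reaps buffers without waiting for an injected irq. */
static void virtblk_haltpoll_start(void *data)
{
	struct virtio_blk *vblk = data;
	int i;

	for (i = 0; i < vblk->num_vqs; i++)
		virtqueue_disable_cb(vblk->vqs[i].vq);
}

/* Haltpoll polling is ending (the vCPU is about to halt): re-arm
 * notifications so completions are not lost while the vCPU sleeps. */
static void virtblk_haltpoll_stop(void *data)
{
	struct virtio_blk *vblk = data;
	int i;

	for (i = 0; i < vblk->num_vqs; i++) {
		/* enable_cb() returning false means buffers raced in:
		 * drain them via the normal vq callback. */
		if (!virtqueue_enable_cb(vblk->vqs[i].vq))
			virtblk_done(vblk->vqs[i].vq);
	}
}

This also shows why RWF_HIPRI becomes unnecessary in this scheme: the
polled/irq mode switch is driven by vCPU idle state, not by a
per-request flag.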