On Thu, Mar 24, 2022 at 11:46:02PM +0900, Suwan Kim wrote:
> On Thu, Mar 24, 2022 at 10:32:02AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Mar 24, 2022 at 11:04:49PM +0900, Suwan Kim wrote:
> > > This patch adds polling I/O support to the virtio-blk driver. The
> > > polling feature is enabled by the module parameter "num_poll_queues"
> > > and it sets up dedicated polling queues for virtio-blk. This patch
> > > improves polling I/O throughput and latency.
> > >
> > > The virtio-blk driver does not have a poll function or a poll
> > > queue, so it has been operating in interrupt-driven mode even when
> > > the polling function is called from the upper layer.
> > >
> > > virtio-blk polling is implemented on top of the block layer's
> > > 'batched completion'. virtblk_poll() queues completed requests to
> > > io_comp_batch->req_list, and later virtblk_complete_batch() calls
> > > the unmap function and ends the requests in a batch.
> > >
> > > virtio-blk reads the number of poll queues from the module parameter
> > > "num_poll_queues". If the VM sets the queue parameters as below
> > > ("num-queues=N" [QEMU property], "num_poll_queues=M" [module parameter]),
> > > it allocates N virtqueues to virtio_blk->vqs[N] and uses [0..(N-M-1)]
> > > as default queues and [(N-M)..(N-1)] as poll queues. Unlike the default
> > > queues, the poll queues have no callback function.
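> > >
> > > The split happens when the virtqueues are requested; roughly (a
> > > simplified sketch of the idea, error handling omitted, variable
> > > names illustrative):
> > > -----
> > > 	num_poll_vqs = min_t(unsigned int, num_poll_queues, num_vqs - 1);
> > >
> > > 	/* default queues keep the interrupt callback */
> > > 	for (i = 0; i < num_vqs - num_poll_vqs; i++) {
> > > 		callbacks[i] = virtblk_done;
> > > 		snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req.%d", i);
> > > 		names[i] = vblk->vqs[i].name;
> > > 	}
> > >
> > > 	/* poll queues get no callback, so they never raise an irq */
> > > 	for (; i < num_vqs; i++) {
> > > 		callbacks[i] = NULL;
> > > 		snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req_poll.%d", i);
> > > 		names[i] = vblk->vqs[i].name;
> > > 	}
> > >
> > > 	err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, &desc);
> > > -----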
> > >
> > > Regarding the HW-SW queue mapping, the default queue mapping uses the
> > > existing method that considers the MSI irq vectors. But the poll queues
> > > don't have irqs, so they use the regular blk-mq cpu mapping.
> > >
> > > To verify the improvement, I ran Fio polling I/O performance tests
> > > with the io_uring engine and the options below
> > > (io_uring, hipri, randread, direct=1, bs=512, iodepth=64, numjobs=N).
> > > I configured the VM with 4 vcpus and 4 virtio-blk queues - 2 default
> > > queues and 2 poll queues.
> > >
> > > As a result, IOPS and average latency improved about 10%.
> > >
> > > Test result:
> > >
> > > - Fio io_uring poll without virtio-blk poll support
> > > -- numjobs=1 : IOPS = 339K, avg latency = 188.33us
> > > -- numjobs=2 : IOPS = 367K, avg latency = 347.33us
> > > -- numjobs=4 : IOPS = 383K, avg latency = 682.06us
> > >
> > > - Fio io_uring poll with virtio-blk poll support
> > > -- numjobs=1 : IOPS = 380K, avg latency = 167.87us
> > > -- numjobs=2 : IOPS = 409K, avg latency = 312.6us
> > > -- numjobs=4 : IOPS = 413K, avg latency = 619.72us
> > >
> > > Reported-by: kernel test robot <lkp at intel.com>
> > > Signed-off-by: Suwan Kim <suwan.kim027 at gmail.com>
> > > ---
> > >  drivers/block/virtio_blk.c | 101 +++++++++++++++++++++++++++++++++++--
> > > 1 file changed, 97 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > index 8c415be86732..3d16f8b753e7 100644
> > > --- a/drivers/block/virtio_blk.c
> > > +++ b/drivers/block/virtio_blk.c
> > > @@ -37,6 +37,10 @@ MODULE_PARM_DESC(num_request_queues,
> > > "0 for no limit. "
> > > "Values > nr_cpu_ids truncated to nr_cpu_ids.");
> > >
> > > +static unsigned int num_poll_queues;
> > > +module_param(num_poll_queues, uint, 0644);
> > > +MODULE_PARM_DESC(num_poll_queues, "The number of dedicated virtqueues for polling I/O");
> > > +
> > > static int major;
> > > static DEFINE_IDA(vd_index_ida);
> > >
> >
> > Is there some way to make it work reasonably without needing to set
> > module parameters? I don't see any other devices with a num_poll_queues
> > parameter - how do they handle this?
>
> Hi Michael,
>
> NVMe driver uses module parameter.
>
> Please refer to this.
> -----
> drivers/nvme/host/pci.c
>
> static unsigned int poll_queues;
> module_param_cb(poll_queues, &io_queue_count_ops, &poll_queues, 0644);
> MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO.");
> -----
>
> Regards,
> Suwan Kim
OK then. Let's maybe be consistent wrt parameter naming?
--
MST