Rusty Russell
2014-Mar-17 00:42 UTC
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
Theodore Ts'o <tytso at mit.edu> writes:
> The current virtio block sets a queue depth of 64, which is
> insufficient for very fast devices.  It has been demonstrated that
> with a high IOPS device, using a queue depth of 256 can double the
> IOPS which can be sustained.
>
> As suggested by Venkatesh Srinivas, set the queue depth by default to
> be one half the device's virtqueue, which is the maximum queue
> depth that can be supported by the channel to the host OS (each I/O
> request requires at least two VQ entries).
>
> Also allow the queue depth to be something which can be set at module
> load time or via a kernel boot-time parameter, for
> testing/benchmarking purposes.

Note that with indirect descriptors (which is supported by Almost
Everyone), we can actually use the full index, so this value is a bit
pessimistic.  But it's OK as a starting point.

Cheers,
Rusty.
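A minimal standalone sketch of the sizing rule under discussion, assuming an illustrative ring size of 128 (QEMU's PCI default, mentioned later in the thread); it is not the driver code itself. Without VIRTIO_RING_F_INDIRECT_DESC each request consumes at least two ring slots, so only half the ring is usable as queue depth; with indirect descriptors one slot per request suffices.

#include <stdbool.h>
#include <stdio.h>

/* Derive a blk-mq queue depth from the virtqueue ring size: one slot per
 * request with indirect descriptors, at least two slots per request without. */
static unsigned int blk_queue_depth(unsigned int ring_size, bool indirect_desc)
{
	return indirect_desc ? ring_size : ring_size / 2;
}

int main(void)
{
	unsigned int ring = 128;	/* illustrative: QEMU's default PCI vq size */

	printf("without indirect descriptors: %u\n", blk_queue_depth(ring, false)); /* 64 */
	printf("with indirect descriptors:    %u\n", blk_queue_depth(ring, true));  /* 128 */
	return 0;
}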
tytso at mit.edu
2014-Mar-17 05:40 UTC
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
>
> Note that with indirect descriptors (which is supported by Almost
> Everyone), we can actually use the full index, so this value is a bit
> pessimistic.  But it's OK as a starting point.

So is this something that can go upstream with perhaps a slight
adjustment in the commit description?  Do you think we need to be able
to dynamically adjust the queue depth after the module has been loaded
or the kernel has been booted?  If so, any hints about the best way to
do that would be much appreciated.

Thanks,

						- Ted
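A hedged sketch of one way a runtime-writable knob could look, using the in-kernel module_param_cb() mechanism; the names queue_depth, queue_depth_set and queue_depth_ops are illustrative assumptions, not code from this thread. Writing the parameter through /sys/module would only affect devices probed afterwards; resizing an already-created blk-mq queue would still need block-layer support, which is the open question above.

#include <linux/kernel.h>
#include <linux/moduleparam.h>

static unsigned int queue_depth;	/* 0 means "derive from the virtqueue" */

/* Validate writes before storing them in queue_depth. */
static int queue_depth_set(const char *val, const struct kernel_param *kp)
{
	unsigned int depth;

	if (kstrtouint(val, 10, &depth))
		return -EINVAL;
	/* Only devices probed after this write pick up the new value;
	 * existing queues keep the depth they were created with. */
	return param_set_uint(val, kp);
}

static const struct kernel_param_ops queue_depth_ops = {
	.set	= queue_depth_set,
	.get	= param_get_uint,
};

/* 0644 makes the parameter writable via /sys/module/.../parameters/. */
module_param_cb(queue_depth, &queue_depth_ops, &queue_depth, 0644);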
Rusty Russell
2014-Mar-19 06:28 UTC
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
tytso at mit.edu writes:
> On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
>>
>> Note that with indirect descriptors (which is supported by Almost
>> Everyone), we can actually use the full index, so this value is a bit
>> pessimistic.  But it's OK as a starting point.
>
> So is this something that can go upstream with perhaps a slight
> adjustment in the commit description?

Well, I rewrote it again, see below.

> Do you think we need to be able
> to dynamically adjust the queue depth after the module has been loaded
> or the kernel has been booted?

That would be nice, sure, but...

> If so, any hints about the best
> way to do that would be much appreciated.

... I share your wonder and mystery at the ways of the block layer.

Subject: virtio-blk: base queue-depth on virtqueue ringsize or module param

Venkatesh spake thus:

   virtio-blk set the default queue depth to 64 requests, which was
   insufficient for high-IOPS devices.  Instead set the blk-queue depth
   to the device's virtqueue depth divided by two (each I/O requires at
   least two VQ entries).

But behold, Ted added a module parameter:

   Also allow the queue depth to be something which can be set at module
   load time or via a kernel boot-time parameter, for
   testing/benchmarking purposes.

And I rewrote it substantially, mainly to take
VIRTIO_RING_F_INDIRECT_DESC into account.

As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
have made a change.  This version does (since QEMU also offers
VIRTIO_RING_F_INDIRECT_DESC).

Inspired-by: "Theodore Ts'o" <tytso at mit.edu>
Based-on-the-true-story-of: Venkatesh Srinivas <venkateshs at google.com>
Cc: "Michael S. Tsirkin" <mst at redhat.com>
Cc: virtio-dev at lists.oasis-open.org
Cc: virtualization at lists.linux-foundation.org
Cc: Frank Swiderski <fes at google.com>
Signed-off-by: Rusty Russell <rusty at rustcorp.com.au>

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a2db9ed288f2..c101bbc72095 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -491,10 +491,11 @@ static struct blk_mq_ops virtio_mq_ops = {
 static struct blk_mq_reg virtio_mq_reg = {
 	.ops		= &virtio_mq_ops,
 	.nr_hw_queues	= 1,
-	.queue_depth	= 64,
+	.queue_depth	= 0, /* Set in virtblk_probe */
 	.numa_node	= NUMA_NO_NODE,
 	.flags		= BLK_MQ_F_SHOULD_MERGE,
 };
+module_param_named(queue_depth, virtio_mq_reg.queue_depth, uint, 0444);
 
 static void virtblk_init_vbr(void *data, struct blk_mq_hw_ctx *hctx,
 			     struct request *rq, unsigned int nr)
@@ -558,6 +559,13 @@ static int virtblk_probe(struct virtio_device *vdev)
 		goto out_free_vq;
 	}
 
+	/* Default queue sizing is to fill the ring. */
+	if (!virtio_mq_reg.queue_depth) {
+		virtio_mq_reg.queue_depth = vblk->vq->num_free;
+		/* ... but without indirect descs, we use 2 descs per req */
+		if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
+			virtio_mq_reg.queue_depth /= 2;
+	}
 	virtio_mq_reg.cmd_size =
 		sizeof(struct virtblk_req) +
 		sizeof(struct scatterlist) * sg_elems;
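Usage note: with the module_param_named() above, the depth should be settable as queue_depth=<n> when loading the virtio_blk module, or (for a built-in driver) as virtio_blk.queue_depth=<n> on the kernel command line; the 0444 permission leaves it read-only after boot, matching the load/boot-time tuning described in Ted's original patch.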