Stefan Hajnoczi
2021-Sep-27 08:09 UTC
[PATCH 2/2] virtio-blk: set NUMA affinity for a tagset
On Sun, Sep 26, 2021 at 05:55:18PM +0300, Max Gurtovoy wrote:
> To optimize performance, set the affinity of the block device tagset
> according to the virtio device affinity.
> 
> Signed-off-by: Max Gurtovoy <mgurtovoy at nvidia.com>
> ---
>  drivers/block/virtio_blk.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 9b3bd083b411..1c68c3e0ebf9 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -774,7 +774,7 @@ static int virtblk_probe(struct virtio_device *vdev)
>  	memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
>  	vblk->tag_set.ops = &virtio_mq_ops;
>  	vblk->tag_set.queue_depth = queue_depth;
> -	vblk->tag_set.numa_node = NUMA_NO_NODE;
> +	vblk->tag_set.numa_node = virtio_dev_to_node(vdev);
>  	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
>  	vblk->tag_set.cmd_size = sizeof(struct virtblk_req) +

I implemented NUMA affinity in the past and could not demonstrate a
performance improvement:
https://lists.linuxfoundation.org/pipermail/virtualization/2020-June/048248.html

The pathological case is when a guest with vNUMA has the virtio-blk-pci
device on the "wrong" host NUMA node. Then memory accesses have to cross
NUMA nodes. Still, it didn't seem to matter.

Please share your benchmark results. If you haven't collected data yet,
you could even combine our patches to see if it helps.

Thanks!
Stefan
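
For context, the virtio_dev_to_node() helper used in the hunk above comes
from patch 1/2 of this series, which is not shown in this message. A minimal
sketch of what such a helper could look like, assuming it simply reports the
NUMA node of the virtio device's parent (e.g. the PCI function for
virtio-pci), might be:

/* Sketch only -- the real helper is defined in patch 1/2 of the series.
 * Assumption: a virtio device's NUMA locality is that of its parent bus
 * device.
 */
#include <linux/device.h>
#include <linux/virtio.h>

static inline int virtio_dev_to_node(struct virtio_device *vdev)
{
	/* dev_to_node() returns NUMA_NO_NODE when no affinity is known,
	 * so non-NUMA systems keep the old behaviour.
	 */
	return dev_to_node(vdev->dev.parent);
}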
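
The reason the one-line change can matter at all is that blk-mq consults
tag_set.numa_node when allocating its per-hardware-queue structures, so
request and tag memory lands on the node the driver reports. A simplified
sketch of that allocation pattern follows; example_hctx_data and
example_alloc_hctx_data are hypothetical stand-ins, since blk-mq's real
structures are private to block/:

#include <linux/blk-mq.h>
#include <linux/slab.h>

/* Hypothetical stand-in for blk-mq's private per-hw-queue state. */
struct example_hctx_data {
	unsigned long stats[16];
};

/* Sketch of the pattern: memory is placed on the tag set's NUMA node;
 * NUMA_NO_NODE means "no preference" and falls back to local allocation.
 */
static struct example_hctx_data *
example_alloc_hctx_data(struct blk_mq_tag_set *set)
{
	return kzalloc_node(sizeof(struct example_hctx_data),
			    GFP_KERNEL, set->numa_node);
}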