The virtio spec already supports the virtio queue reset function. This patch
set adds this function to the kernel. The relevant virtio spec information is
here:

  https://github.com/oasis-tcs/virtio-spec/issues/124

As for MMIO support for queue reset, I plan to add it after this patch set is
merged.

Performing reset on a queue is divided into four steps:

 1. virtio_reset_vq()             - notify the device to reset the queue
 2. virtqueue_detach_unused_buf() - recycle the buffers submitted
 3. virtqueue_reset_vring()       - reset the vring (may re-alloc)
 4. virtio_enable_resetq()        - mmap vring to device, and enable the queue

(A driver-side usage sketch of these four steps follows the diffstat below.)

The first part of this patch set (patches 1-17) implements virtio PCI's
support and API for queue reset. The latter part makes virtio-net support
set_ringparam. This feature requires:

 1. virtio-net support for rx/tx queue reset
 2. find_vqs() support for specifying the max size of each vq
 3. virtio-net support for set_ringparam

 #1 -#3 : prepare
 #4 -#12: virtio ring supports resetting the vring of a vq
 #13-#14: add helpers
 #15-#17: virtio pci supports queue reset and re-enable
 #18-#21: find_vqs() supports sizes to specify the max size of each vq
 #23-#24: virtio-net supports rx/tx queue reset
 #22, #25, #26: virtio-net supports set_ringparam

Test environment:
  Host:     4.19.91
  Qemu:     QEMU emulator version 6.2.50 (with vq reset support)
  Test Cmd: ethtool -G eth1 rx $1 tx $2; ethtool -g eth1

The default is split mode; modify Qemu virtio-net to add the PACKED feature to
test packed mode.

Please review. Thanks.

v7:
  1. fix #6 subject typo
  2. fix #6 ring_size_in_bytes is uninitialized
  3. check by: make W=12

v6:
  1. virtio_pci: use synchronize_irq(irq) to sync the irq callbacks
  2. Introduce virtqueue_reset_vring() to implement the reset of vring during
     the reset process. May reuse the old vring if the num of the vq does not
     change.
  3. find_vqs() supports sizes to specify the max size of each vq

v5:
  1. add virtio-net support for set_ringparam

v4:
  1. just the code of virtio, without virtio-net
  2. Performing reset on a queue is divided into these steps:
     1. reset_vq: reset one vq
     2. recycle the buffer from vq by virtqueue_detach_unused_buf()
     3. release the ring of the vq by vring_release_virtqueue()
     4. enable_reset_vq: re-enable the reset queue
  3. Simplify the parameters of enable_reset_vq()
  4. add container structures for virtio_pci_common_cfg

v3:
  1. keep vq, irq unreleased

Xuan Zhuo (26):
  virtio_pci: struct virtio_pci_common_cfg add queue_notify_data
  virtio: queue_reset: add VIRTIO_F_RING_RESET
  virtio: add helper virtqueue_get_vring_max_size()
  virtio_ring: split: extract the logic of creating vring
  virtio_ring: split: extract the logic of init vq and attach vring
  virtio_ring: packed: extract the logic of creating vring
  virtio_ring: packed: extract the logic of init vq and attach vring
  virtio_ring: extract the logic of freeing vring
  virtio_ring: split: implement virtqueue_reset_vring_split()
  virtio_ring: packed: implement virtqueue_reset_vring_packed()
  virtio_ring: introduce virtqueue_reset_vring()
  virtio_ring: update the document of the virtqueue_detach_unused_buf for
    queue reset
  virtio: queue_reset: struct virtio_config_ops add callbacks for queue_reset
  virtio: add helper for queue reset
  virtio_pci: queue_reset: update struct virtio_pci_common_cfg and option
    functions
  virtio_pci: queue_reset: extract the logic of active vq for modern pci
  virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
  virtio: find_vqs() add arg sizes
  virtio_pci: support the arg sizes of find_vqs()
  virtio_mmio: support the arg sizes of find_vqs()
  virtio: add helper virtio_find_vqs_ctx_size()
  virtio_net: get ringparam by virtqueue_get_vring_max_size()
  virtio_net: split free_unused_bufs()
  virtio_net: support rx/tx queue reset
  virtio_net: set the default max ring size by find_vqs()
  virtio_net: support set_ringparam

 arch/um/drivers/virtio_uml.c             |   2 +-
 drivers/net/virtio_net.c                 | 257 ++++++++--
 drivers/platform/mellanox/mlxbf-tmfifo.c |   3 +-
 drivers/remoteproc/remoteproc_virtio.c   |   2 +-
 drivers/s390/virtio/virtio_ccw.c         |   2 +-
 drivers/virtio/virtio_mmio.c             |  12 +-
 drivers/virtio/virtio_pci_common.c       |  28 +-
 drivers/virtio/virtio_pci_common.h       |   3 +-
 drivers/virtio/virtio_pci_legacy.c       |   8 +-
 drivers/virtio/virtio_pci_modern.c       | 146 +++++-
 drivers/virtio/virtio_pci_modern_dev.c   |  36 ++
 drivers/virtio/virtio_ring.c             | 584 +++++++++++++++++------
 drivers/virtio/virtio_vdpa.c             |   2 +-
 include/linux/virtio.h                   |  12 +
 include/linux/virtio_config.h            |  74 ++-
 include/linux/virtio_pci_modern.h        |   2 +
 include/uapi/linux/virtio_config.h       |   7 +-
 include/uapi/linux/virtio_pci.h          |  14 +
 18 files changed, 979 insertions(+), 215 deletions(-)

-- 
2.31.0
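[Editor's note] For reviewers' convenience, here is a rough driver-side sketch
of the four-step flow described in the cover letter. It is not part of the
series; error handling and buffer refill are elided, and drv_free_buf() is a
hypothetical placeholder for however a driver frees its own buffers.

/* Sketch only: resize one virtqueue to 'num' entries using the new API. */
static int drv_resize_vq(struct virtqueue *vq, u32 num)
{
	void *buf;
	int err;

	/* 1. Tell the device to stop using this queue. */
	err = virtio_reset_vq(vq);
	if (err)
		return err;

	/* 2. Reclaim buffers that were still pending in the old vring. */
	while ((buf = virtqueue_detach_unused_buf(vq)))
		drv_free_buf(buf);	/* hypothetical driver helper */

	/* 3. Re-create the vring; the old one is reused when num is 0 or unchanged. */
	err = virtqueue_reset_vring(vq, num);
	if (err)
		return err;

	/* 4. Re-program the ring addresses into the device and re-enable the queue. */
	return virtio_enable_resetq(vq);
}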
Xuan Zhuo
2022-Mar-08 12:34 UTC
[PATCH v7 01/26] virtio_pci: struct virtio_pci_common_cfg add queue_notify_data
Add queue_notify_data to struct virtio_pci_common_cfg. It comes from:

  https://github.com/oasis-tcs/virtio-spec/issues/89

To avoid breaking the uABI, add a new struct virtio_pci_common_cfg_notify
instead of extending struct virtio_pci_common_cfg directly.

Since I want to add queue_reset after queue_notify_data, I submit this patch
first.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 include/uapi/linux/virtio_pci.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h
index 3a86f36d7e3d..22bec9bd0dfc 100644
--- a/include/uapi/linux/virtio_pci.h
+++ b/include/uapi/linux/virtio_pci.h
@@ -166,6 +166,13 @@ struct virtio_pci_common_cfg {
 	__le32 queue_used_hi;		/* read-write */
 };
 
+struct virtio_pci_common_cfg_notify {
+	struct virtio_pci_common_cfg cfg;
+
+	__le16 queue_notify_data;	/* read-write */
+	__le16 padding;
+};
+
 /* Fields in VIRTIO_PCI_CAP_PCI_CFG: */
 struct virtio_pci_cfg_cap {
 	struct virtio_pci_cap cap;
-- 
2.31.0
Xuan Zhuo
2022-Mar-08 12:34 UTC
[PATCH v7 02/26] virtio: queue_reset: add VIRTIO_F_RING_RESET
Add VIRTIO_F_RING_RESET. It comes from:

  https://github.com/oasis-tcs/virtio-spec/issues/124

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 include/uapi/linux/virtio_config.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
index b5eda06f0d57..0862be802ff8 100644
--- a/include/uapi/linux/virtio_config.h
+++ b/include/uapi/linux/virtio_config.h
@@ -52,7 +52,7 @@
  * rest are per-device feature bits.
  */
 #define VIRTIO_TRANSPORT_F_START	28
-#define VIRTIO_TRANSPORT_F_END		38
+#define VIRTIO_TRANSPORT_F_END		41
 
 #ifndef VIRTIO_CONFIG_NO_LEGACY
 /* Do we get callbacks when the ring is completely used, even if we've
@@ -92,4 +92,9 @@
  * Does the device support Single Root I/O Virtualization?
  */
 #define VIRTIO_F_SR_IOV			37
+
+/*
+ * This feature indicates that the driver can reset a queue individually.
+ */
+#define VIRTIO_F_RING_RESET		40
 #endif /* _UAPI_LINUX_VIRTIO_CONFIG_H */
-- 
2.31.0
Xuan Zhuo
2022-Mar-08 12:34 UTC
[PATCH v7 03/26] virtio: add helper virtqueue_get_vring_max_size()
Record the maximum queue num supported by the device. virtio-net can display the maximum (supported by hardware) ring size in ethtool -g eth0. When the subsequent patch implements vring reset, it can judge whether the ring size passed by the driver is legal based on this. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_mmio.c | 2 ++ drivers/virtio/virtio_pci_legacy.c | 2 ++ drivers/virtio/virtio_pci_modern.c | 2 ++ drivers/virtio/virtio_ring.c | 14 ++++++++++++++ include/linux/virtio.h | 2 ++ 5 files changed, 22 insertions(+) diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index 56128b9c46eb..a41abc8051b9 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -390,6 +390,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index, goto error_new_virtqueue; } + vq->num_max = num; + /* Activate the queue */ writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM); if (vm_dev->version == 1) { diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c index 34141b9abe27..b68934fe6b5d 100644 --- a/drivers/virtio/virtio_pci_legacy.c +++ b/drivers/virtio/virtio_pci_legacy.c @@ -135,6 +135,8 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev, if (!vq) return ERR_PTR(-ENOMEM); + vq->num_max = num; + q_pfn = virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT; if (q_pfn >> 32) { dev_err(&vp_dev->pci_dev->dev, diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c index 5455bc041fb6..86d301f272b8 100644 --- a/drivers/virtio/virtio_pci_modern.c +++ b/drivers/virtio/virtio_pci_modern.c @@ -218,6 +218,8 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev, if (!vq) return ERR_PTR(-ENOMEM); + vq->num_max = num; + /* activate the queue */ vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq)); vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq), diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 962f1477b1fa..b87130c8f312 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2371,6 +2371,20 @@ void vring_transport_features(struct virtio_device *vdev) } EXPORT_SYMBOL_GPL(vring_transport_features); +/** + * virtqueue_get_vring_max_size - return the max size of the virtqueue's vring + * @_vq: the struct virtqueue containing the vring of interest. + * + * Returns the max size of the vring. + * + * Unlike other operations, this need not be serialized. + */ +unsigned int virtqueue_get_vring_max_size(struct virtqueue *_vq) +{ + return _vq->num_max; +} +EXPORT_SYMBOL_GPL(virtqueue_get_vring_max_size); + /** * virtqueue_get_vring_size - return the size of the virtqueue's vring * @_vq: the struct virtqueue containing the vring of interest. diff --git a/include/linux/virtio.h b/include/linux/virtio.h index 72292a62cd90..d59adc4be068 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -31,6 +31,7 @@ struct virtqueue { struct virtio_device *vdev; unsigned int index; unsigned int num_free; + unsigned int num_max; void *priv; }; @@ -80,6 +81,7 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *vq); void *virtqueue_detach_unused_buf(struct virtqueue *vq); +unsigned int virtqueue_get_vring_max_size(struct virtqueue *vq); unsigned int virtqueue_get_vring_size(struct virtqueue *vq); bool virtqueue_is_broken(struct virtqueue *vq); -- 2.31.0
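[Editor's note] Not part of this patch: a minimal sketch of how the later
virtio-net ethtool change (patch 22) is expected to consume the new helper.
The .get_ringparam argument list is abbreviated here, and the vi->rq[]/vi->sq[]
layout is taken from the existing virtio-net driver.

/* Sketch: report both the current and the maximum supported ring sizes. */
static void virtnet_get_ringparam(struct net_device *dev,
				  struct ethtool_ringparam *ring)
{
	struct virtnet_info *vi = netdev_priv(dev);

	/* Maximum supported by the device/backing hardware. */
	ring->rx_max_pending = virtqueue_get_vring_max_size(vi->rq[0].vq);
	ring->tx_max_pending = virtqueue_get_vring_max_size(vi->sq[0].vq);
	/* Currently configured ring size. */
	ring->rx_pending = virtqueue_get_vring_size(vi->rq[0].vq);
	ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
}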
Xuan Zhuo
2022-Mar-08 12:34 UTC
[PATCH v7 04/26] virtio_ring: split: extract the logic of creating vring
Separate the logic of split to create vring queue. For the convenience of passing parameters, add a structure vring_split. This feature is required for subsequent virtuqueue reset vring. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 74 +++++++++++++++++++++++++----------- 1 file changed, 51 insertions(+), 23 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index b87130c8f312..d32793615451 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -85,6 +85,13 @@ struct vring_desc_extra { u16 next; /* The next desc state in a list. */ }; +struct vring_split { + void *queue; + dma_addr_t dma_addr; + size_t queue_size_in_bytes; + struct vring vring; +}; + struct vring_virtqueue { struct virtqueue vq; @@ -915,28 +922,21 @@ static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq) return NULL; } -static struct virtqueue *vring_create_virtqueue_split( - unsigned int index, - unsigned int num, - unsigned int vring_align, - struct virtio_device *vdev, - bool weak_barriers, - bool may_reduce_num, - bool context, - bool (*notify)(struct virtqueue *), - void (*callback)(struct virtqueue *), - const char *name) +static int vring_create_vring_split(struct vring_split *vring, + struct virtio_device *vdev, + unsigned int vring_align, + bool weak_barriers, + bool may_reduce_num, + u32 num) { - struct virtqueue *vq; void *queue = NULL; dma_addr_t dma_addr; size_t queue_size_in_bytes; - struct vring vring; /* We assume num is a power of 2. */ if (num & (num - 1)) { dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num); - return NULL; + return -EINVAL; } /* TODO: allocate each queue chunk individually */ @@ -947,11 +947,11 @@ static struct virtqueue *vring_create_virtqueue_split( if (queue) break; if (!may_reduce_num) - return NULL; + return -ENOMEM; } if (!num) - return NULL; + return -ENOMEM; if (!queue) { /* Try to get a single page. You are my only hope! 
*/ @@ -959,21 +959,49 @@ static struct virtqueue *vring_create_virtqueue_split( &dma_addr, GFP_KERNEL|__GFP_ZERO); } if (!queue) - return NULL; + return -ENOMEM; queue_size_in_bytes = vring_size(num, vring_align); - vring_init(&vring, num, queue, vring_align); + vring_init(&vring->vring, num, queue, vring_align); + + vring->dma_addr = dma_addr; + vring->queue = queue; + vring->queue_size_in_bytes = queue_size_in_bytes; + + return 0; +} + +static struct virtqueue *vring_create_virtqueue_split( + unsigned int index, + unsigned int num, + unsigned int vring_align, + struct virtio_device *vdev, + bool weak_barriers, + bool may_reduce_num, + bool context, + bool (*notify)(struct virtqueue *), + void (*callback)(struct virtqueue *), + const char *name) +{ + struct vring_split vring; + struct virtqueue *vq; + int err; + + err = vring_create_vring_split(&vring, vdev, vring_align, weak_barriers, + may_reduce_num, num); + if (err) + return NULL; - vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers, context, + vq = __vring_new_virtqueue(index, vring.vring, vdev, weak_barriers, context, notify, callback, name); if (!vq) { - vring_free_queue(vdev, queue_size_in_bytes, queue, - dma_addr); + vring_free_queue(vdev, vring.queue_size_in_bytes, vring.queue, + vring.dma_addr); return NULL; } - to_vvq(vq)->split.queue_dma_addr = dma_addr; - to_vvq(vq)->split.queue_size_in_bytes = queue_size_in_bytes; + to_vvq(vq)->split.queue_dma_addr = vring.dma_addr; + to_vvq(vq)->split.queue_size_in_bytes = vring.queue_size_in_bytes; to_vvq(vq)->we_own_ring = true; return vq; -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:34 UTC
[PATCH v7 05/26] virtio_ring: split: extract the logic of init vq and attach vring
Split the logic of split assignment vq into three parts. 1. The assignment passed from the function parameter 2. The part that attaches vring to vq. -- __vring_virtqueue_attach_split() 3. The part that initializes vq to a fixed value -- __vring_virtqueue_init_split() This feature is required for subsequent virtuqueue reset vring Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 111 +++++++++++++++++++++-------------- 1 file changed, 67 insertions(+), 44 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index d32793615451..dc6313b79305 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2196,34 +2196,40 @@ irqreturn_t vring_interrupt(int irq, void *_vq) } EXPORT_SYMBOL_GPL(vring_interrupt); -/* Only available for split ring */ -struct virtqueue *__vring_new_virtqueue(unsigned int index, - struct vring vring, - struct virtio_device *vdev, - bool weak_barriers, - bool context, - bool (*notify)(struct virtqueue *), - void (*callback)(struct virtqueue *), - const char *name) +static int __vring_virtqueue_attach_split(struct vring_virtqueue *vq, + struct virtio_device *vdev, + struct vring vring) { - struct vring_virtqueue *vq; + vq->vq.num_free = vring.num; - if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) - return NULL; + vq->split.vring = vring; + vq->split.queue_dma_addr = 0; + vq->split.queue_size_in_bytes = 0; - vq = kmalloc(sizeof(*vq), GFP_KERNEL); - if (!vq) - return NULL; + vq->split.desc_state = kmalloc_array(vring.num, + sizeof(struct vring_desc_state_split), GFP_KERNEL); + if (!vq->split.desc_state) + goto err_state; + vq->split.desc_extra = vring_alloc_desc_extra(vq, vring.num); + if (!vq->split.desc_extra) + goto err_extra; + + memset(vq->split.desc_state, 0, vring.num * + sizeof(struct vring_desc_state_split)); + return 0; + +err_extra: + kfree(vq->split.desc_state); +err_state: + return -ENOMEM; +} + +static void __vring_virtqueue_init_split(struct vring_virtqueue *vq, + struct virtio_device *vdev) +{ vq->packed_ring = false; - vq->vq.callback = callback; - vq->vq.vdev = vdev; - vq->vq.name = name; - vq->vq.num_free = vring.num; - vq->vq.index = index; vq->we_own_ring = false; - vq->notify = notify; - vq->weak_barriers = weak_barriers; vq->broken = false; vq->last_used_idx = 0; vq->event_triggered = false; @@ -2234,50 +2240,67 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index, vq->last_add_time_valid = false; #endif - vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) && - !context; vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX); if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM)) vq->weak_barriers = false; - vq->split.queue_dma_addr = 0; - vq->split.queue_size_in_bytes = 0; - - vq->split.vring = vring; vq->split.avail_flags_shadow = 0; vq->split.avail_idx_shadow = 0; /* No callback? Tell other side not to bother us. */ - if (!callback) { + if (!vq->vq.callback) { vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT; if (!vq->event) vq->split.vring.avail->flags = cpu_to_virtio16(vdev, vq->split.avail_flags_shadow); } - vq->split.desc_state = kmalloc_array(vring.num, - sizeof(struct vring_desc_state_split), GFP_KERNEL); - if (!vq->split.desc_state) - goto err_state; - - vq->split.desc_extra = vring_alloc_desc_extra(vq, vring.num); - if (!vq->split.desc_extra) - goto err_extra; - /* Put everything in free lists. 
*/ vq->free_head = 0; - memset(vq->split.desc_state, 0, vring.num * - sizeof(struct vring_desc_state_split)); +} + +/* Only available for split ring */ +struct virtqueue *__vring_new_virtqueue(unsigned int index, + struct vring vring, + struct virtio_device *vdev, + bool weak_barriers, + bool context, + bool (*notify)(struct virtqueue *), + void (*callback)(struct virtqueue *), + const char *name) +{ + struct vring_virtqueue *vq; + int err; + + if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) + return NULL; + + vq = kmalloc(sizeof(*vq), GFP_KERNEL); + if (!vq) + return NULL; + + vq->vq.callback = callback; + vq->vq.vdev = vdev; + vq->vq.name = name; + vq->vq.index = index; + vq->notify = notify; + vq->weak_barriers = weak_barriers; + vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) && + !context; + + err = __vring_virtqueue_attach_split(vq, vdev, vring); + if (err) + goto err; + + __vring_virtqueue_init_split(vq, vdev); spin_lock(&vdev->vqs_list_lock); list_add_tail(&vq->vq.list, &vdev->vqs); spin_unlock(&vdev->vqs_list_lock); - return &vq->vq; -err_extra: - kfree(vq->split.desc_state); -err_state: + return &vq->vq; +err: kfree(vq); return NULL; } -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:34 UTC
[PATCH v7 06/26] virtio_ring: packed: extract the logic of creating vring
Separate the logic of packed to create vring queue. For the convenience of passing parameters, add a structure vring_packed. This feature is required for subsequent virtuqueue reset vring. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 121 ++++++++++++++++++++++++++--------- 1 file changed, 92 insertions(+), 29 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index dc6313b79305..1af98b112996 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -92,6 +92,18 @@ struct vring_split { struct vring vring; }; +struct vring_packed { + u32 num; + struct vring_packed_desc *ring; + struct vring_packed_desc_event *driver; + struct vring_packed_desc_event *device; + dma_addr_t ring_dma_addr; + dma_addr_t driver_event_dma_addr; + dma_addr_t device_event_dma_addr; + size_t ring_size_in_bytes; + size_t event_size_in_bytes; +}; + struct vring_virtqueue { struct virtqueue vq; @@ -1683,45 +1695,101 @@ static struct vring_desc_extra *vring_alloc_desc_extra(struct vring_virtqueue *v return desc_extra; } -static struct virtqueue *vring_create_virtqueue_packed( - unsigned int index, - unsigned int num, - unsigned int vring_align, - struct virtio_device *vdev, - bool weak_barriers, - bool may_reduce_num, - bool context, - bool (*notify)(struct virtqueue *), - void (*callback)(struct virtqueue *), - const char *name) +static void vring_free_vring_packed(struct vring_packed *vring, + struct virtio_device *vdev) +{ + dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr; + struct vring_packed_desc_event *driver, *device; + size_t ring_size_in_bytes, event_size_in_bytes; + struct vring_packed_desc *ring; + + ring = vring->ring; + driver = vring->driver; + device = vring->device; + ring_size_in_bytes = vring->ring_size_in_bytes; + event_size_in_bytes = vring->event_size_in_bytes; + ring_dma_addr = vring->ring_dma_addr; + driver_event_dma_addr = vring->driver_event_dma_addr; + device_event_dma_addr = vring->device_event_dma_addr; + + if (device) + vring_free_queue(vdev, event_size_in_bytes, device, device_event_dma_addr); + + if (driver) + vring_free_queue(vdev, event_size_in_bytes, driver, driver_event_dma_addr); + + if (ring) + vring_free_queue(vdev, ring_size_in_bytes, ring, ring_dma_addr); +} + +static int vring_create_vring_packed(struct vring_packed *vring, + struct virtio_device *vdev, + u32 num) { - struct vring_virtqueue *vq; struct vring_packed_desc *ring; struct vring_packed_desc_event *driver, *device; dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr; size_t ring_size_in_bytes, event_size_in_bytes; + memset(vring, 0, sizeof(*vring)); + ring_size_in_bytes = num * sizeof(struct vring_packed_desc); ring = vring_alloc_queue(vdev, ring_size_in_bytes, &ring_dma_addr, GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); if (!ring) - goto err_ring; + goto err; + + vring->num = num; + vring->ring = ring; + vring->ring_size_in_bytes = ring_size_in_bytes; + vring->ring_dma_addr = ring_dma_addr; event_size_in_bytes = sizeof(struct vring_packed_desc_event); + vring->event_size_in_bytes = event_size_in_bytes; driver = vring_alloc_queue(vdev, event_size_in_bytes, &driver_event_dma_addr, GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); if (!driver) - goto err_driver; + goto err; + + vring->driver = driver; + vring->driver_event_dma_addr = driver_event_dma_addr; device = vring_alloc_queue(vdev, event_size_in_bytes, &device_event_dma_addr, GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); if (!device) - goto err_device; 
+ goto err; + + vring->device = device; + vring->device_event_dma_addr = device_event_dma_addr; + return 0; + +err: + vring_free_vring_packed(vring, vdev); + return -ENOMEM; +} + +static struct virtqueue *vring_create_virtqueue_packed( + unsigned int index, + unsigned int num, + unsigned int vring_align, + struct virtio_device *vdev, + bool weak_barriers, + bool may_reduce_num, + bool context, + bool (*notify)(struct virtqueue *), + void (*callback)(struct virtqueue *), + const char *name) +{ + struct vring_virtqueue *vq; + struct vring_packed vring; + + if (vring_create_vring_packed(&vring, vdev, num)) + goto err_vq; vq = kmalloc(sizeof(*vq), GFP_KERNEL); if (!vq) @@ -1753,17 +1821,17 @@ static struct virtqueue *vring_create_virtqueue_packed( if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM)) vq->weak_barriers = false; - vq->packed.ring_dma_addr = ring_dma_addr; - vq->packed.driver_event_dma_addr = driver_event_dma_addr; - vq->packed.device_event_dma_addr = device_event_dma_addr; + vq->packed.ring_dma_addr = vring.ring_dma_addr; + vq->packed.driver_event_dma_addr = vring.driver_event_dma_addr; + vq->packed.device_event_dma_addr = vring.device_event_dma_addr; - vq->packed.ring_size_in_bytes = ring_size_in_bytes; - vq->packed.event_size_in_bytes = event_size_in_bytes; + vq->packed.ring_size_in_bytes = vring.ring_size_in_bytes; + vq->packed.event_size_in_bytes = vring.event_size_in_bytes; vq->packed.vring.num = num; - vq->packed.vring.desc = ring; - vq->packed.vring.driver = driver; - vq->packed.vring.device = device; + vq->packed.vring.desc = vring.ring; + vq->packed.vring.driver = vring.driver; + vq->packed.vring.device = vring.device; vq->packed.next_avail_idx = 0; vq->packed.avail_wrap_counter = 1; @@ -1804,12 +1872,7 @@ static struct virtqueue *vring_create_virtqueue_packed( err_desc_state: kfree(vq); err_vq: - vring_free_queue(vdev, event_size_in_bytes, device, device_event_dma_addr); -err_device: - vring_free_queue(vdev, event_size_in_bytes, driver, driver_event_dma_addr); -err_driver: - vring_free_queue(vdev, ring_size_in_bytes, ring, ring_dma_addr); -err_ring: + vring_free_vring_packed(&vring, vdev); return NULL; } -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:34 UTC
[PATCH v7 07/26] virtio_ring: packed: extract the logic of init vq and attach vring
Split the logic of packed assignment vq into three parts. 1. The assignment passed from the function parameter 2. The part that attaches vring to vq. -- vring_virtqueue_attach_packed() 3. The part that initializes vq to a fixed value -- vring_virtqueue_init_packed() This feature is required for subsequent virtuqueue reset vring Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 138 +++++++++++++++++++++-------------- 1 file changed, 82 insertions(+), 56 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 1af98b112996..b5a9bf4f45b3 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -1773,36 +1773,53 @@ static int vring_create_vring_packed(struct vring_packed *vring, return -ENOMEM; } -static struct virtqueue *vring_create_virtqueue_packed( - unsigned int index, - unsigned int num, - unsigned int vring_align, - struct virtio_device *vdev, - bool weak_barriers, - bool may_reduce_num, - bool context, - bool (*notify)(struct virtqueue *), - void (*callback)(struct virtqueue *), - const char *name) +static int vring_virtqueue_attach_packed(struct vring_virtqueue *vq, + struct vring_packed *vring, + struct virtio_device *vdev) { - struct vring_virtqueue *vq; - struct vring_packed vring; - - if (vring_create_vring_packed(&vring, vdev, num)) - goto err_vq; + u32 num; - vq = kmalloc(sizeof(*vq), GFP_KERNEL); - if (!vq) - goto err_vq; + num = vring->num; - vq->vq.callback = callback; - vq->vq.vdev = vdev; - vq->vq.name = name; vq->vq.num_free = num; - vq->vq.index = index; + + vq->packed.ring_dma_addr = vring->ring_dma_addr; + vq->packed.driver_event_dma_addr = vring->driver_event_dma_addr; + vq->packed.device_event_dma_addr = vring->device_event_dma_addr; + + vq->packed.ring_size_in_bytes = vring->ring_size_in_bytes; + vq->packed.event_size_in_bytes = vring->event_size_in_bytes; + + vq->packed.vring.num = num; + vq->packed.vring.desc = vring->ring; + vq->packed.vring.driver = vring->driver; + vq->packed.vring.device = vring->device; + + vq->packed.desc_state = kmalloc_array(num, + sizeof(struct vring_desc_state_packed), + GFP_KERNEL); + if (!vq->packed.desc_state) + goto err_desc_state; + + memset(vq->packed.desc_state, 0, + num * sizeof(struct vring_desc_state_packed)); + + vq->packed.desc_extra = vring_alloc_desc_extra(vq, num); + if (!vq->packed.desc_extra) + goto err_desc_extra; + + return 0; + +err_desc_extra: + kfree(vq->packed.desc_state); +err_desc_state: + return -ENOMEM; +} + +static void vring_virtqueue_init_packed(struct vring_virtqueue *vq, + struct virtio_device *vdev) +{ vq->we_own_ring = true; - vq->notify = notify; - vq->weak_barriers = weak_barriers; vq->broken = false; vq->last_used_idx = 0; vq->event_triggered = false; @@ -1814,62 +1831,71 @@ static struct virtqueue *vring_create_virtqueue_packed( vq->last_add_time_valid = false; #endif - vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) && - !context; vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX); if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM)) vq->weak_barriers = false; - vq->packed.ring_dma_addr = vring.ring_dma_addr; - vq->packed.driver_event_dma_addr = vring.driver_event_dma_addr; - vq->packed.device_event_dma_addr = vring.device_event_dma_addr; - - vq->packed.ring_size_in_bytes = vring.ring_size_in_bytes; - vq->packed.event_size_in_bytes = vring.event_size_in_bytes; - - vq->packed.vring.num = num; - vq->packed.vring.desc = vring.ring; - vq->packed.vring.driver = 
vring.driver; - vq->packed.vring.device = vring.device; - vq->packed.next_avail_idx = 0; vq->packed.avail_wrap_counter = 1; vq->packed.used_wrap_counter = 1; vq->packed.event_flags_shadow = 0; vq->packed.avail_used_flags = 1 << VRING_PACKED_DESC_F_AVAIL; - vq->packed.desc_state = kmalloc_array(num, - sizeof(struct vring_desc_state_packed), - GFP_KERNEL); - if (!vq->packed.desc_state) - goto err_desc_state; - - memset(vq->packed.desc_state, 0, - num * sizeof(struct vring_desc_state_packed)); - /* Put everything in free lists. */ vq->free_head = 0; - vq->packed.desc_extra = vring_alloc_desc_extra(vq, num); - if (!vq->packed.desc_extra) - goto err_desc_extra; - /* No callback? Tell other side not to bother us. */ - if (!callback) { + if (!vq->vq.callback) { vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE; vq->packed.vring.driver->flags cpu_to_le16(vq->packed.event_flags_shadow); } +} + +static struct virtqueue *vring_create_virtqueue_packed( + unsigned int index, + unsigned int num, + unsigned int vring_align, + struct virtio_device *vdev, + bool weak_barriers, + bool may_reduce_num, + bool context, + bool (*notify)(struct virtqueue *), + void (*callback)(struct virtqueue *), + const char *name) +{ + struct vring_virtqueue *vq; + struct vring_packed vring; + + if (vring_create_vring_packed(&vring, vdev, num)) + goto err_vq; + + vq = kmalloc(sizeof(*vq), GFP_KERNEL); + if (!vq) + goto err_vq; + + vq->vq.callback = callback; + vq->vq.vdev = vdev; + vq->vq.name = name; + vq->vq.index = index; + vq->notify = notify; + vq->weak_barriers = weak_barriers; + vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) && + !context; + + if (vring_virtqueue_attach_packed(vq, &vring, vdev)) + goto err; + + vring_virtqueue_init_packed(vq, vdev); spin_lock(&vdev->vqs_list_lock); list_add_tail(&vq->vq.list, &vdev->vqs); spin_unlock(&vdev->vqs_list_lock); + return &vq->vq; -err_desc_extra: - kfree(vq->packed.desc_state); -err_desc_state: +err: kfree(vq); err_vq: vring_free_vring_packed(&vring, vdev); -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 08/26] virtio_ring: extract the logic of freeing vring
Introduce vring_free() to free the vring of vq. Prevent double free by setting vq->reset. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 25 ++++++++++++++++++++----- include/linux/virtio.h | 8 ++++++++ 2 files changed, 28 insertions(+), 5 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index b5a9bf4f45b3..e0422c04c903 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2442,14 +2442,10 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, } EXPORT_SYMBOL_GPL(vring_new_virtqueue); -void vring_del_virtqueue(struct virtqueue *_vq) +static void __vring_free(struct virtqueue *_vq) { struct vring_virtqueue *vq = to_vvq(_vq); - spin_lock(&vq->vq.vdev->vqs_list_lock); - list_del(&_vq->list); - spin_unlock(&vq->vq.vdev->vqs_list_lock); - if (vq->we_own_ring) { if (vq->packed_ring) { vring_free_queue(vq->vq.vdev, @@ -2480,6 +2476,25 @@ void vring_del_virtqueue(struct virtqueue *_vq) kfree(vq->split.desc_state); kfree(vq->split.desc_extra); } +} + +static void vring_free(struct virtqueue *vq) +{ + __vring_free(vq); + vq->reset = VIRTIO_VQ_RESET_STEP_VRING_RELEASE; +} + +void vring_del_virtqueue(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + spin_lock(&vq->vq.vdev->vqs_list_lock); + list_del(&_vq->list); + spin_unlock(&vq->vq.vdev->vqs_list_lock); + + if (_vq->reset != VIRTIO_VQ_RESET_STEP_VRING_RELEASE) + __vring_free(_vq); + kfree(vq); } EXPORT_SYMBOL_GPL(vring_del_virtqueue); diff --git a/include/linux/virtio.h b/include/linux/virtio.h index d59adc4be068..e3714e6db330 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -10,6 +10,13 @@ #include <linux/mod_devicetable.h> #include <linux/gfp.h> +enum virtio_vq_reset_step { + VIRTIO_VQ_RESET_STEP_NONE, + VIRTIO_VQ_RESET_STEP_DEVICE, + VIRTIO_VQ_RESET_STEP_VRING_RELEASE, + VIRTIO_VQ_RESET_STEP_VRING_ATTACH, +}; + /** * virtqueue - a queue to register buffers for sending or receiving. * @list: the chain of virtqueues for this device @@ -33,6 +40,7 @@ struct virtqueue { unsigned int num_free; unsigned int num_max; void *priv; + enum virtio_vq_reset_step reset; }; int virtqueue_add_outbuf(struct virtqueue *vq, -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 09/26] virtio_ring: split: implement virtqueue_reset_vring_split()
virtio ring supports reset. Queue reset is divided into several stages. 1. notify device queue reset 2. vring release 3. attach new vring 4. notify device queue re-enable After the first step is completed, the vring reset operation can be performed. If the newly set vring num does not change, then just reset the vq related value. Otherwise, the vring will be released and the vring will be reallocated. And the vring will be attached to the vq. If this process fails, the function will exit, and the state of the vq will be the vring release state. You can call this function again to reallocate the vring. In addition, vring_align, may_reduce_num are necessary for reallocating vring, so they are retained when creating vq. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 69 ++++++++++++++++++++++++++++++++++++ 1 file changed, 69 insertions(+) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index e0422c04c903..148fb1fd3d5a 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -158,6 +158,12 @@ struct vring_virtqueue { /* DMA address and size information */ dma_addr_t queue_dma_addr; size_t queue_size_in_bytes; + + /* The parameters for creating vrings are reserved for + * creating new vrings when enabling reset queue. + */ + u32 vring_align; + bool may_reduce_num; } split; /* Available for packed ring */ @@ -217,6 +223,12 @@ struct vring_virtqueue { #endif }; +static void vring_free(struct virtqueue *vq); +static void __vring_virtqueue_init_split(struct vring_virtqueue *vq, + struct virtio_device *vdev); +static int __vring_virtqueue_attach_split(struct vring_virtqueue *vq, + struct virtio_device *vdev, + struct vring vring); /* * Helpers. @@ -1012,6 +1024,8 @@ static struct virtqueue *vring_create_virtqueue_split( return NULL; } + to_vvq(vq)->split.vring_align = vring_align; + to_vvq(vq)->split.may_reduce_num = may_reduce_num; to_vvq(vq)->split.queue_dma_addr = vring.dma_addr; to_vvq(vq)->split.queue_size_in_bytes = vring.queue_size_in_bytes; to_vvq(vq)->we_own_ring = true; @@ -1019,6 +1033,59 @@ static struct virtqueue *vring_create_virtqueue_split( return vq; } +static int virtqueue_reset_vring_split(struct virtqueue *_vq, u32 num) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + struct virtio_device *vdev = _vq->vdev; + struct vring_split vring; + int err; + + if (num > _vq->num_max) + return -E2BIG; + + switch (vq->vq.reset) { + case VIRTIO_VQ_RESET_STEP_NONE: + return -ENOENT; + + case VIRTIO_VQ_RESET_STEP_VRING_ATTACH: + case VIRTIO_VQ_RESET_STEP_DEVICE: + if (vq->split.vring.num == num || !num) + break; + + vring_free(_vq); + + fallthrough; + + case VIRTIO_VQ_RESET_STEP_VRING_RELEASE: + if (!num) + num = vq->split.vring.num; + + err = vring_create_vring_split(&vring, vdev, + vq->split.vring_align, + vq->weak_barriers, + vq->split.may_reduce_num, num); + if (err) + return -ENOMEM; + + err = __vring_virtqueue_attach_split(vq, vdev, vring.vring); + if (err) { + vring_free_queue(vdev, vring.queue_size_in_bytes, + vring.queue, + vring.dma_addr); + return -ENOMEM; + } + + vq->split.queue_dma_addr = vring.dma_addr; + vq->split.queue_size_in_bytes = vring.queue_size_in_bytes; + } + + __vring_virtqueue_init_split(vq, vdev); + vq->we_own_ring = true; + vq->vq.reset = VIRTIO_VQ_RESET_STEP_VRING_ATTACH; + + return 0; +} + /* * Packed ring specific functions - *_packed(). 
@@ -2317,6 +2384,8 @@ static int __vring_virtqueue_attach_split(struct vring_virtqueue *vq, static void __vring_virtqueue_init_split(struct vring_virtqueue *vq, struct virtio_device *vdev) { + vq->vq.reset = VIRTIO_VQ_RESET_STEP_NONE; + vq->packed_ring = false; vq->we_own_ring = false; vq->broken = false; -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 10/26] virtio_ring: packed: implement virtqueue_reset_vring_packed()
virtio ring supports reset. Queue reset is divided into several stages. 1. notify device queue reset 2. vring release 3. attach new vring 4. notify device queue re-enable After the first step is completed, the vring reset operation can be performed. If the newly set vring num does not change, then just reset the vq related value. Otherwise, the vring will be released and the vring will be reallocated. And the vring will be attached to the vq. If this process fails, the function will exit, and the state of the vq will be the vring release state. You can call this function again to reallocate the vring. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 46 ++++++++++++++++++++++++++++++++++++ 1 file changed, 46 insertions(+) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 148fb1fd3d5a..5afcbabcfb1e 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -1886,6 +1886,8 @@ static int vring_virtqueue_attach_packed(struct vring_virtqueue *vq, static void vring_virtqueue_init_packed(struct vring_virtqueue *vq, struct virtio_device *vdev) { + vq->vq.reset = VIRTIO_VQ_RESET_STEP_NONE; + vq->we_own_ring = true; vq->broken = false; vq->last_used_idx = 0; @@ -1969,6 +1971,50 @@ static struct virtqueue *vring_create_virtqueue_packed( return NULL; } +static int virtqueue_reset_vring_packed(struct virtqueue *_vq, u32 num) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + struct virtio_device *vdev = _vq->vdev; + struct vring_packed vring; + int err; + + if (num > _vq->num_max) + return -E2BIG; + + switch (vq->vq.reset) { + case VIRTIO_VQ_RESET_STEP_NONE: + return -ENOENT; + + case VIRTIO_VQ_RESET_STEP_VRING_ATTACH: + case VIRTIO_VQ_RESET_STEP_DEVICE: + if (vq->packed.vring.num == num || !num) + break; + + vring_free(_vq); + + fallthrough; + + case VIRTIO_VQ_RESET_STEP_VRING_RELEASE: + if (!num) + num = vq->packed.vring.num; + + err = vring_create_vring_packed(&vring, vdev, num); + if (err) + return -ENOMEM; + + err = vring_virtqueue_attach_packed(vq, &vring, vdev); + if (err) { + vring_free_vring_packed(&vring, vdev); + return -ENOMEM; + } + } + + vring_virtqueue_init_packed(vq, vdev); + vq->vq.reset = VIRTIO_VQ_RESET_STEP_VRING_ATTACH; + + return 0; +} + /* * Generic functions and exported symbols. -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 11/26] virtio_ring: introduce virtqueue_reset_vring()
Introduce virtqueue_reset_vring() to implement the reset of vring during the reset process. If num is equal to 0 or equal to the original ring num, the original vring will be used directly. The vring will not be reallocated. Otherwise, the original vring will be released, and the vring will be re-allocated based on num. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_ring.c | 30 ++++++++++++++++++++++++++++++ include/linux/virtio.h | 2 ++ 2 files changed, 32 insertions(+) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 5afcbabcfb1e..bbff9ba53f80 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2534,6 +2534,36 @@ struct virtqueue *vring_create_virtqueue( } EXPORT_SYMBOL_GPL(vring_create_virtqueue); +/** + * virtqueue_reset_vring - reset the vring of vq + * @vq: the struct virtqueue we're talking about. + * @num: new ring num + * + * If num is equal to 0 or equal to the original ring num, the original vring + * will be used directly. The vring will not be reallocated. Otherwise, the + * original vring will be released, and the vring will be re-allocated based on + * num. + * + * This function must be called after virtio_reset_vq(). For more information on + * vq reset see the description of virtio_reset_vq(). + * + * + * Caller must ensure we don't call this with other virtqueue operations + * at the same time (except where noted). + * + * Returns zero or a negative error. + */ +int virtqueue_reset_vring(struct virtqueue *vq, u32 num) +{ + struct virtio_device *vdev = vq->vdev; + + if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) + return virtqueue_reset_vring_packed(vq, num); + + return virtqueue_reset_vring_split(vq, num); +} +EXPORT_SYMBOL_GPL(virtqueue_reset_vring); + /* Only available for split ring */ struct virtqueue *vring_new_virtqueue(unsigned int index, unsigned int num, diff --git a/include/linux/virtio.h b/include/linux/virtio.h index e3714e6db330..7bf29f9e7491 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -99,6 +99,8 @@ dma_addr_t virtqueue_get_desc_addr(struct virtqueue *vq); dma_addr_t virtqueue_get_avail_addr(struct virtqueue *vq); dma_addr_t virtqueue_get_used_addr(struct virtqueue *vq); +int virtqueue_reset_vring(struct virtqueue *vq, u32 num); + /** * virtio_device - representation of a device using virtio * @index: unique position on the virtio bus -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 12/26] virtio_ring: update the document of the virtqueue_detach_unused_buf for queue reset
Update the documentation of virtqueue_detach_unused_buf() to note that it may
also be called during a queue reset.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index bbff9ba53f80..f388be7562cd 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2357,8 +2357,8 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed);
  * @_vq: the struct virtqueue we're talking about.
  *
  * Returns NULL or the "data" token handed to virtqueue_add_*().
- * This is not valid on an active queue; it is useful only for device
- * shutdown.
+ * This is not valid on an active queue; it is useful for device
+ * shutdown or the reset queue.
  */
 void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
 {
-- 
2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 13/26] virtio: queue_reset: struct virtio_config_ops add callbacks for queue_reset
Performing reset on a queue is divided into four steps: 1. reset_vq() - notify the device to reset the queue 2. virtqueue_detach_unused_buf() - recycle the buffer submitted 3. virtqueue_reset_vring() - reset the vring (may re-alloc) 4. enable_reset_vq() - mmap vring to device, and enable the queue So add two callbacks reset_vq, enable_reset_vq to struct virtio_config_ops. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- include/linux/virtio_config.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h index 4d107ad31149..d51906b1389f 100644 --- a/include/linux/virtio_config.h +++ b/include/linux/virtio_config.h @@ -74,6 +74,15 @@ struct virtio_shm_region { * @set_vq_affinity: set the affinity for a virtqueue (optional). * @get_vq_affinity: get the affinity for a virtqueue (optional). * @get_shm_region: get a shared memory region based on the index. + * @reset_vq: reset a queue individually (optional). + * vq: the virtqueue + * Returns 0 on success or error status + * Caller should guarantee that the vring is not accessed by any functions + * of virtqueue. + * @enable_reset_vq: enable a reset queue + * vq: the virtqueue + * Returns 0 on success or error status + * If reset_vq is set, then enable_reset_vq must also be set. */ typedef void vq_callback_t(struct virtqueue *); struct virtio_config_ops { @@ -100,6 +109,8 @@ struct virtio_config_ops { int index); bool (*get_shm_region)(struct virtio_device *vdev, struct virtio_shm_region *region, u8 id); + int (*reset_vq)(struct virtqueue *vq); + int (*enable_reset_vq)(struct virtqueue *vq); }; /* If driver didn't advertise the feature, it will never appear. */ -- 2.31.0
Add helper for virtio queue reset. * virtio_reset_vq(): reset a queue individually * virtio_enable_resetq(): enable a reset queue Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- include/linux/virtio_config.h | 40 +++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h index d51906b1389f..0b81fbe17c85 100644 --- a/include/linux/virtio_config.h +++ b/include/linux/virtio_config.h @@ -230,6 +230,46 @@ int virtio_find_vqs_ctx(struct virtio_device *vdev, unsigned nvqs, desc); } +/** + * virtio_reset_vq - reset a queue individually + * @vq: the virtqueue + * + * returns 0 on success or error status + * + * The api process of reset under normal circumstances: + * 1. virtio_reset_vq() - notify the device to reset the queue + * 2. virtqueue_detach_unused_buf() - recycle the buffer submitted + * 3. virtqueue_reset_vring() - reset the vring (may re-alloc) + * 4. virtio_enable_resetq() - mmap vring to device, and enable the queue + * + * Caller should guarantee that the vring is not accessed by any functions + * of virtqueue. + */ +static inline +int virtio_reset_vq(struct virtqueue *vq) +{ + if (!vq->vdev->config->reset_vq) + return -ENOENT; + + return vq->vdev->config->reset_vq(vq); +} + +/** + * virtio_enable_resetq - enable a reset queue + * @vq: the virtqueue + * + * returns 0 on success or error status + * + */ +static inline +int virtio_enable_resetq(struct virtqueue *vq) +{ + if (!vq->vdev->config->enable_reset_vq) + return -ENOENT; + + return vq->vdev->config->enable_reset_vq(vq); +} + /** * virtio_device_ready - enable vq use in probe function * @vdev: the device -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 15/26] virtio_pci: queue_reset: update struct virtio_pci_common_cfg and option functions
Add queue_reset in virtio_pci_common_cfg, and add related operation functions. For not breaks uABI, add a new struct virtio_pci_common_cfg_reset. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_pci_modern_dev.c | 36 ++++++++++++++++++++++++++ include/linux/virtio_pci_modern.h | 2 ++ include/uapi/linux/virtio_pci.h | 7 +++++ 3 files changed, 45 insertions(+) diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c index e8b3ff2b9fbc..8c74b00bc511 100644 --- a/drivers/virtio/virtio_pci_modern_dev.c +++ b/drivers/virtio/virtio_pci_modern_dev.c @@ -3,6 +3,7 @@ #include <linux/virtio_pci_modern.h> #include <linux/module.h> #include <linux/pci.h> +#include <linux/delay.h> /* * vp_modern_map_capability - map a part of virtio pci capability @@ -463,6 +464,41 @@ void vp_modern_set_status(struct virtio_pci_modern_device *mdev, } EXPORT_SYMBOL_GPL(vp_modern_set_status); +/* + * vp_modern_get_queue_reset - get the queue reset status + * @mdev: the modern virtio-pci device + * @index: queue index + */ +int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index) +{ + struct virtio_pci_common_cfg_reset __iomem *cfg; + + cfg = (struct virtio_pci_common_cfg_reset __iomem *)mdev->common; + + vp_iowrite16(index, &cfg->cfg.queue_select); + return vp_ioread16(&cfg->queue_reset); +} +EXPORT_SYMBOL_GPL(vp_modern_get_queue_reset); + +/* + * vp_modern_set_queue_reset - reset the queue + * @mdev: the modern virtio-pci device + * @index: queue index + */ +void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index) +{ + struct virtio_pci_common_cfg_reset __iomem *cfg; + + cfg = (struct virtio_pci_common_cfg_reset __iomem *)mdev->common; + + vp_iowrite16(index, &cfg->cfg.queue_select); + vp_iowrite16(1, &cfg->queue_reset); + + while (vp_ioread16(&cfg->queue_reset) != 1) + msleep(1); +} +EXPORT_SYMBOL_GPL(vp_modern_set_queue_reset); + /* * vp_modern_queue_vector - set the MSIX vector for a specific virtqueue * @mdev: the modern virtio-pci device diff --git a/include/linux/virtio_pci_modern.h b/include/linux/virtio_pci_modern.h index eb2bd9b4077d..cc4154dd7b28 100644 --- a/include/linux/virtio_pci_modern.h +++ b/include/linux/virtio_pci_modern.h @@ -106,4 +106,6 @@ void __iomem * vp_modern_map_vq_notify(struct virtio_pci_modern_device *mdev, u16 index, resource_size_t *pa); int vp_modern_probe(struct virtio_pci_modern_device *mdev); void vp_modern_remove(struct virtio_pci_modern_device *mdev); +int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index); +void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index); #endif diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h index 22bec9bd0dfc..d9462efd6ce8 100644 --- a/include/uapi/linux/virtio_pci.h +++ b/include/uapi/linux/virtio_pci.h @@ -173,6 +173,13 @@ struct virtio_pci_common_cfg_notify { __le16 padding; }; +struct virtio_pci_common_cfg_reset { + struct virtio_pci_common_cfg cfg; + + __le16 queue_notify_data; /* read-write */ + __le16 queue_reset; /* read-write */ +}; + /* Fields in VIRTIO_PCI_CAP_PCI_CFG: */ struct virtio_pci_cfg_cap { struct virtio_pci_cap cap; -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 16/26] virtio_pci: queue_reset: extract the logic of active vq for modern pci
Introduce vp_active_vq() to configure vring to backend after vq attach vring. And configure vq vector if necessary. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_pci_modern.c | 46 ++++++++++++++++++------------ 1 file changed, 28 insertions(+), 18 deletions(-) diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c index 86d301f272b8..49a4493732cf 100644 --- a/drivers/virtio/virtio_pci_modern.c +++ b/drivers/virtio/virtio_pci_modern.c @@ -176,6 +176,29 @@ static void vp_reset(struct virtio_device *vdev) vp_disable_cbs(vdev); } +static int vp_active_vq(struct virtqueue *vq, u16 msix_vec) +{ + struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev); + struct virtio_pci_modern_device *mdev = &vp_dev->mdev; + unsigned long index; + + index = vq->index; + + /* activate the queue */ + vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq)); + vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq), + virtqueue_get_avail_addr(vq), + virtqueue_get_used_addr(vq)); + + if (msix_vec != VIRTIO_MSI_NO_VECTOR) { + msix_vec = vp_modern_queue_vector(mdev, index, msix_vec); + if (msix_vec == VIRTIO_MSI_NO_VECTOR) + return -EBUSY; + } + + return 0; +} + static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector) { return vp_modern_config_vector(&vp_dev->mdev, vector); @@ -220,32 +243,19 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev, vq->num_max = num; - /* activate the queue */ - vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq)); - vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq), - virtqueue_get_avail_addr(vq), - virtqueue_get_used_addr(vq)); + err = vp_active_vq(vq, msix_vec); + if (err) + goto err; vq->priv = (void __force *)vp_modern_map_vq_notify(mdev, index, NULL); if (!vq->priv) { err = -ENOMEM; - goto err_map_notify; - } - - if (msix_vec != VIRTIO_MSI_NO_VECTOR) { - msix_vec = vp_modern_queue_vector(mdev, index, msix_vec); - if (msix_vec == VIRTIO_MSI_NO_VECTOR) { - err = -EBUSY; - goto err_assign_vector; - } + goto err; } return vq; -err_assign_vector: - if (!mdev->notify_base) - pci_iounmap(mdev->pci_dev, (void __iomem __force *)vq->priv); -err_map_notify: +err: vring_del_virtqueue(vq); return ERR_PTR(err); } -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 17/26] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
This patch implements virtio pci support for QUEUE RESET. Performing reset on a queue is divided into these steps: 1. virtio_reset_vq() - notify the device to reset the queue 2. virtqueue_detach_unused_buf() - recycle the buffer submitted 3. virtqueue_reset_vring() - reset the vring (may re-alloc) 4. virtio_enable_resetq() - mmap vring to device, and enable the queue This patch implements virtio_reset_vq(), virtio_enable_resetq() in the pci scenario. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_pci_common.c | 8 +-- drivers/virtio/virtio_pci_modern.c | 83 ++++++++++++++++++++++++++++++ 2 files changed, 88 insertions(+), 3 deletions(-) diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c index fdbde1db5ec5..863d3a8a0956 100644 --- a/drivers/virtio/virtio_pci_common.c +++ b/drivers/virtio/virtio_pci_common.c @@ -248,9 +248,11 @@ static void vp_del_vq(struct virtqueue *vq) struct virtio_pci_vq_info *info = vp_dev->vqs[vq->index]; unsigned long flags; - spin_lock_irqsave(&vp_dev->lock, flags); - list_del(&info->node); - spin_unlock_irqrestore(&vp_dev->lock, flags); + if (!vq->reset) { + spin_lock_irqsave(&vp_dev->lock, flags); + list_del(&info->node); + spin_unlock_irqrestore(&vp_dev->lock, flags); + } vp_dev->del_vq(info); kfree(info); diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c index 49a4493732cf..3c67d3607802 100644 --- a/drivers/virtio/virtio_pci_modern.c +++ b/drivers/virtio/virtio_pci_modern.c @@ -34,6 +34,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features) if ((features & BIT_ULL(VIRTIO_F_SR_IOV)) && pci_find_ext_capability(pci_dev, PCI_EXT_CAP_ID_SRIOV)) __virtio_set_bit(vdev, VIRTIO_F_SR_IOV); + + if (features & BIT_ULL(VIRTIO_F_RING_RESET)) + __virtio_set_bit(vdev, VIRTIO_F_RING_RESET); } /* virtio config->finalize_features() implementation */ @@ -199,6 +202,82 @@ static int vp_active_vq(struct virtqueue *vq, u16 msix_vec) return 0; } +static int vp_modern_reset_vq(struct virtqueue *vq) +{ + struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev); + struct virtio_pci_modern_device *mdev = &vp_dev->mdev; + struct virtio_pci_vq_info *info; + unsigned long flags; + unsigned int irq; + + if (!virtio_has_feature(vq->vdev, VIRTIO_F_RING_RESET)) + return -ENOENT; + + vp_modern_set_queue_reset(mdev, vq->index); + + info = vp_dev->vqs[vq->index]; + + /* delete vq from irq handler */ + spin_lock_irqsave(&vp_dev->lock, flags); + list_del(&info->node); + spin_unlock_irqrestore(&vp_dev->lock, flags); + + INIT_LIST_HEAD(&info->node); + + vq->reset = VIRTIO_VQ_RESET_STEP_DEVICE; + + /* sync irq callback. 
*/ + if (vp_dev->intx_enabled) { + irq = vp_dev->pci_dev->irq; + + } else { + if (info->msix_vector == VIRTIO_MSI_NO_VECTOR) + return 0; + + irq = pci_irq_vector(vp_dev->pci_dev, info->msix_vector); + } + + synchronize_irq(irq); + + return 0; +} + +static int vp_modern_enable_reset_vq(struct virtqueue *vq) +{ + struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev); + struct virtio_pci_modern_device *mdev = &vp_dev->mdev; + struct virtio_pci_vq_info *info; + unsigned long flags, index; + int err; + + if (vq->reset != VIRTIO_VQ_RESET_STEP_VRING_ATTACH) + return -EBUSY; + + index = vq->index; + info = vp_dev->vqs[index]; + + /* check queue reset status */ + if (vp_modern_get_queue_reset(mdev, index) != 1) + return -EBUSY; + + err = vp_active_vq(vq, info->msix_vector); + if (err) + return err; + + if (vq->callback) { + spin_lock_irqsave(&vp_dev->lock, flags); + list_add(&info->node, &vp_dev->virtqueues); + spin_unlock_irqrestore(&vp_dev->lock, flags); + } else { + INIT_LIST_HEAD(&info->node); + } + + vp_modern_set_queue_enable(&vp_dev->mdev, index, true); + vq->reset = VIRTIO_VQ_RESET_STEP_NONE; + + return 0; +} + static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector) { return vp_modern_config_vector(&vp_dev->mdev, vector); @@ -407,6 +486,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = { .set_vq_affinity = vp_set_vq_affinity, .get_vq_affinity = vp_get_vq_affinity, .get_shm_region = vp_get_shm_region, + .reset_vq = vp_modern_reset_vq, + .enable_reset_vq = vp_modern_enable_reset_vq, }; static const struct virtio_config_ops virtio_pci_config_ops = { @@ -425,6 +506,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = { .set_vq_affinity = vp_set_vq_affinity, .get_vq_affinity = vp_get_vq_affinity, .get_shm_region = vp_get_shm_region, + .reset_vq = vp_modern_reset_vq, + .enable_reset_vq = vp_modern_enable_reset_vq, }; /* the PCI probing function */ -- 2.31.0
find_vqs() adds a new parameter sizes to specify the size of each vq vring. 0 means use the maximum size supported by the backend. In the split scenario, the meaning of size is the largest size, because it may be limited by memory, the virtio core will try a smaller size. And the size is power of 2. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- arch/um/drivers/virtio_uml.c | 2 +- drivers/platform/mellanox/mlxbf-tmfifo.c | 3 ++- drivers/remoteproc/remoteproc_virtio.c | 2 +- drivers/s390/virtio/virtio_ccw.c | 2 +- drivers/virtio/virtio_mmio.c | 2 +- drivers/virtio/virtio_pci_common.c | 2 +- drivers/virtio/virtio_pci_common.h | 2 +- drivers/virtio/virtio_pci_modern.c | 5 +++-- drivers/virtio/virtio_vdpa.c | 2 +- include/linux/virtio_config.h | 11 +++++++---- 10 files changed, 19 insertions(+), 14 deletions(-) diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c index ba562d68dc04..055b91ccbe8a 100644 --- a/arch/um/drivers/virtio_uml.c +++ b/arch/um/drivers/virtio_uml.c @@ -998,7 +998,7 @@ static struct virtqueue *vu_setup_vq(struct virtio_device *vdev, static int vu_find_vqs(struct virtio_device *vdev, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, u32 sizes[]) { struct virtio_uml_device *vu_dev = to_virtio_uml_device(vdev); int i, queue_idx = 0, rc; diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c index 38800e86ed8a..aea7aa218b22 100644 --- a/drivers/platform/mellanox/mlxbf-tmfifo.c +++ b/drivers/platform/mellanox/mlxbf-tmfifo.c @@ -929,7 +929,8 @@ static int mlxbf_tmfifo_virtio_find_vqs(struct virtio_device *vdev, vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, + u32 sizes[]) { struct mlxbf_tmfifo_vdev *tm_vdev = mlxbf_vdev_to_tmfifo(vdev); struct mlxbf_tmfifo_vring *vring; diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c index 70ab496d0431..3a167bec5b09 100644 --- a/drivers/remoteproc/remoteproc_virtio.c +++ b/drivers/remoteproc/remoteproc_virtio.c @@ -157,7 +157,7 @@ static int rproc_virtio_find_vqs(struct virtio_device *vdev, unsigned int nvqs, vq_callback_t *callbacks[], const char * const names[], const bool * ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, u32 sizes[]) { int i, ret, queue_idx = 0; diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c index d35e7a3f7067..b74e08c71534 100644 --- a/drivers/s390/virtio/virtio_ccw.c +++ b/drivers/s390/virtio/virtio_ccw.c @@ -632,7 +632,7 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs, vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, u32 sizes[]) { struct virtio_ccw_device *vcdev = to_vc_device(vdev); unsigned long *indicatorp = NULL; diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index a41abc8051b9..55d575f6ef2d 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -462,7 +462,7 @@ static int vm_find_vqs(struct virtio_device *vdev, unsigned nvqs, vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, u32 sizes[]) { struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); int irq = platform_get_irq(vm_dev->pdev, 0); diff --git 
a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c index 863d3a8a0956..8e8fa7e5ad80 100644 --- a/drivers/virtio/virtio_pci_common.c +++ b/drivers/virtio/virtio_pci_common.c @@ -428,7 +428,7 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, unsigned nvqs, int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, u32 sizes[]) { int err; diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h index 23f6c5c678d5..9dbf1d555dff 100644 --- a/drivers/virtio/virtio_pci_common.h +++ b/drivers/virtio/virtio_pci_common.h @@ -114,7 +114,7 @@ void vp_del_vqs(struct virtio_device *vdev); int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc); + struct irq_affinity *desc, u32 sizes[]); const char *vp_bus_name(struct virtio_device *vdev); /* Setup the affinity for a virtqueue: diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c index 3c67d3607802..342795175c29 100644 --- a/drivers/virtio/virtio_pci_modern.c +++ b/drivers/virtio/virtio_pci_modern.c @@ -343,11 +343,12 @@ static int vp_modern_find_vqs(struct virtio_device *vdev, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, u32 sizes[]) { struct virtio_pci_device *vp_dev = to_vp_device(vdev); struct virtqueue *vq; - int rc = vp_find_vqs(vdev, nvqs, vqs, callbacks, names, ctx, desc); + int rc = vp_find_vqs(vdev, nvqs, vqs, callbacks, names, ctx, desc, + sizes); if (rc) return rc; diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c index 7767a7f0119b..ee08d01ee8b1 100644 --- a/drivers/virtio/virtio_vdpa.c +++ b/drivers/virtio/virtio_vdpa.c @@ -268,7 +268,7 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned nvqs, vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc) + struct irq_affinity *desc, u32 sizes[]) { struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); struct vdpa_device *vdpa = vd_get_vdpa(vdev); diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h index 0b81fbe17c85..5157524d8036 100644 --- a/include/linux/virtio_config.h +++ b/include/linux/virtio_config.h @@ -57,6 +57,7 @@ struct virtio_shm_region { * include a NULL entry for vqs that do not need a callback * names: array of virtqueue names (mainly for debugging) * include a NULL entry for vqs unused by driver + * sizes: array of virtqueue sizes * Returns 0 on success or error status * @del_vqs: free virtqueues found by find_vqs(). * @get_features: get the array of feature bits for this device. 
@@ -98,7 +99,8 @@ struct virtio_config_ops { int (*find_vqs)(struct virtio_device *, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], const char * const names[], const bool *ctx, - struct irq_affinity *desc); + struct irq_affinity *desc, + u32 sizes[]); void (*del_vqs)(struct virtio_device *); u64 (*get_features)(struct virtio_device *vdev); int (*finalize_features)(struct virtio_device *vdev); @@ -205,7 +207,7 @@ struct virtqueue *virtio_find_single_vq(struct virtio_device *vdev, const char *names[] = { n }; struct virtqueue *vq; int err = vdev->config->find_vqs(vdev, 1, &vq, callbacks, names, NULL, - NULL); + NULL, NULL); if (err < 0) return ERR_PTR(err); return vq; @@ -217,7 +219,8 @@ int virtio_find_vqs(struct virtio_device *vdev, unsigned nvqs, const char * const names[], struct irq_affinity *desc) { - return vdev->config->find_vqs(vdev, nvqs, vqs, callbacks, names, NULL, desc); + return vdev->config->find_vqs(vdev, nvqs, vqs, callbacks, names, NULL, + desc, NULL); } static inline @@ -227,7 +230,7 @@ int virtio_find_vqs_ctx(struct virtio_device *vdev, unsigned nvqs, struct irq_affinity *desc) { return vdev->config->find_vqs(vdev, nvqs, vqs, callbacks, names, ctx, - desc); + desc, NULL); } /** -- 2.31.0
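For reference, a minimal transport-side sketch of the convention described above (not taken from the patch; "backend_max" and "i" are placeholders for the device-reported queue size and the vq index):

	/* A NULL sizes[] array or a 0 entry means "use the backend maximum". */
	u32 size = sizes ? sizes[i] : 0;

	if (!size || size > backend_max)
		size = backend_max;

	/* "size" then replaces backend_max when creating the vring, e.g.
	 * vring_create_virtqueue(index, size, ...).
	 */

The transport patches later in the series (#19, #20) follow this pattern.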
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 19/26] virtio_pci: support the arg sizes of find_vqs()
Virtio PCI supports new parameter sizes of find_vqs(). Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_pci_common.c | 18 ++++++++++-------- drivers/virtio/virtio_pci_common.h | 1 + drivers/virtio/virtio_pci_legacy.c | 6 +++++- drivers/virtio/virtio_pci_modern.c | 10 +++++++--- 4 files changed, 23 insertions(+), 12 deletions(-) diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c index 8e8fa7e5ad80..1faf65325060 100644 --- a/drivers/virtio/virtio_pci_common.c +++ b/drivers/virtio/virtio_pci_common.c @@ -208,6 +208,7 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors, static struct virtqueue *vp_setup_vq(struct virtio_device *vdev, unsigned index, void (*callback)(struct virtqueue *vq), const char *name, + u32 size, bool ctx, u16 msix_vec) { @@ -221,7 +222,7 @@ static struct virtqueue *vp_setup_vq(struct virtio_device *vdev, unsigned index, return ERR_PTR(-ENOMEM); vq = vp_dev->setup_vq(vp_dev, info, index, callback, name, ctx, - msix_vec); + size, msix_vec); if (IS_ERR(vq)) goto out_info; @@ -314,7 +315,7 @@ void vp_del_vqs(struct virtio_device *vdev) static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], - const char * const names[], bool per_vq_vectors, + const char * const names[], u32 sizes[], bool per_vq_vectors, const bool *ctx, struct irq_affinity *desc) { @@ -357,8 +358,8 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs, else msix_vec = VP_MSIX_VQ_VECTOR; vqs[i] = vp_setup_vq(vdev, queue_idx++, callbacks[i], names[i], - ctx ? ctx[i] : false, - msix_vec); + sizes ? sizes[i] : 0, + ctx ? ctx[i] : false, msix_vec); if (IS_ERR(vqs[i])) { err = PTR_ERR(vqs[i]); goto error_find; @@ -388,7 +389,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs, static int vp_find_vqs_intx(struct virtio_device *vdev, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], - const char * const names[], const bool *ctx) + const char * const names[], u32 sizes[], const bool *ctx) { struct virtio_pci_device *vp_dev = to_vp_device(vdev); int i, err, queue_idx = 0; @@ -410,6 +411,7 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, unsigned nvqs, continue; } vqs[i] = vp_setup_vq(vdev, queue_idx++, callbacks[i], names[i], + sizes ? sizes[i] : 0, ctx ? ctx[i] : false, VIRTIO_MSI_NO_VECTOR); if (IS_ERR(vqs[i])) { @@ -433,15 +435,15 @@ int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs, int err; /* Try MSI-X with one vector per queue. */ - err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names, true, ctx, desc); + err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names, sizes, true, ctx, desc); if (!err) return 0; /* Fallback: MSI-X with one vector for config, one shared for queues. */ - err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names, false, ctx, desc); + err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names, sizes, false, ctx, desc); if (!err) return 0; /* Finally fall back to regular interrupts. 
*/ - return vp_find_vqs_intx(vdev, nvqs, vqs, callbacks, names, ctx); + return vp_find_vqs_intx(vdev, nvqs, vqs, callbacks, names, sizes, ctx); } const char *vp_bus_name(struct virtio_device *vdev) diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h index 9dbf1d555dff..a15ac5570ddd 100644 --- a/drivers/virtio/virtio_pci_common.h +++ b/drivers/virtio/virtio_pci_common.h @@ -82,6 +82,7 @@ struct virtio_pci_device { void (*callback)(struct virtqueue *vq), const char *name, bool ctx, + u32 size, u16 msix_vec); void (*del_vq)(struct virtio_pci_vq_info *info); diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c index b68934fe6b5d..efa98d2debe0 100644 --- a/drivers/virtio/virtio_pci_legacy.c +++ b/drivers/virtio/virtio_pci_legacy.c @@ -113,6 +113,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev, void (*callback)(struct virtqueue *vq), const char *name, bool ctx, + u32 size, u16 msix_vec) { struct virtqueue *vq; @@ -125,10 +126,13 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev, if (!num || vp_legacy_get_queue_enable(&vp_dev->ldev, index)) return ERR_PTR(-ENOENT); + if (!size || size > num) + size = num; + info->msix_vector = msix_vec; /* create the vring */ - vq = vring_create_virtqueue(index, num, + vq = vring_create_virtqueue(index, size, VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev, true, false, ctx, vp_notify, callback, name); diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c index 342795175c29..0e17e0df6a8a 100644 --- a/drivers/virtio/virtio_pci_modern.c +++ b/drivers/virtio/virtio_pci_modern.c @@ -289,6 +289,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev, void (*callback)(struct virtqueue *vq), const char *name, bool ctx, + u32 size, u16 msix_vec) { @@ -305,15 +306,18 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev, if (!num || vp_modern_get_queue_enable(mdev, index)) return ERR_PTR(-ENOENT); - if (num & (num - 1)) { - dev_warn(&vp_dev->pci_dev->dev, "bad queue size %u", num); + if (!size || size > num) + size = num; + + if (size & (size - 1)) { + dev_warn(&vp_dev->pci_dev->dev, "bad queue size %u", size); return ERR_PTR(-EINVAL); } info->msix_vector = msix_vec; /* create the vring */ - vq = vring_create_virtqueue(index, num, + vq = vring_create_virtqueue(index, size, SMP_CACHE_BYTES, &vp_dev->vdev, true, true, ctx, vp_notify, callback, name); -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 20/26] virtio_mmio: support the arg sizes of find_vqs()
Virtio MMIO supports the new parameter sizes of find_vqs(). Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/virtio/virtio_mmio.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index 55d575f6ef2d..4d7cd8881282 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -347,7 +347,7 @@ static void vm_del_vqs(struct virtio_device *vdev) static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index, void (*callback)(struct virtqueue *vq), - const char *name, bool ctx) + const char *name, u32 size, bool ctx) { struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); struct virtio_mmio_vq_info *info; @@ -382,8 +382,11 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index, goto error_new_virtqueue; } + if (!size || size > num) + size = num; + /* Create the vring */ - vq = vring_create_virtqueue(index, num, VIRTIO_MMIO_VRING_ALIGN, vdev, + vq = vring_create_virtqueue(index, size, VIRTIO_MMIO_VRING_ALIGN, vdev, true, true, ctx, vm_notify, callback, name); if (!vq) { err = -ENOMEM; @@ -483,6 +486,7 @@ static int vm_find_vqs(struct virtio_device *vdev, unsigned nvqs, } vqs[i] = vm_setup_vq(vdev, queue_idx++, callbacks[i], names[i], + sizes ? sizes[i] : 0, ctx ? ctx[i] : false); if (IS_ERR(vqs[i])) { vm_del_vqs(vdev); -- 2.31.0
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 21/26] virtio: add helper virtio_find_vqs_ctx_size()
Introduce helper virtio_find_vqs_ctx_size() to call find_vqs and specify the maximum size of each vq ring. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- include/linux/virtio_config.h | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h index 5157524d8036..921d8610db0c 100644 --- a/include/linux/virtio_config.h +++ b/include/linux/virtio_config.h @@ -233,6 +233,18 @@ int virtio_find_vqs_ctx(struct virtio_device *vdev, unsigned nvqs, desc, NULL); } +static inline +int virtio_find_vqs_ctx_size(struct virtio_device *vdev, u32 nvqs, + struct virtqueue *vqs[], + vq_callback_t *callbacks[], + const char * const names[], + const bool *ctx, struct irq_affinity *desc, + u32 sizes[]) +{ + return vdev->config->find_vqs(vdev, nvqs, vqs, callbacks, names, ctx, + desc, sizes); +} + /** * virtio_reset_vq - reset a queue individually * @vq: the virtqueue -- 2.31.0
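A hedged usage sketch of the new helper; the callbacks, names and the two-queue layout below are illustrative only and not part of the patch:

	vq_callback_t *callbacks[] = { my_rx_done, my_tx_done };	/* hypothetical */
	const char * const names[] = { "input", "output" };
	u32 sizes[] = { 256, 0 };	/* 256-entry rx ring, backend max for tx */
	struct virtqueue *vqs[2];
	int err;

	err = virtio_find_vqs_ctx_size(vdev, 2, vqs, callbacks, names,
				       NULL /* ctx */, NULL /* desc */, sizes);
	if (err)
		return err;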
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 22/26] virtio_net: get ringparam by virtqueue_get_vring_max_size()
Use virtqueue_get_vring_max_size() in virtnet_get_ringparam() to set tx,rx_max_pending. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index a801ea40908f..59b1ea82f5f0 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -2177,10 +2177,10 @@ static void virtnet_get_ringparam(struct net_device *dev, { struct virtnet_info *vi = netdev_priv(dev); - ring->rx_max_pending = virtqueue_get_vring_size(vi->rq[0].vq); - ring->tx_max_pending = virtqueue_get_vring_size(vi->sq[0].vq); - ring->rx_pending = ring->rx_max_pending; - ring->tx_pending = ring->tx_max_pending; + ring->rx_max_pending = virtqueue_get_vring_max_size(vi->rq[0].vq); + ring->tx_max_pending = virtqueue_get_vring_max_size(vi->sq[0].vq); + ring->rx_pending = virtqueue_get_vring_size(vi->rq[0].vq); + ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq); } -- 2.31.0
This patch separates two functions for freeing sq buf and rq buf from free_unused_bufs(). When supporting the enable/disable tx/rq queue in the future, it is necessary to support separate recovery of a sq buf or a rq buf. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 53 +++++++++++++++++++++++----------------- 1 file changed, 31 insertions(+), 22 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 59b1ea82f5f0..409a8e180918 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -2804,36 +2804,45 @@ static void free_receive_page_frags(struct virtnet_info *vi) put_page(vi->rq[i].alloc_frag.page); } -static void free_unused_bufs(struct virtnet_info *vi) +static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi, + struct send_queue *sq) { void *buf; - int i; - for (i = 0; i < vi->max_queue_pairs; i++) { - struct virtqueue *vq = vi->sq[i].vq; - while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { - if (!is_xdp_frame(buf)) - dev_kfree_skb(buf); - else - xdp_return_frame(ptr_to_xdp(buf)); - } + while ((buf = virtqueue_detach_unused_buf(sq->vq)) != NULL) { + if (!is_xdp_frame(buf)) + dev_kfree_skb(buf); + else + xdp_return_frame(ptr_to_xdp(buf)); } +} - for (i = 0; i < vi->max_queue_pairs; i++) { - struct virtqueue *vq = vi->rq[i].vq; - - while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { - if (vi->mergeable_rx_bufs) { - put_page(virt_to_head_page(buf)); - } else if (vi->big_packets) { - give_pages(&vi->rq[i], buf); - } else { - put_page(virt_to_head_page(buf)); - } - } +static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi, + struct receive_queue *rq) +{ + void *buf; + + while ((buf = virtqueue_detach_unused_buf(rq->vq)) != NULL) { + if (vi->mergeable_rx_bufs) + put_page(virt_to_head_page(buf)); + else if (vi->big_packets) + give_pages(rq, buf); + else + put_page(virt_to_head_page(buf)); } } +static void free_unused_bufs(struct virtnet_info *vi) +{ + int i; + + for (i = 0; i < vi->max_queue_pairs; i++) + virtnet_sq_free_unused_bufs(vi, vi->sq + i); + + for (i = 0; i < vi->max_queue_pairs; i++) + virtnet_rq_free_unused_bufs(vi, vi->rq + i); +} + static void virtnet_del_vqs(struct virtnet_info *vi) { struct virtio_device *vdev = vi->vdev; -- 2.31.0
This patch implements the reset function of the rx, tx queues. Based on this function, it is possible to modify the ring num of the queue. And quickly recycle the buffer in the queue. In the process of the queue disable, in theory, as long as virtio supports queue reset, there will be no exceptions. However, in the process of the queue enable, there may be exceptions due to memory allocation. In this case, vq is not available, but we still have to execute napi_enable(). Because napi_disable is similar to a lock, napi_enable must be called after calling napi_disable. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 107 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 107 insertions(+) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 409a8e180918..ffff323dcef0 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -251,6 +251,11 @@ struct padded_vnet_hdr { char padding[4]; }; +static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi, + struct send_queue *sq); +static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi, + struct receive_queue *rq); + static bool is_xdp_frame(void *ptr) { return (unsigned long)ptr & VIRTIO_XDP_FLAG; @@ -1369,6 +1374,9 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi) { napi_enable(napi); + if (vq->reset) + return; + /* If all buffers were filled by other side before we napi_enabled, we * won't get another interrupt, so process any outstanding packets now. * Call local_bh_enable after to trigger softIRQ processing. @@ -1413,6 +1421,10 @@ static void refill_work(struct work_struct *work) struct receive_queue *rq = &vi->rq[i]; napi_disable(&rq->napi); + if (rq->vq->reset) { + virtnet_napi_enable(rq->vq, &rq->napi); + continue; + } still_empty = !try_fill_recv(vi, rq, GFP_KERNEL); virtnet_napi_enable(rq->vq, &rq->napi); @@ -1523,6 +1535,9 @@ static void virtnet_poll_cleantx(struct receive_queue *rq) if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index)) return; + if (sq->vq->reset) + return; + if (__netif_tx_trylock(txq)) { do { virtqueue_disable_cb(sq->vq); @@ -1769,6 +1784,98 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; } +static int virtnet_rx_vq_reset(struct virtnet_info *vi, + struct receive_queue *rq, u32 ring_num) +{ + int err; + + /* stop napi */ + napi_disable(&rq->napi); + + /* reset the queue */ + err = virtio_reset_vq(rq->vq); + if (err) + goto err; + + /* free bufs */ + virtnet_rq_free_unused_bufs(vi, rq); + + /* reset vring. 
*/ + err = virtqueue_reset_vring(rq->vq, ring_num); + if (err) + goto err; + + /* enable reset queue */ + err = virtio_enable_resetq(rq->vq); + if (err) + goto err; + + /* fill recv */ + if (!try_fill_recv(vi, rq, GFP_KERNEL)) + schedule_delayed_work(&vi->refill, 0); + + /* enable napi */ + virtnet_napi_enable(rq->vq, &rq->napi); + return 0; + +err: + netdev_err(vi->dev, + "reset rx reset vq fail: rx queue index: %ld err: %d\n", + rq - vi->rq, err); + virtnet_napi_enable(rq->vq, &rq->napi); + return err; +} + +static int virtnet_tx_vq_reset(struct virtnet_info *vi, + struct send_queue *sq, u32 ring_num) +{ + struct netdev_queue *txq; + int err, qindex; + + qindex = sq - vi->sq; + + txq = netdev_get_tx_queue(vi->dev, qindex); + __netif_tx_lock_bh(txq); + + /* stop tx queue and napi */ + netif_stop_subqueue(vi->dev, qindex); + virtnet_napi_tx_disable(&sq->napi); + + __netif_tx_unlock_bh(txq); + + /* reset the queue */ + err = virtio_reset_vq(sq->vq); + if (err) { + netif_start_subqueue(vi->dev, qindex); + goto err; + } + + /* free bufs */ + virtnet_sq_free_unused_bufs(vi, sq); + + /* reset vring. */ + err = virtqueue_reset_vring(sq->vq, ring_num); + if (err) + goto err; + + /* enable reset queue */ + err = virtio_enable_resetq(sq->vq); + if (err) + goto err; + + /* start tx queue and napi */ + netif_start_subqueue(vi->dev, qindex); + virtnet_napi_tx_enable(vi, sq->vq, &sq->napi); + return 0; + +err: + netdev_err(vi->dev, + "reset tx reset vq fail: tx queue index: %ld err: %d\n", + sq - vi->sq, err); + virtnet_napi_tx_enable(vi, sq->vq, &sq->napi); + return err; +} + /* * Send command via the control virtqueue and check status. Commands * supported by the hypervisor, as indicated by feature bits, should -- 2.31.0
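A minimal sketch of the napi pairing rule described in the changelog above; do_queue_reset() is a hypothetical stand-in for the reset/re-enable sequence, which may fail:

	napi_disable(&rq->napi);	/* behaves like taking a lock */

	err = do_queue_reset(rq);	/* hypothetical helper; may fail on re-enable */

	/* Must always be paired with the napi_disable() above, even on error,
	 * which is why the error paths in virtnet_rx_vq_reset() and
	 * virtnet_tx_vq_reset() also re-enable napi before returning.
	 */
	virtnet_napi_enable(rq->vq, &rq->napi);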
Xuan Zhuo
2022-Mar-08 12:35 UTC
[PATCH v7 25/26] virtio_net: set the default max ring size by find_vqs()
Use virtio_find_vqs_ctx_size() to specify the maximum ring size of tx, rx at the same time. | rx/tx ring size ------------------------------------------- speed == UNKNOWN or < 10G| 1024 speed < 40G | 4096 speed >= 40G | 8192 Call virtnet_update_settings() once before calling init_vqs() to update speed. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 42 ++++++++++++++++++++++++++++++++++++---- 1 file changed, 38 insertions(+), 4 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index ffff323dcef0..f1bdc6ce21c3 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -2977,6 +2977,29 @@ static unsigned int mergeable_min_buf_len(struct virtnet_info *vi, struct virtqu (unsigned int)GOOD_PACKET_LEN); } +static void virtnet_config_sizes(struct virtnet_info *vi, u32 *sizes) +{ + u32 i, rx_size, tx_size; + + if (vi->speed == SPEED_UNKNOWN || vi->speed < SPEED_10000) { + rx_size = 1024; + tx_size = 1024; + + } else if (vi->speed < SPEED_40000) { + rx_size = 1024 * 4; + tx_size = 1024 * 4; + + } else { + rx_size = 1024 * 8; + tx_size = 1024 * 8; + } + + for (i = 0; i < vi->max_queue_pairs; i++) { + sizes[rxq2vq(i)] = rx_size; + sizes[txq2vq(i)] = tx_size; + } +} + static int virtnet_find_vqs(struct virtnet_info *vi) { vq_callback_t **callbacks; @@ -2984,6 +3007,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi) int ret = -ENOMEM; int i, total_vqs; const char **names; + u32 *sizes; bool *ctx; /* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by @@ -3011,10 +3035,15 @@ static int virtnet_find_vqs(struct virtnet_info *vi) ctx = NULL; } + sizes = kmalloc_array(total_vqs, sizeof(*sizes), GFP_KERNEL); + if (!sizes) + goto err_sizes; + /* Parameters for control virtqueue, if any */ if (vi->has_cvq) { callbacks[total_vqs - 1] = NULL; names[total_vqs - 1] = "control"; + sizes[total_vqs - 1] = 0; } /* Allocate/initialize parameters for send/receive virtqueues */ @@ -3029,8 +3058,10 @@ static int virtnet_find_vqs(struct virtnet_info *vi) ctx[rxq2vq(i)] = true; } - ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks, - names, ctx, NULL); + virtnet_config_sizes(vi, sizes); + + ret = virtio_find_vqs_ctx_size(vi->vdev, total_vqs, vqs, callbacks, + names, ctx, NULL, sizes); if (ret) goto err_find; @@ -3050,6 +3081,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi) err_find: + kfree(sizes); +err_sizes: kfree(ctx); err_ctx: kfree(names); @@ -3368,6 +3401,9 @@ static int virtnet_probe(struct virtio_device *vdev) vi->curr_queue_pairs = num_online_cpus(); vi->max_queue_pairs = max_queue_pairs; + virtnet_init_settings(dev); + virtnet_update_settings(vi); + /* Allocate/initialize the rx/tx queues, and invoke find_vqs */ err = init_vqs(vi); if (err) @@ -3380,8 +3416,6 @@ static int virtnet_probe(struct virtio_device *vdev) netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs); netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs); - virtnet_init_settings(dev); - if (virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY)) { vi->failover = net_failover_create(vi->dev); if (IS_ERR(vi->failover)) { -- 2.31.0
Support set_ringparam based on virtio queue reset. The rx,tx_pending required to be passed must be power of 2. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 47 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 47 insertions(+) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index f1bdc6ce21c3..1fa2d632a994 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -2290,6 +2290,52 @@ static void virtnet_get_ringparam(struct net_device *dev, ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq); } +static int virtnet_set_ringparam(struct net_device *dev, + struct ethtool_ringparam *ring, + struct kernel_ethtool_ringparam *kernel_ring, + struct netlink_ext_ack *extack) +{ + struct virtnet_info *vi = netdev_priv(dev); + u32 rx_pending, tx_pending; + struct receive_queue *rq; + struct send_queue *sq; + int i, err; + + if (ring->rx_mini_pending || ring->rx_jumbo_pending) + return -EINVAL; + + rx_pending = virtqueue_get_vring_size(vi->rq[0].vq); + tx_pending = virtqueue_get_vring_size(vi->sq[0].vq); + + if (ring->rx_pending == rx_pending && + ring->tx_pending == tx_pending) + return 0; + + if (ring->rx_pending > virtqueue_get_vring_max_size(vi->rq[0].vq)) + return -EINVAL; + + if (ring->tx_pending > virtqueue_get_vring_max_size(vi->sq[0].vq)) + return -EINVAL; + + for (i = 0; i < vi->max_queue_pairs; i++) { + rq = vi->rq + i; + sq = vi->sq + i; + + if (ring->tx_pending != tx_pending) { + err = virtnet_tx_vq_reset(vi, sq, ring->tx_pending); + if (err) + return err; + } + + if (ring->rx_pending != rx_pending) { + err = virtnet_rx_vq_reset(vi, rq, ring->rx_pending); + if (err) + return err; + } + } + + return 0; +} static void virtnet_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info) @@ -2523,6 +2569,7 @@ static const struct ethtool_ops virtnet_ethtool_ops = { .get_drvinfo = virtnet_get_drvinfo, .get_link = ethtool_op_get_link, .get_ringparam = virtnet_get_ringparam, + .set_ringparam = virtnet_set_ringparam, .get_strings = virtnet_get_strings, .get_sset_count = virtnet_get_sset_count, .get_ethtool_stats = virtnet_get_ethtool_stats, -- 2.31.0
On 2022/3/8 8:34 PM, Xuan Zhuo wrote:
> [v7 cover letter quoted in full; trimmed here]

The series became kind of huge. I'd suggest to split it into two series:

1) refactoring of the virtio_ring to prepare for the resize
2) the reset support + virtio-net support

Thanks