Si-Wei Liu
2022-Aug-26 08:52 UTC
[virtio-dev] [PATCH RESEND v2 2/2] virtio-net: use mtu size as buffer length for big packets
Sorry for the delay. I didn't notice this thread, as it was not addressed to my work email. Please copy my work email if something needs my immediate attention.

On 8/25/2022 5:38 AM, Gavin Li wrote:
> Currently add_recvbuf_big() allocates MAX_SKB_FRAGS segments for big
> packets even when GUEST_* offloads are not present on the device.
> However, if guest GSO is not supported, it would be sufficient to
> allocate segments to cover just up to the MTU size and no further.
> Allocating the maximum amount of segments results in a large waste of
> buffer space in the queue, which limits the number of packets that can
> be buffered and can result in reduced performance.
>
> Therefore, if guest GSO is not supported, use the MTU to calculate the
> optimal amount of segments required.
>
> When guest offload is enabled at runtime, the RQ already holds packets
> of less than 64K bytes. So when a 64KB packet arrives, all packets of
> that size will be dropped, and the RQ becomes unusable.
>
> So this means that during the set_guest_offloads() phase, RQs have to
> be destroyed and recreated, which requires almost a driver reload.

Yes, this needs VIRTIO_F_RING_RESET and disable_vq_and_reset() to be done on the RQ, refilling it with buffers of the appropriate size. Not for this patch anyway.

> If VIRTIO_NET_F_CTRL_GUEST_OFFLOADS has been negotiated, then the
> driver should always treat them as GSO enabled.
>
> Below are the iperf TCP test results over a Mellanox NIC, using vDPA
> for 1 VQ, queue size 1024, before and after the change, with the iperf
> server running over the virtio-net interface.
>
> MTU (Bytes)   Bandwidth (Gbit/s)
>               Before    After
> 1500          22.5      22.4
> 9000          12.8      25.9
>
> Signed-off-by: Gavin Li <gavinl at nvidia.com>
> Reviewed-by: Gavi Teitz <gavi at nvidia.com>
> Reviewed-by: Parav Pandit <parav at nvidia.com>
> ---
> changelog:
> v1->v2
> - Addressed comments from Jason, Michael, Si-Wei.
> - Remove the flag of guest GSO support; set sg_num for big packets and
>   use it directly
> - Recalculate sg_num for big packets in virtnet_set_guest_offloads
> - Replace the round-up algorithm with DIV_ROUND_UP
> ---
>  drivers/net/virtio_net.c | 41 +++++++++++++++++++++++++++-------------
>  1 file changed, 28 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index e1904877d461..ec8c135a26d6 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -225,6 +225,9 @@ struct virtnet_info {
>  	/* I like... big packets and I cannot lie! */
>  	bool big_packets;
>
> +	/* number of sg entries allocated for big packets */
> +	unsigned int big_packets_sg_num;
> +
>  	/* Host will merge rx buffers for big packets (shake it! shake it!) */
>  	bool mergeable_rx_bufs;
>
> @@ -1331,10 +1334,10 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
>  	char *p;
>  	int i, err, offset;
>
> -	sg_init_table(rq->sg, MAX_SKB_FRAGS + 2);
> +	sg_init_table(rq->sg, vi->big_packets_sg_num + 2);
>
> -	/* page in rq->sg[MAX_SKB_FRAGS + 1] is list tail */
> -	for (i = MAX_SKB_FRAGS + 1; i > 1; --i) {
> +	/* page in rq->sg[vi->big_packets_sg_num + 1] is list tail */
> +	for (i = vi->big_packets_sg_num + 1; i > 1; --i) {
>  		first = get_a_page(rq, gfp);
>  		if (!first) {
>  			if (list)
> @@ -1365,7 +1368,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
>
>  	/* chain first in list head */
>  	first->private = (unsigned long)list;
> -	err = virtqueue_add_inbuf(rq->vq, rq->sg, MAX_SKB_FRAGS + 2,
> +	err = virtqueue_add_inbuf(rq->vq, rq->sg, vi->big_packets_sg_num + 2,
>  				  first, gfp);
>  	if (err < 0)
>  		give_pages(rq, first);
> @@ -3690,13 +3693,31 @@ static bool virtnet_check_guest_gso(const struct virtnet_info *vi)
>  		virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO));
>  }
>
> +static void virtnet_set_big_packets_fields(struct virtnet_info *vi, const int mtu)
> +{
> +	bool guest_gso = virtnet_check_guest_gso(vi);
> +
> +	/* If we can receive ANY GSO packets, we must allocate large ones. */
> +	if (mtu > ETH_DATA_LEN || guest_gso) {
> +		vi->big_packets = true;
> +		/* If guest offload control is offered by the device, the
> +		 * user can modify the offload capability at runtime. Posted
> +		 * buffers may then fall short in size, hence allocate for
> +		 * the max size.
> +		 */
> +		if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS))
> +			vi->big_packets_sg_num = MAX_SKB_FRAGS;

MAX_SKB_FRAGS is needed when any of the guest GSO capabilities is offered. This is per spec, regardless of whether VIRTIO_NET_F_CTRL_GUEST_OFFLOADS is negotiated or not. Quoting the spec:

> If VIRTIO_NET_F_MRG_RXBUF is not negotiated:
>
> * If VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6 or
>   VIRTIO_NET_F_GUEST_UFO are negotiated, the driver SHOULD populate
>   the receive queue(s) with buffers of at least 65562 bytes.

I would just simply put:

	vi->big_packets_sg_num = guest_gso ? MAX_SKB_FRAGS :
					     DIV_ROUND_UP(mtu, PAGE_SIZE);

There needs to be another patch to address the virtnet_set_guest_offloads() case. For now, just leave it with a TODO comment and keep it as-is (don't start with the full MAX_SKB_FRAGS segments).

> +		else
> +			vi->big_packets_sg_num = DIV_ROUND_UP(mtu, PAGE_SIZE);
> +	}
> +}
> +
>  static int virtnet_probe(struct virtio_device *vdev)
>  {
>  	int i, err = -ENOMEM;
>  	struct net_device *dev;
>  	struct virtnet_info *vi;
>  	u16 max_queue_pairs;
> -	int mtu;
> +	int mtu = 0;
>
>  	/* Find if host supports multiqueue/rss virtio_net device */
>  	max_queue_pairs = 1;
> @@ -3784,10 +3805,6 @@ static int virtnet_probe(struct virtio_device *vdev)
>  	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
>  	spin_lock_init(&vi->refill_lock);
>
> -	/* If we can receive ANY GSO packets, we must allocate large ones. */
> -	if (virtnet_check_guest_gso(vi))
> -		vi->big_packets = true;
> -
>  	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
>  		vi->mergeable_rx_bufs = true;
>
> @@ -3853,12 +3870,10 @@ static int virtnet_probe(struct virtio_device *vdev)
>
>  		dev->mtu = mtu;
>  		dev->max_mtu = mtu;
> -
> -		/* TODO: size buffers correctly in this case. */
> -		if (dev->mtu > ETH_DATA_LEN)
> -			vi->big_packets = true;
>  	}
>

Can we add a comment here to note that this essentially has to be dev->max_mtu when F_MTU is negotiated?
The implicit assumption of mtu being 0 when F_MTU is not negotiated is not so obvious to readers.

Thanks,
-Siwei

> +	virtnet_set_big_packets_fields(vi, mtu);
> +
>  	if (vi->any_header_sg)
>  		dev->needed_headroom = vi->hdr_len;
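A rough sketch of the RQ reset-and-refill flow alluded to above is given here. disable_vq_and_reset() and enable_vq_after_reset() are hypothetical helper names taken from this discussion rather than a settled kernel API, so treat this as an assumption about a follow-up patch, written in the context of drivers/net/virtio_net.c:

/* Sketch only: resize RQ buffers after a runtime offload change.
 * disable_vq_and_reset()/enable_vq_after_reset() are hypothetical
 * helpers named after this discussion, not an in-tree API.
 */
static int virtnet_rq_resize_bufs(struct virtnet_info *vi,
				  struct receive_queue *rq)
{
	int err;

	/* Without ring reset, the previously posted buffers of the old
	 * size cannot be reclaimed.
	 */
	if (!virtio_has_feature(vi->vdev, VIRTIO_F_RING_RESET))
		return -EOPNOTSUPP;

	err = disable_vq_and_reset(rq->vq);	/* quiesce, drop old buffers */
	if (err)
		return err;

	/* Recompute the per-buffer sg count for the new offload state. */
	virtnet_set_big_packets_fields(vi, vi->dev->mtu);

	err = enable_vq_after_reset(rq->vq);
	if (err)
		return err;

	/* Refill with buffers of the appropriate size. */
	if (!try_fill_recv(vi, rq, GFP_KERNEL))
		schedule_delayed_work(&vi->refill, 0);

	return 0;
}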
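Folding the suggested one-liner back into the patch's helper would look roughly like the sketch below, with the runtime-offload case left as the requested TODO. This is a sketch of the review suggestion, not the final patch:

static void virtnet_set_big_packets_fields(struct virtnet_info *vi,
					   const int mtu)
{
	bool guest_gso = virtnet_check_guest_gso(vi);

	/* If we can receive ANY GSO packets, we must allocate large ones. */
	if (mtu > ETH_DATA_LEN || guest_gso) {
		vi->big_packets = true;
		/* Per spec, if any of GUEST_TSO4/TSO6/UFO is negotiated, the
		 * driver should post buffers of at least 65562 bytes, i.e.
		 * MAX_SKB_FRAGS pages; otherwise sizing for the MTU suffices.
		 *
		 * TODO: recompute and re-post buffers from
		 * virtnet_set_guest_offloads() when offloads are toggled at
		 * runtime (needs VIRTIO_F_RING_RESET).
		 */
		vi->big_packets_sg_num = guest_gso ? MAX_SKB_FRAGS :
				DIV_ROUND_UP(mtu, PAGE_SIZE);
	}
}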
Parav Pandit
2022-Aug-26 16:41 UTC
[virtio-dev] [PATCH RESEND v2 2/2] virtio-net: use mtu size as buffer length for big packets
From: Si-Wei Liu <si-wei.liu at oracle.com>
Sent: Friday, August 26, 2022 4:52 AM

> Sorry for the delay. I didn't notice this thread, as it was not
> addressed to my work email. Please copy my work email if something
> needs my immediate attention.

Can you please set up your mail client to post plain-text mail, as required by the mailing list? A conversation without it is close to impossible to track.

> +	/* If we can receive ANY GSO packets, we must allocate large ones. */
> +	if (mtu > ETH_DATA_LEN || guest_gso) {
> +		vi->big_packets = true;
> +		/* If guest offload control is offered by the device, the
> +		 * user can modify the offload capability at runtime. Posted
> +		 * buffers may then fall short in size, hence allocate for
> +		 * the max size.
> +		 */
> +		if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS))
> +			vi->big_packets_sg_num = MAX_SKB_FRAGS;

> MAX_SKB_FRAGS is needed when any of the guest GSO capabilities is
> offered. This is per spec, regardless of whether
> VIRTIO_NET_F_CTRL_GUEST_OFFLOADS is negotiated or not. Quoting the
> spec:
>
>> If VIRTIO_NET_F_MRG_RXBUF is not negotiated:
>>
>> * If VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6 or
>>   VIRTIO_NET_F_GUEST_UFO are negotiated, the driver SHOULD populate
>>   the receive queue(s) with buffers of at least 65562 bytes.

The spec recommendation is good here, but the Linux driver knows that such offload settings cannot change if the VIRTIO_NET_F_CTRL_GUEST_OFFLOADS feature is not offered. So I think we should add a comment with this reasoning, and a reference to the spec, to the code to justify the optimization.
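To make the saving concrete, here is a small standalone userspace illustration (not driver code) of how many page-sized sg entries one big-packet buffer consumes before and after the change. The 4 KiB page size and MAX_SKB_FRAGS value of 17 are assumptions, typical of x86-64 but configuration-dependent:

/* Standalone illustration, not driver code. */
#include <stdio.h>

#define PAGE_SIZE	4096	/* assumed 4 KiB pages */
#define MAX_SKB_FRAGS	17	/* assumed, typical on x86-64 */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* Big-packet MTUs with guest GSO not negotiated. */
	int mtus[] = { 4000, 9000 };

	for (unsigned int i = 0; i < sizeof(mtus) / sizeof(mtus[0]); i++) {
		int before = MAX_SKB_FRAGS;	/* always sized for ~64K */
		int after = DIV_ROUND_UP(mtus[i], PAGE_SIZE);

		printf("MTU %5d: %2d pages per buffer before, %d after\n",
		       mtus[i], before, after);
	}
	return 0;
}

At MTU 9000, each posted buffer chain shrinks from 17 pages to 3, so the same 1024-entry queue can hold roughly five times more packets, which is the effect behind the 9000-byte iperf improvement quoted earlier in the thread.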