For historical reasons, the XDP implementation in virtio-net is rather messy. For
example, the handling of XDP actions exists as two copies of very similar code,
and the same is true of the page/xdp_page handling.

The purpose of this patch set is to refactor that code, to make it easier to
maintain and to keep later developers from introducing new bugs through the
tangled logic.

In addition, the AF_XDP support that I want to submit later will also need to
reuse the XDP logic, such as the action handling, and I don't want to introduce
yet another similar copy. With this refactoring I can reuse the code in the
future.

Please review.

Thanks.

Xuan Zhuo (8):
  virtio_net: mergeable xdp: put old page immediately
  virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  virtio_net: introduce virtnet_xdp_handler() to separate the logic of
    running XDP
  virtio_net: separate the logic of freeing xdp shinfo
  virtio_net: separate the logic of freeing the rest mergeable buf
  virtio_net: auto release xdp shinfo
  virtio_net: introduce receive_mergeable_xdp()
  virtio_net: introduce receive_small_xdp()

 drivers/net/virtio_net.c | 615 +++++++++++++++++++++++----------------
 1 file changed, 357 insertions(+), 258 deletions(-)

--
2.32.0.3.g01195cf9f
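As a condensed sketch of the interface this series converges on (simplified from
patch 3 below; the statistics accounting and the actual XDP_TX/XDP_REDIRECT
transmit work are omitted here), the duplicated per-action handling collapses
into one helper that both receive paths, and later AF_XDP, can share:

enum {
        VIRTNET_XDP_RES_PASS,           /* build an skb from the buffer */
        VIRTNET_XDP_RES_DROP,           /* caller releases the buffer */
        VIRTNET_XDP_RES_CONSUMED,       /* transmitted or redirected; caller does nothing */
};

/* Run the XDP program and fold every XDP action into one of the three
 * results above, so each caller only needs a small switch. */
static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
                               struct net_device *dev, unsigned int *xdp_xmit,
                               struct virtnet_rq_stats *stats);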
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately
In the mergeable XDP path of virtio-net we always have to check whether two
pages are in use and pick the right one to release. That makes the handling of
the XDP actions complicated and easy to get wrong. The current rules are:

* If xdp_page is used (PASS, TX, REDIRECT), release the old page.

* In the drop case, release both: the old page obtained from buf is released
  inside err_xdp, and xdp_page has to be released by us.

In fact, when we allocate a new page we can release the old page immediately.
Then only one page is ever in use, and in the drop case we only need to release
the new page. On the drop path err_xdp releases the variable "page", so we just
have to let "page" point to the new xdp_page in advance.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio_net.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e2560b6f7980..4d2bf1ce0730 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1245,6 +1245,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (!xdp_page)
 				goto err_xdp;
 			offset = VIRTIO_XDP_HEADROOM;
+
+			put_page(page);
+			page = xdp_page;
 		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
 			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
 						  sizeof(struct skb_shared_info));
@@ -1259,6 +1262,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 				page_address(page) + offset, len);
 			frame_sz = PAGE_SIZE;
 			offset = VIRTIO_XDP_HEADROOM;
+
+			put_page(page);
+			page = xdp_page;
 		} else {
 			xdp_page = page;
 		}
@@ -1278,8 +1284,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (unlikely(!head_skb))
 				goto err_xdp_frags;
 
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			return head_skb;
 		case XDP_TX:
@@ -1297,8 +1301,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 				goto err_xdp_frags;
 			}
 			*xdp_xmit |= VIRTIO_XDP_TX;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		case XDP_REDIRECT:
@@ -1307,8 +1309,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (err)
 				goto err_xdp_frags;
 			*xdp_xmit |= VIRTIO_XDP_REDIR;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		default:
@@ -1321,9 +1321,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		goto err_xdp_frags;
 	}
 err_xdp_frags:
-		if (unlikely(xdp_page != page))
-			__free_pages(xdp_page, 0);
-
 		if (xdp_buff_has_frags(&xdp)) {
 			shinfo = xdp_get_shared_info_from_buff(&xdp);
 			for (i = 0; i < shinfo->nr_frags; i++) {
--
2.32.0.3.g01195cf9f
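The resulting ownership rule, condensed from the first hunk above (the comments
are explanatory and not part of the patch): "page" always carries the single
reference that still has to be dropped, whichever page is currently in use.

        xdp_page = xdp_linearize_page(rq, &num_buf, page, offset,
                                      VIRTIO_XDP_HEADROOM, &len);
        if (!xdp_page)
                goto err_xdp;           /* err_xdp releases "page" */

        put_page(page);                 /* the old page is no longer needed */
        page = xdp_page;                /* only the new page remains to release */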
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
Separating the logic of preparation for xdp from receive_mergeable. The purpose of this is to simplify the logic of execution of XDP. The main logic here is that when headroom is insufficient, we need to allocate a new page and calculate offset. It should be noted that if there is new page, the variable page will refer to the new page. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 135 ++++++++++++++++++++++----------------- 1 file changed, 77 insertions(+), 58 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 4d2bf1ce0730..bb426958cdd4 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, return 0; } +static void *mergeable_xdp_prepare(struct virtnet_info *vi, + struct receive_queue *rq, + struct bpf_prog *xdp_prog, + void *ctx, + unsigned int *frame_sz, + int *num_buf, + struct page **page, + int offset, + unsigned int *len, + struct virtio_net_hdr_mrg_rxbuf *hdr) +{ + unsigned int truesize = mergeable_ctx_to_truesize(ctx); + unsigned int headroom = mergeable_ctx_to_headroom(ctx); + struct page *xdp_page; + unsigned int xdp_room; + + /* Transient failure which in theory could occur if + * in-flight packets from before XDP was enabled reach + * the receive path after XDP is loaded. + */ + if (unlikely(hdr->hdr.gso_type)) + return NULL; + + /* Now XDP core assumes frag size is PAGE_SIZE, but buffers + * with headroom may add hole in truesize, which + * make their length exceed PAGE_SIZE. So we disabled the + * hole mechanism for xdp. See add_recvbuf_mergeable(). + */ + *frame_sz = truesize; + + /* This happens when headroom is not enough because + * of the buffer was prefilled before XDP is set. + * This should only happen for the first several packets. + * In fact, vq reset can be used here to help us clean up + * the prefilled buffers, but many existing devices do not + * support it, and we don't want to bother users who are + * using xdp normally. + */ + if (!xdp_prog->aux->xdp_has_frags && + (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) { + /* linearize data for XDP */ + xdp_page = xdp_linearize_page(rq, num_buf, + *page, offset, + VIRTIO_XDP_HEADROOM, + len); + + if (!xdp_page) + return NULL; + } else if (unlikely(headroom < virtnet_get_headroom(vi))) { + xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM + + sizeof(struct skb_shared_info)); + if (*len + xdp_room > PAGE_SIZE) + return NULL; + + xdp_page = alloc_page(GFP_ATOMIC); + if (!xdp_page) + return NULL; + + memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM, + page_address(*page) + offset, *len); + } else { + return page_address(*page) + offset; + } + + *frame_sz = PAGE_SIZE; + + put_page(*page); + + *page = xdp_page; + + return page_address(xdp_page) + VIRTIO_XDP_HEADROOM; +} + static struct sk_buff *receive_mergeable(struct net_device *dev, struct virtnet_info *vi, struct receive_queue *rq, @@ -1181,7 +1254,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, unsigned int headroom = mergeable_ctx_to_headroom(ctx); unsigned int tailroom = headroom ? 
sizeof(struct skb_shared_info) : 0; unsigned int room = SKB_DATA_ALIGN(headroom + tailroom); - unsigned int frame_sz, xdp_room; + unsigned int frame_sz; int err; head_skb = NULL; @@ -1211,65 +1284,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, u32 act; int i; - /* Transient failure which in theory could occur if - * in-flight packets from before XDP was enabled reach - * the receive path after XDP is loaded. - */ - if (unlikely(hdr->hdr.gso_type)) + data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page, + offset, &len, hdr); + if (!data) goto err_xdp; - /* Now XDP core assumes frag size is PAGE_SIZE, but buffers - * with headroom may add hole in truesize, which - * make their length exceed PAGE_SIZE. So we disabled the - * hole mechanism for xdp. See add_recvbuf_mergeable(). - */ - frame_sz = truesize; - - /* This happens when headroom is not enough because - * of the buffer was prefilled before XDP is set. - * This should only happen for the first several packets. - * In fact, vq reset can be used here to help us clean up - * the prefilled buffers, but many existing devices do not - * support it, and we don't want to bother users who are - * using xdp normally. - */ - if (!xdp_prog->aux->xdp_has_frags && - (num_buf > 1 || headroom < virtnet_get_headroom(vi))) { - /* linearize data for XDP */ - xdp_page = xdp_linearize_page(rq, &num_buf, - page, offset, - VIRTIO_XDP_HEADROOM, - &len); - frame_sz = PAGE_SIZE; - - if (!xdp_page) - goto err_xdp; - offset = VIRTIO_XDP_HEADROOM; - - put_page(page); - page = xdp_page; - } else if (unlikely(headroom < virtnet_get_headroom(vi))) { - xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM + - sizeof(struct skb_shared_info)); - if (len + xdp_room > PAGE_SIZE) - goto err_xdp; - - xdp_page = alloc_page(GFP_ATOMIC); - if (!xdp_page) - goto err_xdp; - - memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM, - page_address(page) + offset, len); - frame_sz = PAGE_SIZE; - offset = VIRTIO_XDP_HEADROOM; - - put_page(page); - page = xdp_page; - } else { - xdp_page = page; - } - - data = page_address(xdp_page) + offset; err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, &num_buf, &xdp_frags_truesz, stats); if (unlikely(err)) -- 2.32.0.3.g01195cf9f
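The calling convention of the new helper, spelled out as a doc-comment style
sketch (the wording here is a summary of the function above, not part of the
patch):

/*
 * mergeable_xdp_prepare - make the head buffer usable for running XDP
 *
 * Returns the address that the xdp_buff should be built around, or NULL
 * if the packet has to be dropped.
 *
 * When the prefilled buffer lacks headroom, or must be linearized because
 * the program does not support frags, a new page is allocated, the old
 * head page is released and *page is pointed at the new page, so the
 * caller always has exactly one page reference left to drop.  *frame_sz,
 * *num_buf and *len are updated to match the buffer actually used.
 */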
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running XDP
At present, we have two similar logic to perform the XDP prog. Therefore, this PATCH separates the code of executing XDP, which is conducive to later maintenance. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------ 1 file changed, 75 insertions(+), 67 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index bb426958cdd4..72b9d6ee4024 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -301,6 +301,15 @@ struct padded_vnet_hdr { char padding[12]; }; +enum { + /* xdp pass */ + VIRTNET_XDP_RES_PASS, + /* drop packet. the caller needs to release the page. */ + VIRTNET_XDP_RES_DROP, + /* packet is consumed by xdp. the caller needs to do nothing. */ + VIRTNET_XDP_RES_CONSUMED, +}; + static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf); static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf); @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev, return ret; } +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, + struct net_device *dev, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct xdp_frame *xdpf; + int err; + u32 act; + + act = bpf_prog_run_xdp(xdp_prog, xdp); + stats->xdp_packets++; + + switch (act) { + case XDP_PASS: + return VIRTNET_XDP_RES_PASS; + + case XDP_TX: + stats->xdp_tx++; + xdpf = xdp_convert_buff_to_frame(xdp); + if (unlikely(!xdpf)) + return VIRTNET_XDP_RES_DROP; + + err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); + if (unlikely(!err)) { + xdp_return_frame_rx_napi(xdpf); + } else if (unlikely(err < 0)) { + trace_xdp_exception(dev, xdp_prog, act); + return VIRTNET_XDP_RES_DROP; + } + + *xdp_xmit |= VIRTIO_XDP_TX; + return VIRTNET_XDP_RES_CONSUMED; + + case XDP_REDIRECT: + stats->xdp_redirects++; + err = xdp_do_redirect(dev, xdp, xdp_prog); + if (err) + return VIRTNET_XDP_RES_DROP; + + *xdp_xmit |= VIRTIO_XDP_REDIR; + return VIRTNET_XDP_RES_CONSUMED; + + default: + bpf_warn_invalid_xdp_action(dev, xdp_prog, act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(dev, xdp_prog, act); + fallthrough; + case XDP_DROP: + return VIRTNET_XDP_RES_DROP; + } +} + static unsigned int virtnet_get_headroom(struct virtnet_info *vi) { return vi->xdp_enabled ? 
VIRTIO_XDP_HEADROOM : 0; @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev, struct page *page = virt_to_head_page(buf); unsigned int delta = 0; struct page *xdp_page; - int err; unsigned int metasize = 0; len -= vi->hdr_len; @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev, xdp_prog = rcu_dereference(rq->xdp_prog); if (xdp_prog) { struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset; - struct xdp_frame *xdpf; struct xdp_buff xdp; void *orig_data; u32 act; @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev, xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len, xdp_headroom, len, true); orig_data = xdp.data; - act = bpf_prog_run_xdp(xdp_prog, &xdp); - stats->xdp_packets++; + + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); switch (act) { - case XDP_PASS: + case VIRTNET_XDP_RES_PASS: /* Recalculate length in case bpf program changed it */ delta = orig_data - xdp.data; len = xdp.data_end - xdp.data; metasize = xdp.data - xdp.data_meta; break; - case XDP_TX: - stats->xdp_tx++; - xdpf = xdp_convert_buff_to_frame(&xdp); - if (unlikely(!xdpf)) - goto err_xdp; - err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); - if (unlikely(!err)) { - xdp_return_frame_rx_napi(xdpf); - } else if (unlikely(err < 0)) { - trace_xdp_exception(vi->dev, xdp_prog, act); - goto err_xdp; - } - *xdp_xmit |= VIRTIO_XDP_TX; - rcu_read_unlock(); - goto xdp_xmit; - case XDP_REDIRECT: - stats->xdp_redirects++; - err = xdp_do_redirect(dev, &xdp, xdp_prog); - if (err) - goto err_xdp; - *xdp_xmit |= VIRTIO_XDP_REDIR; + + case VIRTNET_XDP_RES_CONSUMED: rcu_read_unlock(); goto xdp_xmit; - default: - bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act); - fallthrough; - case XDP_ABORTED: - trace_xdp_exception(vi->dev, xdp_prog, act); - goto err_xdp; - case XDP_DROP: + + case VIRTNET_XDP_RES_DROP: goto err_xdp; } } @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, if (xdp_prog) { unsigned int xdp_frags_truesz = 0; struct skb_shared_info *shinfo; - struct xdp_frame *xdpf; struct page *xdp_page; struct xdp_buff xdp; void *data; @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, if (unlikely(err)) goto err_xdp_frags; - act = bpf_prog_run_xdp(xdp_prog, &xdp); - stats->xdp_packets++; + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); switch (act) { - case XDP_PASS: + case VIRTNET_XDP_RES_PASS: head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz); if (unlikely(!head_skb)) goto err_xdp_frags; rcu_read_unlock(); return head_skb; - case XDP_TX: - stats->xdp_tx++; - xdpf = xdp_convert_buff_to_frame(&xdp); - if (unlikely(!xdpf)) { - netdev_dbg(dev, "convert buff to frame failed for xdp\n"); - goto err_xdp_frags; - } - err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); - if (unlikely(!err)) { - xdp_return_frame_rx_napi(xdpf); - } else if (unlikely(err < 0)) { - trace_xdp_exception(vi->dev, xdp_prog, act); - goto err_xdp_frags; - } - *xdp_xmit |= VIRTIO_XDP_TX; - rcu_read_unlock(); - goto xdp_xmit; - case XDP_REDIRECT: - stats->xdp_redirects++; - err = xdp_do_redirect(dev, &xdp, xdp_prog); - if (err) - goto err_xdp_frags; - *xdp_xmit |= VIRTIO_XDP_REDIR; + + case VIRTNET_XDP_RES_CONSUMED: rcu_read_unlock(); goto xdp_xmit; - default: - bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act); - fallthrough; - case XDP_ABORTED: - trace_xdp_exception(vi->dev, xdp_prog, act); - fallthrough; - case XDP_DROP: + + case VIRTNET_XDP_RES_DROP: goto 
err_xdp_frags; } err_xdp_frags: -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo
This patch introduces a new function that releases the pages held by the xdp
shinfo frags. A subsequent patch will reuse this function.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio_net.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 72b9d6ee4024..09aed60e2f51 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -798,6 +798,21 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	return ret;
 }
 
+static void put_xdp_frags(struct xdp_buff *xdp)
+{
+	struct skb_shared_info *shinfo;
+	struct page *xdp_page;
+	int i;
+
+	if (xdp_buff_has_frags(xdp)) {
+		shinfo = xdp_get_shared_info_from_buff(xdp);
+		for (i = 0; i < shinfo->nr_frags; i++) {
+			xdp_page = skb_frag_page(&shinfo->frags[i]);
+			put_page(xdp_page);
+		}
+	}
+}
+
 static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			       struct net_device *dev,
 			       unsigned int *xdp_xmit,
@@ -1312,12 +1327,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		unsigned int xdp_frags_truesz = 0;
-		struct skb_shared_info *shinfo;
-		struct page *xdp_page;
 		struct xdp_buff xdp;
 		void *data;
 		u32 act;
-		int i;
 
 		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
 					     offset, &len, hdr);
@@ -1348,14 +1360,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto err_xdp_frags;
 		}
 err_xdp_frags:
-		if (xdp_buff_has_frags(&xdp)) {
-			shinfo = xdp_get_shared_info_from_buff(&xdp);
-			for (i = 0; i < shinfo->nr_frags; i++) {
-				xdp_page = skb_frag_page(&shinfo->frags[i]);
-				put_page(xdp_page);
-			}
-		}
-
+		put_xdp_frags(&xdp);
 		goto err_xdp;
 	}
 	rcu_read_unlock();
--
2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 5/8] virtio_net: separate the logic of freeing the rest mergeable buf
This patch introduces a new function that frees the remaining mergeable
buffers of a packet. A subsequent patch will reuse this function.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio_net.c | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 09aed60e2f51..a3f2bcb3db27 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1076,6 +1076,28 @@ static struct sk_buff *receive_big(struct net_device *dev,
 	return NULL;
 }
 
+static void mergeable_buf_free(struct receive_queue *rq, int num_buf,
+			       struct net_device *dev,
+			       struct virtnet_rq_stats *stats)
+{
+	struct page *page;
+	void *buf;
+	int len;
+
+	while (num_buf-- > 1) {
+		buf = virtqueue_get_buf(rq->vq, &len);
+		if (unlikely(!buf)) {
+			pr_debug("%s: rx error: %d buffers missing\n",
+				 dev->name, num_buf);
+			dev->stats.rx_length_errors++;
+			break;
+		}
+		stats->bytes += len;
+		page = virt_to_head_page(buf);
+		put_page(page);
+	}
+}
+
 /* Why not use xdp_build_skb_from_frame() ?
  * XDP core assumes that xdp frags are PAGE_SIZE in length, while in
  * virtio-net there are 2 points that do not match its requirements:
@@ -1436,18 +1458,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	stats->xdp_drops++;
 err_skb:
 	put_page(page);
-	while (num_buf-- > 1) {
-		buf = virtqueue_get_buf(rq->vq, &len);
-		if (unlikely(!buf)) {
-			pr_debug("%s: rx error: %d buffers missing\n",
-				 dev->name, num_buf);
-			dev->stats.rx_length_errors++;
-			break;
-		}
-		stats->bytes += len;
-		page = virt_to_head_page(buf);
-		put_page(page);
-	}
+	mergeable_buf_free(rq, num_buf, dev, stats);
+
 err_buf:
 	stats->drops++;
 	dev_kfree_skb(head_skb);
--
2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 6/8] virtio_net: auto release xdp shinfo

Make virtnet_build_xdp_buff_mrg() and virtnet_xdp_handler() release the xdp
shinfo frags themselves on failure, so that the callers no longer need to
take care of the xdp shinfo.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio_net.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a3f2bcb3db27..136131a7868a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -833,14 +833,14 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		stats->xdp_tx++;
 		xdpf = xdp_convert_buff_to_frame(xdp);
 		if (unlikely(!xdpf))
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 
 		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
 		if (unlikely(!err)) {
 			xdp_return_frame_rx_napi(xdpf);
 		} else if (unlikely(err < 0)) {
 			trace_xdp_exception(dev, xdp_prog, act);
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 		}
 
 		*xdp_xmit |= VIRTIO_XDP_TX;
@@ -850,7 +850,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		stats->xdp_redirects++;
 		err = xdp_do_redirect(dev, xdp, xdp_prog);
 		if (err)
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 
 		*xdp_xmit |= VIRTIO_XDP_REDIR;
 		return VIRTNET_XDP_RES_CONSUMED;
@@ -862,8 +862,12 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		trace_xdp_exception(dev, xdp_prog, act);
 		fallthrough;
 	case XDP_DROP:
-		return VIRTNET_XDP_RES_DROP;
+		goto drop;
 	}
+
+drop:
+	put_xdp_frags(xdp);
+	return VIRTNET_XDP_RES_DROP;
 }
 
 static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
@@ -1199,7 +1203,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 				 dev->name, *num_buf,
 				 virtio16_to_cpu(vi->vdev, hdr->num_buffers));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}
 
 		stats->bytes += len;
@@ -1218,7 +1222,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 			pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
 				 dev->name, len, (unsigned long)(truesize - room));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}
 
 		frag = &shinfo->frags[shinfo->nr_frags++];
@@ -1233,6 +1237,10 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 
 	*xdp_frags_truesize = xdp_frags_truesz;
 	return 0;
+
+err:
+	put_xdp_frags(xdp);
+	return -EINVAL;
 }
 
 static void *mergeable_xdp_prepare(struct virtnet_info *vi,
@@ -1361,7 +1369,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
-			goto err_xdp_frags;
+			goto err_xdp;
 
 		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
 
@@ -1369,7 +1377,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		case VIRTNET_XDP_RES_PASS:
 			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
 			if (unlikely(!head_skb))
-				goto err_xdp_frags;
+				goto err_xdp;
 
 			rcu_read_unlock();
 			return head_skb;
@@ -1379,11 +1387,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto xdp_xmit;
 
 		case VIRTNET_XDP_RES_DROP:
-			goto err_xdp_frags;
+			goto err_xdp;
 		}
-err_xdp_frags:
-		put_xdp_frags(&xdp);
-		goto err_xdp;
 	}
 	rcu_read_unlock();
 
--
2.32.0.3.g01195cf9f
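The caller-side effect, shown as a condensed sketch of the resulting
receive_mergeable() (taken from the hunks above, with explanatory comments
added): the error paths collapse into a plain goto because the helpers now
clean up the frags themselves.

        err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
                                         &num_buf, &xdp_frags_truesz, stats);
        if (unlikely(err))
                goto err_xdp;   /* the helper already called put_xdp_frags() */

        act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
        if (act == VIRTNET_XDP_RES_DROP)
                goto err_xdp;   /* likewise freed inside the handler */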
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp()
The purpose of this patch is to simplify the receive_mergeable(). Separate all the logic of XDP into a function. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 128 +++++++++++++++++++++++---------------- 1 file changed, 76 insertions(+), 52 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 136131a7868a..6ecb17e972e1 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1316,6 +1316,63 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi, return page_address(xdp_page) + VIRTIO_XDP_HEADROOM; } +static struct sk_buff *receive_mergeable_xdp(struct net_device *dev, + struct virtnet_info *vi, + struct receive_queue *rq, + struct bpf_prog *xdp_prog, + void *buf, + void *ctx, + unsigned int len, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct virtio_net_hdr_mrg_rxbuf *hdr = buf; + int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers); + struct page *page = virt_to_head_page(buf); + int offset = buf - page_address(page); + unsigned int xdp_frags_truesz = 0; + struct sk_buff *head_skb; + unsigned int frame_sz; + struct xdp_buff xdp; + void *data; + u32 act; + int err; + + data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page, + offset, &len, hdr); + if (!data) + goto err_xdp; + + err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, + &num_buf, &xdp_frags_truesz, stats); + if (unlikely(err)) + goto err_xdp; + + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); + + switch (act) { + case VIRTNET_XDP_RES_PASS: + head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz); + if (unlikely(!head_skb)) + goto err_xdp; + return head_skb; + + case VIRTNET_XDP_RES_CONSUMED: + return NULL; + + case VIRTNET_XDP_RES_DROP: + break; + } + +err_xdp: + put_page(page); + mergeable_buf_free(rq, num_buf, dev, stats); + + stats->xdp_drops++; + stats->drops++; + return NULL; +} + static struct sk_buff *receive_mergeable(struct net_device *dev, struct virtnet_info *vi, struct receive_queue *rq, @@ -1325,18 +1382,16 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, unsigned int *xdp_xmit, struct virtnet_rq_stats *stats) { - struct virtio_net_hdr_mrg_rxbuf *hdr = buf; - int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers); - struct page *page = virt_to_head_page(buf); - int offset = buf - page_address(page); - struct sk_buff *head_skb, *curr_skb; - struct bpf_prog *xdp_prog; unsigned int truesize = mergeable_ctx_to_truesize(ctx); unsigned int headroom = mergeable_ctx_to_headroom(ctx); unsigned int tailroom = headroom ? 
sizeof(struct skb_shared_info) : 0; unsigned int room = SKB_DATA_ALIGN(headroom + tailroom); - unsigned int frame_sz; - int err; + struct virtio_net_hdr_mrg_rxbuf *hdr; + struct sk_buff *head_skb, *curr_skb; + struct bpf_prog *xdp_prog; + struct page *page; + int num_buf; + int offset; head_skb = NULL; stats->bytes += len - vi->hdr_len; @@ -1348,51 +1403,24 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, goto err_skb; } - if (likely(!vi->xdp_enabled)) { - xdp_prog = NULL; - goto skip_xdp; - } - - rcu_read_lock(); - xdp_prog = rcu_dereference(rq->xdp_prog); - if (xdp_prog) { - unsigned int xdp_frags_truesz = 0; - struct xdp_buff xdp; - void *data; - u32 act; - - data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page, - offset, &len, hdr); - if (!data) - goto err_xdp; - - err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, - &num_buf, &xdp_frags_truesz, stats); - if (unlikely(err)) - goto err_xdp; - - act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); - - switch (act) { - case VIRTNET_XDP_RES_PASS: - head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz); - if (unlikely(!head_skb)) - goto err_xdp; - + if (likely(vi->xdp_enabled)) { + rcu_read_lock(); + xdp_prog = rcu_dereference(rq->xdp_prog); + if (xdp_prog) { + head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, + buf, ctx, len, xdp_xmit, + stats); rcu_read_unlock(); return head_skb; - - case VIRTNET_XDP_RES_CONSUMED: - rcu_read_unlock(); - goto xdp_xmit; - - case VIRTNET_XDP_RES_DROP: - goto err_xdp; } + rcu_read_unlock(); } - rcu_read_unlock(); -skip_xdp: + hdr = buf; + num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers); + page = virt_to_head_page(buf); + offset = buf - page_address(page); + head_skb = page_to_skb(vi, rq, page, offset, len, truesize, headroom); curr_skb = head_skb; @@ -1458,9 +1486,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len); return head_skb; -err_xdp: - rcu_read_unlock(); - stats->xdp_drops++; err_skb: put_page(page); mergeable_buf_free(rq, num_buf, dev, stats); @@ -1468,7 +1493,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, err_buf: stats->drops++; dev_kfree_skb(head_skb); -xdp_xmit: return NULL; } -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-22 03:03 UTC
[PATCH net-next 8/8] virtio_net: introduce receive_small_xdp()
The purpose of this patch is to simplify the receive_small(). Separate all the logic of XDP of small into a function. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 165 +++++++++++++++++++++++---------------- 1 file changed, 97 insertions(+), 68 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 6ecb17e972e1..23b1a6fd1224 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -939,6 +939,96 @@ static struct page *xdp_linearize_page(struct receive_queue *rq, return NULL; } +static struct sk_buff *receive_small_xdp(struct net_device *dev, + struct virtnet_info *vi, + struct receive_queue *rq, + struct bpf_prog *xdp_prog, + void *buf, + void *ctx, + unsigned int len, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + unsigned int xdp_headroom = (unsigned long)ctx; + unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom; + unsigned int headroom = vi->hdr_len + header_offset; + struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset; + struct page *page = virt_to_head_page(buf); + struct page *xdp_page; + unsigned int buflen; + struct xdp_buff xdp; + struct sk_buff *skb; + unsigned int delta = 0; + unsigned int metasize = 0; + void *orig_data; + u32 act; + + if (unlikely(hdr->hdr.gso_type)) + goto err_xdp; + + if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) { + int offset = buf - page_address(page) + header_offset; + unsigned int tlen = len + vi->hdr_len; + int num_buf = 1; + + xdp_headroom = virtnet_get_headroom(vi); + header_offset = VIRTNET_RX_PAD + xdp_headroom; + headroom = vi->hdr_len + header_offset; + buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + xdp_page = xdp_linearize_page(rq, &num_buf, page, + offset, header_offset, + &tlen); + if (!xdp_page) + goto err_xdp; + + buf = page_address(xdp_page); + put_page(page); + page = xdp_page; + } + + xdp_init_buff(&xdp, buflen, &rq->xdp_rxq); + xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len, + xdp_headroom, len, true); + orig_data = xdp.data; + + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); + + switch (act) { + case VIRTNET_XDP_RES_PASS: + /* Recalculate length in case bpf program changed it */ + delta = orig_data - xdp.data; + len = xdp.data_end - xdp.data; + metasize = xdp.data - xdp.data_meta; + break; + + case VIRTNET_XDP_RES_CONSUMED: + goto xdp_xmit; + + case VIRTNET_XDP_RES_DROP: + goto err_xdp; + } + + skb = build_skb(buf, buflen); + if (!skb) + goto err; + + skb_reserve(skb, headroom - delta); + skb_put(skb, len); + if (metasize) + skb_metadata_set(skb, metasize); + + return skb; + +err_xdp: + stats->xdp_drops++; +err: + stats->drops++; + put_page(page); +xdp_xmit: + return NULL; +} + static struct sk_buff *receive_small(struct net_device *dev, struct virtnet_info *vi, struct receive_queue *rq, @@ -949,15 +1039,11 @@ static struct sk_buff *receive_small(struct net_device *dev, { struct sk_buff *skb; struct bpf_prog *xdp_prog; - unsigned int xdp_headroom = (unsigned long)ctx; - unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom; + unsigned int header_offset = VIRTNET_RX_PAD; unsigned int headroom = vi->hdr_len + header_offset; unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); struct page *page = virt_to_head_page(buf); - unsigned int delta = 0; - struct page *xdp_page; - unsigned int metasize = 0; len -= vi->hdr_len; stats->bytes += len; @@ -977,57 +1063,9 @@ 
static struct sk_buff *receive_small(struct net_device *dev, rcu_read_lock(); xdp_prog = rcu_dereference(rq->xdp_prog); if (xdp_prog) { - struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset; - struct xdp_buff xdp; - void *orig_data; - u32 act; - - if (unlikely(hdr->hdr.gso_type)) - goto err_xdp; - - if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) { - int offset = buf - page_address(page) + header_offset; - unsigned int tlen = len + vi->hdr_len; - int num_buf = 1; - - xdp_headroom = virtnet_get_headroom(vi); - header_offset = VIRTNET_RX_PAD + xdp_headroom; - headroom = vi->hdr_len + header_offset; - buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - xdp_page = xdp_linearize_page(rq, &num_buf, page, - offset, header_offset, - &tlen); - if (!xdp_page) - goto err_xdp; - - buf = page_address(xdp_page); - put_page(page); - page = xdp_page; - } - - xdp_init_buff(&xdp, buflen, &rq->xdp_rxq); - xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len, - xdp_headroom, len, true); - orig_data = xdp.data; - - act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); - - switch (act) { - case VIRTNET_XDP_RES_PASS: - /* Recalculate length in case bpf program changed it */ - delta = orig_data - xdp.data; - len = xdp.data_end - xdp.data; - metasize = xdp.data - xdp.data_meta; - break; - - case VIRTNET_XDP_RES_CONSUMED: - rcu_read_unlock(); - goto xdp_xmit; - - case VIRTNET_XDP_RES_DROP: - goto err_xdp; - } + skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, ctx, len, xdp_xmit, stats); + rcu_read_unlock(); + return skb; } rcu_read_unlock(); @@ -1035,25 +1073,16 @@ static struct sk_buff *receive_small(struct net_device *dev, skb = build_skb(buf, buflen); if (!skb) goto err; - skb_reserve(skb, headroom - delta); + skb_reserve(skb, headroom); skb_put(skb, len); - if (!xdp_prog) { - buf += header_offset; - memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len); - } /* keep zeroed vnet hdr since XDP is loaded */ - - if (metasize) - skb_metadata_set(skb, metasize); + buf += header_offset; + memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len); return skb; -err_xdp: - rcu_read_unlock(); - stats->xdp_drops++; err: stats->drops++; put_page(page); -xdp_xmit: return NULL; } -- 2.32.0.3.g01195cf9f
Michael S. Tsirkin
2023-Mar-22 03:34 UTC
[PATCH net-next 0/8] virtio_net: refactor xdp codes
On Wed, Mar 22, 2023 at 11:03:00AM +0800, Xuan Zhuo wrote:
> Due to historical reasons, the implementation of XDP in virtio-net is relatively
> chaotic. For example, the processing of XDP actions has two copies of similar
> code. Such as page, xdp_page processing, etc.
>
> The purpose of this patch set is to refactor these code. Reduce the difficulty
> of subsequent maintenance. Subsequent developers will not introduce new bugs
> because of some complex logical relationships.
>
> In addition, the supporting to AF_XDP that I want to submit later will also need
> to reuse the logic of XDP, such as the processing of actions, I don't want to
> introduce a new similar code. In this way, I can reuse these codes in the
> future.
>
> Please review.
>
> Thanks.

I really want to see that code make progress though. Would it make sense to
merge this one through the virtio tree? Then you will have all the pieces in
one place and try to target next linux.

> Xuan Zhuo (8):
>   virtio_net: mergeable xdp: put old page immediately
>   virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
>   virtio_net: introduce virtnet_xdp_handler() to seprate the logic of
>     run xdp
>   virtio_net: separate the logic of freeing xdp shinfo
>   virtio_net: separate the logic of freeing the rest mergeable buf
>   virtio_net: auto release xdp shinfo
>   virtio_net: introduce receive_mergeable_xdp()
>   virtio_net: introduce receive_small_xdp()
>
>  drivers/net/virtio_net.c | 615 +++++++++++++++++++++++----------------
>  1 file changed, 357 insertions(+), 258 deletions(-)
>
> --
> 2.32.0.3.g01195cf9f