For historical reasons, the XDP implementation in virtio-net is rather messy. For example, the handling of XDP actions exists in two similar copies, and the page/xdp_page handling is duplicated in much the same way.

The purpose of this patch set is to refactor that code so that it becomes easier to maintain and so that later developers do not introduce new bugs because of convoluted logical relationships.

In addition, the AF_XDP support that I want to submit later will also need to reuse the XDP logic, such as the handling of actions, and I do not want to introduce yet another copy of similar code. With this refactoring I can reuse these helpers in the future.

Please review. Thanks.

v2:
1. re-split the series to make review more convenient

v1:
1. fix some uninitialized variables

Xuan Zhuo (15):
  virtio_net: mergeable xdp: put old page immediately
  virtio_net: introduce mergeable_xdp_get_buf()
  virtio_net: optimize mergeable_xdp_get_buf()
  virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
  virtio_net: separate the logic of freeing xdp shinfo
  virtio_net: separate the logic of freeing the rest mergeable buf
  virtio_net: virtnet_build_xdp_buff_mrg() auto release xdp shinfo
  virtio_net: introduce receive_mergeable_xdp()
  virtio_net: merge: remove skip_xdp
  virtio_net: introduce receive_small_xdp()
  virtio_net: small: remove the delta
  virtio_net: small: avoid double counting in xdp scenarios
  virtio_net: small: remove skip_xdp
  virtio_net: introduce receive_small_build_skb
  virtio_net: introduce virtnet_build_skb()

 drivers/net/virtio_net.c | 657 +++++++++++++++++++++++----------------
 1 file changed, 384 insertions(+), 273 deletions(-)

--
2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 01/15] virtio_net: mergeable xdp: put old page immediately
In the mergeable XDP path of virtio-net, the code always has to check whether two pages are in use and decide which one to release. This complicates the handling of the XDP actions and is easy to get wrong. Throughout the flow we currently follow these principles: * If xdp_page is used (PASS, TX, Redirect), then we release the old page. * In the drop case we release both: the old page obtained from buf is released inside err_xdp, and xdp_page has to be released by us. But in fact, when we allocate a new page, we can release the old page immediately. Then only one page is ever in use, and in the drop case we only need to release that new page. On the drop path, err_xdp releases the variable "page", so we only need to let "page" point to the new xdp_page in advance. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index e2560b6f7980..42435e762d72 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1245,6 +1245,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, if (!xdp_page) goto err_xdp; offset = VIRTIO_XDP_HEADROOM; + + put_page(page); + page = xdp_page; } else if (unlikely(headroom < virtnet_get_headroom(vi))) { xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM + sizeof(struct skb_shared_info)); @@ -1259,11 +1262,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, page_address(page) + offset, len); frame_sz = PAGE_SIZE; offset = VIRTIO_XDP_HEADROOM; - } else { - xdp_page = page; + + put_page(page); + page = xdp_page; } - data = page_address(xdp_page) + offset; + data = page_address(page) + offset; err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, &num_buf, &xdp_frags_truesz, stats); if (unlikely(err)) @@ -1278,8 +1282,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, if (unlikely(!head_skb)) goto err_xdp_frags; - if (unlikely(xdp_page != page)) - put_page(page); rcu_read_unlock(); return head_skb; case XDP_TX: @@ -1297,8 +1299,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, goto err_xdp_frags; } *xdp_xmit |= VIRTIO_XDP_TX; - if (unlikely(xdp_page != page)) - put_page(page); rcu_read_unlock(); goto xdp_xmit; case XDP_REDIRECT: @@ -1307,8 +1307,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, err = xdp_do_redirect(dev, &xdp, xdp_prog); if (err) goto err_xdp_frags; *xdp_xmit |= VIRTIO_XDP_REDIR; - if (unlikely(xdp_page != page)) - put_page(page); rcu_read_unlock(); goto xdp_xmit; default: @@ -1321,9 +1319,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, goto err_xdp_frags; } err_xdp_frags: - if (unlikely(xdp_page != page)) - __free_pages(xdp_page, 0); - if (xdp_buff_has_frags(&xdp)) { shinfo = xdp_get_shared_info_from_buff(&xdp); for (i = 0; i < shinfo->nr_frags; i++) { -- 2.32.0.3.g01195cf9f
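For readers skimming the series, the ownership rule that this patch establishes condenses to the sketch below. It is a simplified illustration based on the hunks above, not the literal driver code.

/* Whenever a replacement page is allocated for XDP, the old page is
 * dropped on the spot and "page" is re-pointed at the new one, so every
 * later path (including err_xdp) has exactly one page to release.
 */
xdp_page = xdp_linearize_page(rq, &num_buf, page, offset,
                              VIRTIO_XDP_HEADROOM, &len);
if (!xdp_page)
        goto err_xdp;   /* old "page" is still owned; err_xdp frees it */

put_page(page);         /* release the old page immediately */
page = xdp_page;        /* from here on only one page is in play */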
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 02/15] virtio_net: introduce mergeable_xdp_get_buf()
Separating the logic of preparation for xdp from receive_mergeable. The purpose of this is to simplify the logic of execution of XDP. The main logic here is that when headroom is insufficient, we need to allocate a new page and calculate offset. It should be noted that if there is new page, the variable page will refer to the new page. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 135 +++++++++++++++++++++++---------------- 1 file changed, 79 insertions(+), 56 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 42435e762d72..b8832b0cc215 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1162,6 +1162,81 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, return 0; } +static void *mergeable_xdp_get_buf(struct virtnet_info *vi, + struct receive_queue *rq, + struct bpf_prog *xdp_prog, + void *ctx, + unsigned int *frame_sz, + int *num_buf, + struct page **page, + int offset, + unsigned int *len, + struct virtio_net_hdr_mrg_rxbuf *hdr) +{ + unsigned int truesize = mergeable_ctx_to_truesize(ctx); + unsigned int headroom = mergeable_ctx_to_headroom(ctx); + struct page *xdp_page; + unsigned int xdp_room; + + /* Transient failure which in theory could occur if + * in-flight packets from before XDP was enabled reach + * the receive path after XDP is loaded. + */ + if (unlikely(hdr->hdr.gso_type)) + return NULL; + + /* Now XDP core assumes frag size is PAGE_SIZE, but buffers + * with headroom may add hole in truesize, which + * make their length exceed PAGE_SIZE. So we disabled the + * hole mechanism for xdp. See add_recvbuf_mergeable(). + */ + *frame_sz = truesize; + + /* This happens when headroom is not enough because + * of the buffer was prefilled before XDP is set. + * This should only happen for the first several packets. + * In fact, vq reset can be used here to help us clean up + * the prefilled buffers, but many existing devices do not + * support it, and we don't want to bother users who are + * using xdp normally. + */ + if (!xdp_prog->aux->xdp_has_frags && + (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) { + /* linearize data for XDP */ + xdp_page = xdp_linearize_page(rq, num_buf, + *page, offset, + VIRTIO_XDP_HEADROOM, + len); + *frame_sz = PAGE_SIZE; + + if (!xdp_page) + return NULL; + offset = VIRTIO_XDP_HEADROOM; + + put_page(*page); + *page = xdp_page; + } else if (unlikely(headroom < virtnet_get_headroom(vi))) { + xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM + + sizeof(struct skb_shared_info)); + if (*len + xdp_room > PAGE_SIZE) + return NULL; + + xdp_page = alloc_page(GFP_ATOMIC); + if (!xdp_page) + return NULL; + + memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM, + page_address(*page) + offset, *len); + *frame_sz = PAGE_SIZE; + offset = VIRTIO_XDP_HEADROOM; + + put_page(*page); + *page = xdp_page; + } + + return page_address(*page) + offset; +} + static struct sk_buff *receive_mergeable(struct net_device *dev, struct virtnet_info *vi, struct receive_queue *rq, @@ -1181,7 +1256,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, unsigned int headroom = mergeable_ctx_to_headroom(ctx); unsigned int tailroom = headroom ? 
sizeof(struct skb_shared_info) : 0; unsigned int room = SKB_DATA_ALIGN(headroom + tailroom); - unsigned int frame_sz, xdp_room; + unsigned int frame_sz; int err; head_skb = NULL; @@ -1211,63 +1286,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, u32 act; int i; - /* Transient failure which in theory could occur if - * in-flight packets from before XDP was enabled reach - * the receive path after XDP is loaded. - */ - if (unlikely(hdr->hdr.gso_type)) + data = mergeable_xdp_get_buf(vi, rq, xdp_prog, ctx, &frame_sz, + &num_buf, &page, offset, &len, hdr); + if (unlikely(!data)) goto err_xdp; - /* Now XDP core assumes frag size is PAGE_SIZE, but buffers - * with headroom may add hole in truesize, which - * make their length exceed PAGE_SIZE. So we disabled the - * hole mechanism for xdp. See add_recvbuf_mergeable(). - */ - frame_sz = truesize; - - /* This happens when headroom is not enough because - * of the buffer was prefilled before XDP is set. - * This should only happen for the first several packets. - * In fact, vq reset can be used here to help us clean up - * the prefilled buffers, but many existing devices do not - * support it, and we don't want to bother users who are - * using xdp normally. - */ - if (!xdp_prog->aux->xdp_has_frags && - (num_buf > 1 || headroom < virtnet_get_headroom(vi))) { - /* linearize data for XDP */ - xdp_page = xdp_linearize_page(rq, &num_buf, - page, offset, - VIRTIO_XDP_HEADROOM, - &len); - frame_sz = PAGE_SIZE; - - if (!xdp_page) - goto err_xdp; - offset = VIRTIO_XDP_HEADROOM; - - put_page(page); - page = xdp_page; - } else if (unlikely(headroom < virtnet_get_headroom(vi))) { - xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM + - sizeof(struct skb_shared_info)); - if (len + xdp_room > PAGE_SIZE) - goto err_xdp; - - xdp_page = alloc_page(GFP_ATOMIC); - if (!xdp_page) - goto err_xdp; - - memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM, - page_address(page) + offset, len); - frame_sz = PAGE_SIZE; - offset = VIRTIO_XDP_HEADROOM; - - put_page(page); - page = xdp_page; - } - - data = page_address(page) + offset; err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, &num_buf, &xdp_frags_truesz, stats); if (unlikely(err)) -- 2.32.0.3.g01195cf9f
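A condensed view of the new helper's contract, paraphrased from the hunks above for review convenience (a sketch, not the literal code):

/* On success, mergeable_xdp_get_buf() returns a pointer to the packet
 * data with XDP headroom guaranteed; *page, *len, *num_buf and
 * *frame_sz may have been updated to describe a newly allocated page
 * (the old page has already been put).  On failure it returns NULL and
 * *page still points to a valid page for the caller's error path.
 */
data = mergeable_xdp_get_buf(vi, rq, xdp_prog, ctx, &frame_sz,
                             &num_buf, &page, offset, &len, hdr);
if (unlikely(!data))
        goto err_xdp;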
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 03/15] virtio_net: optimize mergeable_xdp_get_buf()
To make review easier, the previous patch only moved the code and did not modify it. This patch applies some optimizations on top: * remove some repeated logic in this function. * add a fast check for the case where the buffer can be passed through without any allocation. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 29 ++++++++++++++--------------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index b8832b0cc215..6f66d73097b4 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1192,6 +1192,11 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi, */ *frame_sz = truesize; + if (likely(headroom >= virtnet_get_headroom(vi) && + (*num_buf == 1 || xdp_prog->aux->xdp_has_frags))) { + return page_address(*page) + offset; + } + /* This happens when headroom is not enough because * of the buffer was prefilled before XDP is set. * This should only happen for the first several packets. @@ -1200,22 +1205,15 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi, * support it, and we don't want to bother users who are * using xdp normally. */ - if (!xdp_prog->aux->xdp_has_frags && - (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) { + if (!xdp_prog->aux->xdp_has_frags) { /* linearize data for XDP */ xdp_page = xdp_linearize_page(rq, num_buf, *page, offset, VIRTIO_XDP_HEADROOM, len); - *frame_sz = PAGE_SIZE; - if (!xdp_page) return NULL; - offset = VIRTIO_XDP_HEADROOM; - - put_page(*page); - *page = xdp_page; - } else if (unlikely(headroom < virtnet_get_headroom(vi))) { + } else { xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM + sizeof(struct skb_shared_info)); if (*len + xdp_room > PAGE_SIZE) @@ -1227,14 +1225,15 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi, memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM, page_address(*page) + offset, *len); - *frame_sz = PAGE_SIZE; - offset = VIRTIO_XDP_HEADROOM; - - put_page(*page); - *page = xdp_page; } - return page_address(*page) + offset; + *frame_sz = PAGE_SIZE; + + put_page(*page); + + *page = xdp_page; + + return page_address(*page) + VIRTIO_XDP_HEADROOM; } static struct sk_buff *receive_mergeable(struct net_device *dev, -- 2.32.0.3.g01195cf9f
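The decision the optimized helper makes can be summarized as follows; this is a paraphrase of the code above, not an additional change:

/* mergeable_xdp_get_buf(), condensed:
 *
 *   enough headroom && (single buffer || prog has xdp_has_frags)
 *       -> fast path: use the buffer in place, no allocation;
 *   prog without xdp_has_frags (multi-buffer or short headroom)
 *       -> xdp_linearize_page(): copy everything into one new page;
 *   otherwise (frags supported but headroom too small)
 *       -> alloc_page() + memcpy() of the head buffer only.
 *
 * In the last two cases *frame_sz becomes PAGE_SIZE, the old page is
 * put, and *page is re-pointed at the new page.
 */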
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 04/15] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
At present, we have two similar logic to perform the XDP prog. Therefore, this patch separates the code of executing XDP, which is conducive to later maintenance. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 118 +++++++++++++++++++-------------------- 1 file changed, 58 insertions(+), 60 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 6f66d73097b4..264945dd7a35 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -789,6 +789,60 @@ static int virtnet_xdp_xmit(struct net_device *dev, return ret; } +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, + struct net_device *dev, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct xdp_frame *xdpf; + int err; + u32 act; + + act = bpf_prog_run_xdp(xdp_prog, xdp); + stats->xdp_packets++; + + switch (act) { + case XDP_PASS: + return act; + + case XDP_TX: + stats->xdp_tx++; + xdpf = xdp_convert_buff_to_frame(xdp); + if (unlikely(!xdpf)) { + netdev_dbg(dev, "convert buff to frame failed for xdp\n"); + return XDP_DROP; + } + + err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); + if (unlikely(!err)) { + xdp_return_frame_rx_napi(xdpf); + } else if (unlikely(err < 0)) { + trace_xdp_exception(dev, xdp_prog, act); + return XDP_DROP; + } + *xdp_xmit |= VIRTIO_XDP_TX; + return act; + + case XDP_REDIRECT: + stats->xdp_redirects++; + err = xdp_do_redirect(dev, xdp, xdp_prog); + if (err) + return XDP_DROP; + + *xdp_xmit |= VIRTIO_XDP_REDIR; + return act; + + default: + bpf_warn_invalid_xdp_action(dev, xdp_prog, act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(dev, xdp_prog, act); + fallthrough; + case XDP_DROP: + return XDP_DROP; + } +} + static unsigned int virtnet_get_headroom(struct virtnet_info *vi) { return vi->xdp_enabled ? 
VIRTIO_XDP_HEADROOM : 0; @@ -876,7 +930,6 @@ static struct sk_buff *receive_small(struct net_device *dev, struct page *page = virt_to_head_page(buf); unsigned int delta = 0; struct page *xdp_page; - int err; unsigned int metasize = 0; len -= vi->hdr_len; @@ -898,7 +951,6 @@ static struct sk_buff *receive_small(struct net_device *dev, xdp_prog = rcu_dereference(rq->xdp_prog); if (xdp_prog) { struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset; - struct xdp_frame *xdpf; struct xdp_buff xdp; void *orig_data; u32 act; @@ -931,8 +983,8 @@ static struct sk_buff *receive_small(struct net_device *dev, xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len, xdp_headroom, len, true); orig_data = xdp.data; - act = bpf_prog_run_xdp(xdp_prog, &xdp); - stats->xdp_packets++; + + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); switch (act) { case XDP_PASS: @@ -942,35 +994,10 @@ static struct sk_buff *receive_small(struct net_device *dev, metasize = xdp.data - xdp.data_meta; break; case XDP_TX: - stats->xdp_tx++; - xdpf = xdp_convert_buff_to_frame(&xdp); - if (unlikely(!xdpf)) - goto err_xdp; - err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); - if (unlikely(!err)) { - xdp_return_frame_rx_napi(xdpf); - } else if (unlikely(err < 0)) { - trace_xdp_exception(vi->dev, xdp_prog, act); - goto err_xdp; - } - *xdp_xmit |= VIRTIO_XDP_TX; - rcu_read_unlock(); - goto xdp_xmit; case XDP_REDIRECT: - stats->xdp_redirects++; - err = xdp_do_redirect(dev, &xdp, xdp_prog); - if (err) - goto err_xdp; - *xdp_xmit |= VIRTIO_XDP_REDIR; rcu_read_unlock(); goto xdp_xmit; default: - bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act); - fallthrough; - case XDP_ABORTED: - trace_xdp_exception(vi->dev, xdp_prog, act); - goto err_xdp; - case XDP_DROP: goto err_xdp; } } @@ -1278,7 +1305,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, if (xdp_prog) { unsigned int xdp_frags_truesz = 0; struct skb_shared_info *shinfo; - struct xdp_frame *xdpf; struct page *xdp_page; struct xdp_buff xdp; void *data; @@ -1295,8 +1321,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, if (unlikely(err)) goto err_xdp_frags; - act = bpf_prog_run_xdp(xdp_prog, &xdp); - stats->xdp_packets++; + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); switch (act) { case XDP_PASS: @@ -1307,38 +1332,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, rcu_read_unlock(); return head_skb; case XDP_TX: - stats->xdp_tx++; - xdpf = xdp_convert_buff_to_frame(&xdp); - if (unlikely(!xdpf)) { - netdev_dbg(dev, "convert buff to frame failed for xdp\n"); - goto err_xdp_frags; - } - err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); - if (unlikely(!err)) { - xdp_return_frame_rx_napi(xdpf); - } else if (unlikely(err < 0)) { - trace_xdp_exception(vi->dev, xdp_prog, act); - goto err_xdp_frags; - } - *xdp_xmit |= VIRTIO_XDP_TX; - rcu_read_unlock(); - goto xdp_xmit; case XDP_REDIRECT: - stats->xdp_redirects++; - err = xdp_do_redirect(dev, &xdp, xdp_prog); - if (err) - goto err_xdp_frags; - *xdp_xmit |= VIRTIO_XDP_REDIR; rcu_read_unlock(); goto xdp_xmit; default: - bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act); - fallthrough; - case XDP_ABORTED: - trace_xdp_exception(vi->dev, xdp_prog, act); - fallthrough; - case XDP_DROP: - goto err_xdp_frags; + break; } err_xdp_frags: if (xdp_buff_has_frags(&xdp)) { -- 2.32.0.3.g01195cf9f
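Both receive paths now consume the handler in the same way; the caller-side pattern boils down to the following sketch (simplified from the diff, not the literal code):

act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
switch (act) {
case XDP_PASS:
        /* build an skb from the (possibly rewritten) xdp_buff */
        break;
case XDP_TX:
case XDP_REDIRECT:
        /* frame already consumed; stats and *xdp_xmit were updated
         * inside the handler */
        return NULL;
default:
        /* XDP_DROP, which also covers aborted and invalid actions:
         * free the buffers and count the drop */
        break;
}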
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 05/15] virtio_net: separate the logic of freeing xdp shinfo
This patch introduces a new function that releases the xdp shinfo. A subsequent patch will reuse this function. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 27 ++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 264945dd7a35..b4eb083ebf55 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -789,6 +789,21 @@ static int virtnet_xdp_xmit(struct net_device *dev, return ret; } +static void put_xdp_frags(struct xdp_buff *xdp) +{ + struct skb_shared_info *shinfo; + struct page *xdp_page; + int i; + + if (xdp_buff_has_frags(xdp)) { + shinfo = xdp_get_shared_info_from_buff(xdp); + for (i = 0; i < shinfo->nr_frags; i++) { + xdp_page = skb_frag_page(&shinfo->frags[i]); + put_page(xdp_page); + } + } +} + static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, struct net_device *dev, unsigned int *xdp_xmit, @@ -1304,12 +1319,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, xdp_prog = rcu_dereference(rq->xdp_prog); if (xdp_prog) { unsigned int xdp_frags_truesz = 0; - struct skb_shared_info *shinfo; - struct page *xdp_page; struct xdp_buff xdp; void *data; u32 act; - int i; data = mergeable_xdp_get_buf(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page, offset, &len, hdr); @@ -1339,14 +1351,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, break; } err_xdp_frags: - if (xdp_buff_has_frags(&xdp)) { - shinfo = xdp_get_shared_info_from_buff(&xdp); - for (i = 0; i < shinfo->nr_frags; i++) { - xdp_page = skb_frag_page(&shinfo->frags[i]); - put_page(xdp_page); - } - } - + put_xdp_frags(&xdp); goto err_xdp; } rcu_read_unlock(); -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 06/15] virtio_net: separate the logic of freeing the rest mergeable buf
This patch introduces a new function that frees the remaining mergeable buffers. A subsequent patch will reuse this function. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 36 ++++++++++++++++++++++++------------ 1 file changed, 24 insertions(+), 12 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index b4eb083ebf55..5f37a1cef61e 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1067,6 +1067,28 @@ static struct sk_buff *receive_big(struct net_device *dev, return NULL; } +static void mergeable_buf_free(struct receive_queue *rq, int num_buf, + struct net_device *dev, + struct virtnet_rq_stats *stats) +{ + struct page *page; + void *buf; + int len; + + while (num_buf-- > 1) { + buf = virtqueue_get_buf(rq->vq, &len); + if (unlikely(!buf)) { + pr_debug("%s: rx error: %d buffers missing\n", + dev->name, num_buf); + dev->stats.rx_length_errors++; + break; + } + stats->bytes += len; + page = virt_to_head_page(buf); + put_page(page); + } +} + /* Why not use xdp_build_skb_from_frame() ? * XDP core assumes that xdp frags are PAGE_SIZE in length, while in * virtio-net there are 2 points that do not match its requirements: @@ -1427,18 +1449,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, stats->xdp_drops++; err_skb: put_page(page); - while (num_buf-- > 1) { - buf = virtqueue_get_buf(rq->vq, &len); - if (unlikely(!buf)) { - pr_debug("%s: rx error: %d buffers missing\n", - dev->name, num_buf); - dev->stats.rx_length_errors++; - break; - } - stats->bytes += len; - page = virt_to_head_page(buf); - put_page(page); - } + mergeable_buf_free(rq, num_buf, dev, stats); + err_buf: stats->drops++; dev_kfree_skb(head_skb); -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 07/15] virtio_net: virtnet_build_xdp_buff_mrg() auto release xdp shinfo
Make virtnet_build_xdp_buff_mrg() release the xdp shinfo itself on failure, so that the caller no longer needs to take care of the xdp shinfo. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio_net.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 5f37a1cef61e..971320c17c73 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1190,7 +1190,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, dev->name, *num_buf, virtio16_to_cpu(vi->vdev, hdr->num_buffers)); dev->stats.rx_length_errors++; - return -EINVAL; + goto err; } stats->bytes += len; @@ -1209,7 +1209,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, pr_debug("%s: rx error: len %u exceeds truesize %lu\n", dev->name, len, (unsigned long)(truesize - room)); dev->stats.rx_length_errors++; - return -EINVAL; + goto err; } frag = &shinfo->frags[shinfo->nr_frags++]; @@ -1224,6 +1224,10 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, *xdp_frags_truesize = xdp_frags_truesz; return 0; + +err: + put_xdp_frags(xdp); + return -EINVAL; } static void *mergeable_xdp_get_buf(struct virtnet_info *vi, @@ -1353,7 +1357,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, &num_buf, &xdp_frags_truesz, stats); if (unlikely(err)) - goto err_xdp_frags; + goto err_xdp; act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 08/15] virtio_net: introduce receive_mergeable_xdp()
The purpose of this patch is to simplify the receive_mergeable(). Separate all the logic of XDP into a function. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 105 ++++++++++++++++++++++++--------------- 1 file changed, 64 insertions(+), 41 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 971320c17c73..520d48088455 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1304,6 +1304,66 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi, return page_address(*page) + VIRTIO_XDP_HEADROOM; } +static struct sk_buff *receive_mergeable_xdp(struct net_device *dev, + struct virtnet_info *vi, + struct receive_queue *rq, + struct bpf_prog *xdp_prog, + void *buf, + void *ctx, + unsigned int len, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct virtio_net_hdr_mrg_rxbuf *hdr = buf; + int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers); + struct page *page = virt_to_head_page(buf); + int offset = buf - page_address(page); + unsigned int xdp_frags_truesz = 0; + struct sk_buff *head_skb; + unsigned int frame_sz; + struct xdp_buff xdp; + void *data; + u32 act; + int err; + + data = mergeable_xdp_get_buf(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page, + offset, &len, hdr); + if (unlikely(!data)) + goto err_xdp; + + err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, + &num_buf, &xdp_frags_truesz, stats); + if (unlikely(err)) + goto err_xdp; + + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); + + switch (act) { + case XDP_PASS: + head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz); + if (unlikely(!head_skb)) + break; + return head_skb; + + case XDP_TX: + case XDP_REDIRECT: + return NULL; + + default: + break; + } + + put_xdp_frags(&xdp); + +err_xdp: + put_page(page); + mergeable_buf_free(rq, num_buf, dev, stats); + + stats->xdp_drops++; + stats->drops++; + return NULL; +} + static struct sk_buff *receive_mergeable(struct net_device *dev, struct virtnet_info *vi, struct receive_queue *rq, @@ -1323,8 +1383,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, unsigned int headroom = mergeable_ctx_to_headroom(ctx); unsigned int tailroom = headroom ? 
sizeof(struct skb_shared_info) : 0; unsigned int room = SKB_DATA_ALIGN(headroom + tailroom); - unsigned int frame_sz; - int err; head_skb = NULL; stats->bytes += len - vi->hdr_len; @@ -1344,41 +1402,10 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, rcu_read_lock(); xdp_prog = rcu_dereference(rq->xdp_prog); if (xdp_prog) { - unsigned int xdp_frags_truesz = 0; - struct xdp_buff xdp; - void *data; - u32 act; - - data = mergeable_xdp_get_buf(vi, rq, xdp_prog, ctx, &frame_sz, - &num_buf, &page, offset, &len, hdr); - if (unlikely(!data)) - goto err_xdp; - - err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz, - &num_buf, &xdp_frags_truesz, stats); - if (unlikely(err)) - goto err_xdp; - - act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); - - switch (act) { - case XDP_PASS: - head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz); - if (unlikely(!head_skb)) - goto err_xdp_frags; - - rcu_read_unlock(); - return head_skb; - case XDP_TX: - case XDP_REDIRECT: - rcu_read_unlock(); - goto xdp_xmit; - default: - break; - } -err_xdp_frags: - put_xdp_frags(&xdp); - goto err_xdp; + head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, buf, ctx, + len, xdp_xmit, stats); + rcu_read_unlock(); + return head_skb; } rcu_read_unlock(); @@ -1448,9 +1475,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len); return head_skb; -err_xdp: - rcu_read_unlock(); - stats->xdp_drops++; err_skb: put_page(page); mergeable_buf_free(rq, num_buf, dev, stats); @@ -1458,7 +1482,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, err_buf: stats->drops++; dev_kfree_skb(head_skb); -xdp_xmit: return NULL; } -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 09/15] virtio_net: merge: remove skip_xdp
Now that the XDP processing logic in the mergeable path is simple, we can remove skip_xdp. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 23 ++++++++++------------- 1 file changed, 10 insertions(+), 13 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 520d48088455..55ecd1ce8a1e 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1378,7 +1378,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, struct page *page = virt_to_head_page(buf); int offset = buf - page_address(page); struct sk_buff *head_skb, *curr_skb; - struct bpf_prog *xdp_prog; unsigned int truesize = mergeable_ctx_to_truesize(ctx); unsigned int headroom = mergeable_ctx_to_headroom(ctx); unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0; @@ -1394,22 +1393,20 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, goto err_skb; } - if (likely(!vi->xdp_enabled)) { - xdp_prog = NULL; - goto skip_xdp; - } + if (unlikely(vi->xdp_enabled)) { + struct bpf_prog *xdp_prog; - rcu_read_lock(); - xdp_prog = rcu_dereference(rq->xdp_prog); - if (xdp_prog) { - head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, buf, ctx, - len, xdp_xmit, stats); + rcu_read_lock(); + xdp_prog = rcu_dereference(rq->xdp_prog); + if (xdp_prog) { + head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, buf, ctx, + len, xdp_xmit, stats); + rcu_read_unlock(); + return head_skb; + } rcu_read_unlock(); - return head_skb; } - rcu_read_unlock(); -skip_xdp: head_skb = page_to_skb(vi, rq, page, offset, len, truesize, headroom); curr_skb = head_skb; -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 10/15] virtio_net: introduce receive_small_xdp()
The purpose of this patch is to simplify the receive_small(). Separate all the logic of XDP of small into a function. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 165 ++++++++++++++++++++++++--------------- 1 file changed, 100 insertions(+), 65 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 55ecd1ce8a1e..3b0f13ab6ccb 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -927,6 +927,99 @@ static struct page *xdp_linearize_page(struct receive_queue *rq, return NULL; } +static struct sk_buff *receive_small_xdp(struct net_device *dev, + struct virtnet_info *vi, + struct receive_queue *rq, + struct bpf_prog *xdp_prog, + void *buf, + unsigned int xdp_headroom, + unsigned int len, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom; + unsigned int headroom = vi->hdr_len + header_offset; + struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset; + struct page *page = virt_to_head_page(buf); + struct page *xdp_page; + unsigned int buflen; + struct xdp_buff xdp; + struct sk_buff *skb; + unsigned int delta = 0; + unsigned int metasize = 0; + void *orig_data; + u32 act; + + if (unlikely(hdr->hdr.gso_type)) + goto err_xdp; + + buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + + if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) { + int offset = buf - page_address(page) + header_offset; + unsigned int tlen = len + vi->hdr_len; + int num_buf = 1; + + xdp_headroom = virtnet_get_headroom(vi); + header_offset = VIRTNET_RX_PAD + xdp_headroom; + headroom = vi->hdr_len + header_offset; + buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + xdp_page = xdp_linearize_page(rq, &num_buf, page, + offset, header_offset, + &tlen); + if (!xdp_page) + goto err_xdp; + + buf = page_address(xdp_page); + put_page(page); + page = xdp_page; + } + + xdp_init_buff(&xdp, buflen, &rq->xdp_rxq); + xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len, + xdp_headroom, len, true); + orig_data = xdp.data; + + act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); + + switch (act) { + case XDP_PASS: + /* Recalculate length in case bpf program changed it */ + delta = orig_data - xdp.data; + len = xdp.data_end - xdp.data; + metasize = xdp.data - xdp.data_meta; + break; + + case XDP_TX: + case XDP_REDIRECT: + goto xdp_xmit; + + default: + goto err_xdp; + } + + skb = build_skb(buf, buflen); + if (!skb) + goto err; + + skb_reserve(skb, headroom - delta); + skb_put(skb, len); + if (metasize) + skb_metadata_set(skb, metasize); + + return skb; + +err_xdp: + stats->xdp_drops++; +err: + stats->drops++; + put_page(page); +xdp_xmit: + return NULL; +} + static struct sk_buff *receive_small(struct net_device *dev, struct virtnet_info *vi, struct receive_queue *rq, @@ -943,9 +1036,6 @@ static struct sk_buff *receive_small(struct net_device *dev, unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); struct page *page = virt_to_head_page(buf); - unsigned int delta = 0; - struct page *xdp_page; - unsigned int metasize = 0; len -= vi->hdr_len; stats->bytes += len; @@ -965,56 +1055,10 @@ static struct sk_buff *receive_small(struct net_device *dev, rcu_read_lock(); xdp_prog = rcu_dereference(rq->xdp_prog); if (xdp_prog) { - struct virtio_net_hdr_mrg_rxbuf 
*hdr = buf + header_offset; - struct xdp_buff xdp; - void *orig_data; - u32 act; - - if (unlikely(hdr->hdr.gso_type)) - goto err_xdp; - - if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) { - int offset = buf - page_address(page) + header_offset; - unsigned int tlen = len + vi->hdr_len; - int num_buf = 1; - - xdp_headroom = virtnet_get_headroom(vi); - header_offset = VIRTNET_RX_PAD + xdp_headroom; - headroom = vi->hdr_len + header_offset; - buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - xdp_page = xdp_linearize_page(rq, &num_buf, page, - offset, header_offset, - &tlen); - if (!xdp_page) - goto err_xdp; - - buf = page_address(xdp_page); - put_page(page); - page = xdp_page; - } - - xdp_init_buff(&xdp, buflen, &rq->xdp_rxq); - xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len, - xdp_headroom, len, true); - orig_data = xdp.data; - - act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); - - switch (act) { - case XDP_PASS: - /* Recalculate length in case bpf program changed it */ - delta = orig_data - xdp.data; - len = xdp.data_end - xdp.data; - metasize = xdp.data - xdp.data_meta; - break; - case XDP_TX: - case XDP_REDIRECT: - rcu_read_unlock(); - goto xdp_xmit; - default: - goto err_xdp; - } + skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, xdp_headroom, + len, xdp_xmit, stats); + rcu_read_unlock(); + return skb; } rcu_read_unlock(); @@ -1022,25 +1066,16 @@ static struct sk_buff *receive_small(struct net_device *dev, skb = build_skb(buf, buflen); if (!skb) goto err; - skb_reserve(skb, headroom - delta); + skb_reserve(skb, headroom); skb_put(skb, len); - if (!xdp_prog) { - buf += header_offset; - memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len); - } /* keep zeroed vnet hdr since XDP is loaded */ - - if (metasize) - skb_metadata_set(skb, metasize); + buf += header_offset; + memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len); return skb; -err_xdp: - rcu_read_unlock(); - stats->xdp_drops++; err: stats->drops++; put_page(page); -xdp_xmit: return NULL; } -- 2.32.0.3.g01195cf9f
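The resulting division of responsibility, paraphrased from the code above (a summary for review, not new behaviour):

/* receive_small_xdp(), condensed contract:
 *   XDP_PASS        -> returns a freshly built skb (NULL plus drop
 *                      accounting if build_skb() fails);
 *   XDP_TX/REDIRECT -> returns NULL, the buffer was consumed by the
 *                      handler and *xdp_xmit was flagged;
 *   anything else   -> returns NULL, the page is put and both
 *                      xdp_drops and drops are counted.
 * receive_small() simply returns whatever the helper returns.
 */
skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, xdp_headroom,
                        len, xdp_xmit, stats);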
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 11/15] virtio_net: small: remove the delta
In the XDP_PASS case, skb_reserve() used "delta" so that the skb-building code could stay compatible with the non-XDP path. Now that this code is no longer shared between the xdp and non-xdp paths, we can remove that logic. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 3b0f13ab6ccb..b8ec642899c9 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -945,9 +945,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev, unsigned int buflen; struct xdp_buff xdp; struct sk_buff *skb; - unsigned int delta = 0; unsigned int metasize = 0; - void *orig_data; u32 act; if (unlikely(hdr->hdr.gso_type)) @@ -980,14 +978,12 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev, xdp_init_buff(&xdp, buflen, &rq->xdp_rxq); xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len, xdp_headroom, len, true); - orig_data = xdp.data; act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats); switch (act) { case XDP_PASS: /* Recalculate length in case bpf program changed it */ - delta = orig_data - xdp.data; len = xdp.data_end - xdp.data; metasize = xdp.data - xdp.data_meta; break; @@ -1004,7 +1000,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev, if (!skb) goto err; - skb_reserve(skb, headroom - delta); + skb_reserve(skb, xdp.data - buf); skb_put(skb, len); if (metasize) skb_metadata_set(skb, metasize); -- 2.32.0.3.g01195cf9f
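The equivalence relied on here, spelled out (assuming orig_data was captured immediately after xdp_prepare_buff(), i.e. orig_data == buf + headroom, as in the code being removed):

/* delta            == orig_data - xdp.data      (after the prog ran)
 * headroom         == orig_data - buf           (initial data offset)
 *
 * headroom - delta == (orig_data - buf) - (orig_data - xdp.data)
 *                  == xdp.data - buf
 *
 * so skb_reserve(skb, xdp.data - buf) preserves the old behaviour while
 * dropping the "delta" bookkeeping entirely.
 */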
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 12/15] virtio_net: small: avoid double counting in xdp scenarios
Avoid recalculating some variables (headroom and so on) when processing XDP: they are now computed only on the non-XDP path, since the XDP path derives its own values. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index b8ec642899c9..f832ab8a3e6e 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1027,11 +1027,10 @@ static struct sk_buff *receive_small(struct net_device *dev, struct sk_buff *skb; struct bpf_prog *xdp_prog; unsigned int xdp_headroom = (unsigned long)ctx; - unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom; - unsigned int headroom = vi->hdr_len + header_offset; - unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); struct page *page = virt_to_head_page(buf); + unsigned int header_offset; + unsigned int headroom; + unsigned int buflen; len -= vi->hdr_len; stats->bytes += len; @@ -1059,6 +1058,11 @@ static struct sk_buff *receive_small(struct net_device *dev, rcu_read_unlock(); skip_xdp: + header_offset = VIRTNET_RX_PAD + xdp_headroom; + headroom = vi->hdr_len + header_offset; + buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + skb = build_skb(buf, buflen); if (!skb) goto err; -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 13/15] virtio_net: small: remove skip_xdp
Because the skb build code is not shared between xdp and non-xdp, and the xdp code in receive_small() is simpler, so "skip_xdp" is not needed. We can remove it. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 26 ++++++++++++-------------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index f832ab8a3e6e..a76db3d19eb7 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1024,13 +1024,12 @@ static struct sk_buff *receive_small(struct net_device *dev, unsigned int *xdp_xmit, struct virtnet_rq_stats *stats) { - struct sk_buff *skb; - struct bpf_prog *xdp_prog; unsigned int xdp_headroom = (unsigned long)ctx; struct page *page = virt_to_head_page(buf); unsigned int header_offset; unsigned int headroom; unsigned int buflen; + struct sk_buff *skb; len -= vi->hdr_len; stats->bytes += len; @@ -1042,22 +1041,21 @@ static struct sk_buff *receive_small(struct net_device *dev, goto err; } - if (likely(!vi->xdp_enabled)) { - xdp_prog = NULL; - goto skip_xdp; - } + if (unlikely(vi->xdp_enabled)) { + struct bpf_prog *xdp_prog; - rcu_read_lock(); - xdp_prog = rcu_dereference(rq->xdp_prog); - if (xdp_prog) { - skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, xdp_headroom, - len, xdp_xmit, stats); + rcu_read_lock(); + xdp_prog = rcu_dereference(rq->xdp_prog); + if (xdp_prog) { + skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, + xdp_headroom, len, xdp_xmit, + stats); + rcu_read_unlock(); + return skb; + } rcu_read_unlock(); - return skb; } - rcu_read_unlock(); -skip_xdp: header_offset = VIRTNET_RX_PAD + xdp_headroom; headroom = vi->hdr_len + header_offset; buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 14/15] virtio_net: introduce receive_small_build_skb
Simplifying receive_small() function. Bringing the logic relating to build_skb together. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 48 ++++++++++++++++++++++++++-------------- 1 file changed, 31 insertions(+), 17 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index a76db3d19eb7..c526852a8547 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -927,6 +927,34 @@ static struct page *xdp_linearize_page(struct receive_queue *rq, return NULL; } +static struct sk_buff *receive_small_build_skb(struct virtnet_info *vi, + unsigned int xdp_headroom, + void *buf, + unsigned int len) +{ + unsigned int header_offset; + unsigned int headroom; + unsigned int buflen; + struct sk_buff *skb; + + header_offset = VIRTNET_RX_PAD + xdp_headroom; + headroom = vi->hdr_len + header_offset; + buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + + skb = build_skb(buf, buflen); + if (!skb) + return NULL; + + skb_reserve(skb, headroom); + skb_put(skb, len); + + buf += header_offset; + memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len); + + return skb; +} + static struct sk_buff *receive_small_xdp(struct net_device *dev, struct virtnet_info *vi, struct receive_queue *rq, @@ -1026,9 +1054,6 @@ static struct sk_buff *receive_small(struct net_device *dev, { unsigned int xdp_headroom = (unsigned long)ctx; struct page *page = virt_to_head_page(buf); - unsigned int header_offset; - unsigned int headroom; - unsigned int buflen; struct sk_buff *skb; len -= vi->hdr_len; @@ -1056,20 +1081,9 @@ static struct sk_buff *receive_small(struct net_device *dev, rcu_read_unlock(); } - header_offset = VIRTNET_RX_PAD + xdp_headroom; - headroom = vi->hdr_len + header_offset; - buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - - skb = build_skb(buf, buflen); - if (!skb) - goto err; - skb_reserve(skb, headroom); - skb_put(skb, len); - - buf += header_offset; - memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len); - return skb; + skb = receive_small_build_skb(vi, xdp_headroom, buf, len); + if (likely(skb)) + return skb; err: stats->drops++; -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Apr-27 03:05 UTC
[PATCH net-next v4 15/15] virtio_net: introduce virtnet_build_skb()
This logic is used in multiple places, now we separate it into a helper. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 34 +++++++++++++++++++++------------- 1 file changed, 21 insertions(+), 13 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index c526852a8547..256bcd0890a2 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -443,6 +443,22 @@ static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx) return (unsigned long)mrg_ctx & ((1 << MRG_CTX_HEADER_SHIFT) - 1); } +static struct sk_buff *virtnet_build_skb(void *buf, unsigned int buflen, + unsigned int headroom, + unsigned int len) +{ + struct sk_buff *skb; + + skb = build_skb(buf, buflen); + if (unlikely(!skb)) + return NULL; + + skb_reserve(skb, headroom); + skb_put(skb, len); + + return skb; +} + /* Called from bottom half context */ static struct sk_buff *page_to_skb(struct virtnet_info *vi, struct receive_queue *rq, @@ -476,13 +492,10 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi, /* copy small packet so we can reuse these pages */ if (!NET_IP_ALIGN && len > GOOD_COPY_LEN && tailroom >= shinfo_size) { - skb = build_skb(buf, truesize); + skb = virtnet_build_skb(buf, truesize, p - buf, len); if (unlikely(!skb)) return NULL; - skb_reserve(skb, p - buf); - skb_put(skb, len); - page = (struct page *)page->private; if (page) give_pages(rq, page); @@ -942,13 +955,10 @@ static struct sk_buff *receive_small_build_skb(struct virtnet_info *vi, buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - skb = build_skb(buf, buflen); - if (!skb) + skb = virtnet_build_skb(buf, buflen, headroom, len); + if (unlikely(!skb)) return NULL; - skb_reserve(skb, headroom); - skb_put(skb, len); - buf += header_offset; memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len); @@ -1024,12 +1034,10 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev, goto err_xdp; } - skb = build_skb(buf, buflen); - if (!skb) + skb = virtnet_build_skb(buf, buflen, xdp.data - buf, len); + if (unlikely(!skb)) goto err; - skb_reserve(skb, xdp.data - buf); - skb_put(skb, len); if (metasize) skb_metadata_set(skb, metasize); -- 2.32.0.3.g01195cf9f
Paolo Abeni
2023-Apr-27 06:55 UTC
[PATCH net-next v4 00/15] virtio_net: refactor xdp codes
On Thu, 2023-04-27 at 11:05 +0800, Xuan Zhuo wrote:
> For historical reasons, the XDP implementation in virtio-net is rather
> messy. For example, the handling of XDP actions exists in two similar
> copies, and the page/xdp_page handling is duplicated in much the same way.
>
> The purpose of this patch set is to refactor that code so that it becomes
> easier to maintain and so that later developers do not introduce new bugs
> because of convoluted logical relationships.
>
> In addition, the AF_XDP support that I want to submit later will also need
> to reuse the XDP logic, such as the handling of actions, and I do not want
> to introduce yet another copy of similar code. With this refactoring I can
> reuse these helpers in the future.

## Form letter - net-next-closed

The merge window for v6.3 has begun and therefore net-next is closed for new
drivers, features, code refactoring and optimizations. We are currently
accepting bug fixes only.

Please repost when net-next reopens after May 8th.

RFC patches sent for review only are obviously welcome at any time.

--
pw-bot: defer