2023 Mar 28
1
[PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo
...ns(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 72b9d6ee4024..09aed60e2f51 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -798,6 +798,21 @@ static int virtnet_xdp_xmit(struct net_device *dev,
return ret;
}
+static void put_xdp_frags(struct xdp_buff *xdp)
+{
+ struct skb_shared_info *shinfo;
+ struct page *xdp_page;
+ int i;
+
+ if (xdp_buff_has_frags(xdp)) {
+ shinfo = xdp_get_shared_info_from_buff(xdp);
+ for (i = 0; i < shinfo->nr_frags; i++) {
+ xdp_page = skb_frag_page(&shinfo->frags[i]);
+ put_page(xdp_page);
+ }
+ }...
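The snippet cuts off at the inner loop; given the braces already opened, the helper plausibly closes as below (a minimal sketch of the complete function, with the tail assumed since it is elided above):

static void put_xdp_frags(struct xdp_buff *xdp)
{
	struct skb_shared_info *shinfo;
	struct page *xdp_page;
	int i;

	/* Only multi-buffer XDP carries fragments in the shared info. */
	if (xdp_buff_has_frags(xdp)) {
		shinfo = xdp_get_shared_info_from_buff(xdp);
		/* Drop one page reference per fragment. */
		for (i = 0; i < shinfo->nr_frags; i++) {
			xdp_page = skb_frag_page(&shinfo->frags[i]);
			put_page(xdp_page);
		}
	}
}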
2018 Mar 01
3
[PATCH net-next 0/2] virtio-net: re-enable XDP_REDIRECT for mergeable buffer
...able() case"). Main concerns are:
>>
>> - not enough tailroom was reserved which breaks cpumap
> To address this at a more fundamental level, I would suggest that we/you
> instead extend XDP to know its buffer's "frame" size/end. (The
> assumption used to be xdp_buff->data_hard_start + PAGE_SIZE, but
> ixgbe+virtio_net broke that assumption).
>
> It should actually be fairly easy to implement:
> * Simply extend xdp_buff with a "data_hard_end" pointer.
Right, and then cpumap can warn and drop packets with insufficient
tailroom. But i...
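For reference, mainline later grew exactly this notion: xdp_buff gained a frame_sz field and include/net/xdp.h an xdp_data_hard_end() helper. A sketch of the tailroom check a consumer like cpumap could then perform (the check itself is illustrative, not from this thread):

/* Sketch: reject buffers whose tailroom cannot hold 'need' extra bytes.
 * data_hard_end is data_hard_start + frame_sz minus room reserved for
 * the skb_shared_info, mirroring the mainline xdp_data_hard_end(). */
static bool xdp_has_tailroom(const struct xdp_buff *xdp, unsigned int need)
{
	void *hard_end = xdp->data_hard_start + xdp->frame_sz -
			 SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

	return xdp->data_end + need <= hard_end;
}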
2023 Mar 28
1
[PATCH net-next 6/8] virtio_net: auto release xdp shinfo
virtnet_build_xdp_buff_mrg() and virtnet_xdp_handler() now release the
xdp shinfo automatically, so the caller no longer needs to take care of the xdp shinfo.
Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
drivers/net/virtio_net.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git...
2023 Apr 03
1
[PATCH net-next 6/8] virtio_net: auto release xdp shinfo
On 2023/3/28 20:04, Xuan Zhuo wrote:
> virtnet_build_xdp_buff_mrg() and virtnet_xdp_handler() now release the
I think you meant virtnet_xdp_handler() actually?
> xdp shinfo automatically, so the caller no longer needs to take care of the xdp shinfo.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
> ---
> drivers/net/virtio_net.c | 29 ++++++++++++++...
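For illustration, the ownership rule the patch establishes can be reduced to this pattern (apart from put_xdp_frags() from patch 4/8, the names here are hypothetical; the real functions take many more parameters):

/* Sketch of "callee releases on failure": the builder frees the frags
 * itself before reporting an error, so callers only check the return
 * value and never touch the shinfo. */
static int build_xdp_mrg_sketch(struct xdp_buff *xdp)
{
	int err = collect_frags(xdp);	/* hypothetical stand-in */

	if (err)
		put_xdp_frags(xdp);	/* release before propagating */
	return err;
}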
2018 Sep 06
1
[PATCH net-next 10/11] tap: accept an array of XDP buffs through sendmsg()
...t/tap.c b/drivers/net/tap.c
> index 7996ed7cbf18..50eb7bf22225 100644
> --- a/drivers/net/tap.c
> +++ b/drivers/net/tap.c
> @@ -1146,14 +1146,83 @@ static const struct file_operations tap_fops = {
> #endif
> };
>
> +static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp)
> +{
> + struct virtio_net_hdr *gso = xdp->data_hard_start + sizeof(int);
> + int buflen = *(int *)xdp->data_hard_start;
> + int vnet_hdr_len = 0;
> + struct tap_dev *tap;
> + struct sk_buff *skb;
> + int err, depth;
> +
> + if (q->flags & IFF_VNET_HDR)...
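As the two loads at the top show, tap expects metadata stashed at the head of the buffer: an int buflen at data_hard_start, immediately followed by the virtio_net_hdr. The same layout appears formalized as struct tun_xdp_hdr in the 2018 Nov tun_xdp_one() snippet further down. Purely for illustration, the layout being parsed:

/* Illustrative view of the header tap_get_user_xdp() reads via raw
 * offsets; the struct itself is not in the patch. */
struct tap_xdp_head_sketch {
	int buflen;			/* *(int *)xdp->data_hard_start */
	struct virtio_net_hdr gso;	/* data_hard_start + sizeof(int) */
};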
2018 Mar 01
2
[PATCH net-next 0/2] virtio-net: re-enable XDP_REDIRECT for mergeable buffer
...gt;>>>
>>>> - not enough tailroom was reserved which breaks cpumap
>>> To address this at a more fundamental level, I would suggest that we/you
>>> instead extend XDP to know its buffer's "frame" size/end. (The
>>> assumption used to be xdp_buff->data_hard_start + PAGE_SIZE, but
>>> ixgbe+virtio_net broke that assumption).
>>>
>>> It should actually be fairly easy to implement:
>>> * Simply extend xdp_buff with a "data_hard_end" pointer.
>> Right, and then cpumap can warn and drop p...
2018 May 21
0
[RFC PATCH net-next 12/12] vhost_net: batch submitting XDP buffers to underlying sockets
...>type == TUN_MSG_PTR) {
- ret = tun_xdp_one(tun, tfile, ctl->ptr);
- if (!ret)
- ret = total_len;
+ if (ctl && ((ctl->type & 0xF) == TUN_MSG_PTR)) {
+ int n = ctl->type >> 16;
+
+ preempt_disable();
+ rcu_read_lock();
+
+ for (i = 0; i < n; i++) {
+ struct xdp_buff *x = (struct xdp_buff *)ctl->ptr;
+ struct xdp_buff *xdp = &x[i];
+
+ xdp_set_data_meta_invalid(xdp);
+ xdp->rxq = &tfile->xdp_rxq;
+ tun_xdp_one(tun, tfile, xdp);
+ }
+
+ xdp_do_flush_map();
+ tun_xdp_flush(tun->dev);
+
+ rcu_read_unlock();
+ preempt_enable();
+
+...
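The decoder above implies a matching producer-side convention: msg_control carries the array pointer, while ctl->type packs the TUN_MSG_PTR tag into the low nibble and the batch size into bits 16 and up. A hedged sketch of the packing side (the helper is hypothetical; mainline later moved the count into a dedicated tun_msg_ctl::num field instead):

/* Hypothetical producer-side packing matching the RFC's decoding:
 * low bits carry TUN_MSG_PTR, bits 16+ carry the batch size n. */
static void tun_msg_pack_batch(struct tun_msg_ctl *ctl,
			       struct xdp_buff *batch, int n)
{
	ctl->type = TUN_MSG_PTR | (n << 16);
	ctl->ptr  = batch;	/* array of n struct xdp_buff */
}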
2023 Mar 28
8
[PATCH net-next 0/8] virtio_net: refactor xdp code
For historical reasons, the implementation of XDP in virtio-net is relatively
chaotic. For example, the processing of XDP actions has two copies of similar
code, such as the page and xdp_page handling.
The purpose of this patch set is to refactor this code and reduce the difficulty
of subsequent maintenance, so that later developers will not introduce new
bugs because of some complex logic
2023 Mar 22
9
[PATCH net-next 0/8] virtio_net: refactor xdp code
For historical reasons, the implementation of XDP in virtio-net is relatively
chaotic. For example, the processing of XDP actions has two copies of similar
code, such as the page and xdp_page handling.
The purpose of this patch set is to refactor this code and reduce the difficulty
of subsequent maintenance, so that later developers will not introduce new
bugs because of some complex logic
2023 Apr 03
1
[PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
...t virtqueue *vq, void *buf);
> static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
>
> @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> return ret;
> }
>
> +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> + struct net_device *dev,
> + unsigned int *xdp_xmit,
> + struct virtnet_rq_stats *stats)
> +{
> + struct xdp_frame *xdpf;
> + int err;
> + u32 act;
> +
> +...
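The snippet stops before the dispatch itself; the point of the helper is to hold the single copy of the action switch. A condensed sketch of what such a switch conventionally contains (return-value conventions and error handling here are illustrative, not the series' exact ones):

static int xdp_handler_sketch(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
			      struct net_device *dev)
{
	struct xdp_frame *xdpf;
	u32 act = bpf_prog_run_xdp(xdp_prog, xdp);

	switch (act) {
	case XDP_PASS:
		return 0;			/* caller builds an skb */
	case XDP_TX:
		xdpf = xdp_convert_buff_to_frame(xdp);
		if (unlikely(!xdpf))
			break;
		if (virtnet_xdp_xmit(dev, 1, &xdpf, 0) < 0)
			break;
		return 1;			/* consumed */
	case XDP_REDIRECT:
		if (xdp_do_redirect(dev, xdp, xdp_prog))
			break;
		return 1;			/* consumed */
	default:
		bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(dev, xdp_prog, act);
		fallthrough;
	case XDP_DROP:
		break;
	}
	return -EINVAL;				/* caller drops the buffer */
}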
2018 Nov 15
0
[PATCH net-next 2/2] tuntap: free XDP dropped packets in a batch
...kill_fasync(&tfile->fasync, SIGIO, POLL_OUT);
}
+static void tun_put_page(struct tun_page *tpage)
+{
+ if (tpage->page)
+ __page_frag_cache_drain(tpage->page, tpage->count);
+}
+
static int tun_xdp_one(struct tun_struct *tun,
struct tun_file *tfile,
- struct xdp_buff *xdp, int *flush)
+ struct xdp_buff *xdp, int *flush,
+ struct tun_page *tpage)
{
struct tun_xdp_hdr *hdr = xdp->data_hard_start;
struct virtio_net_hdr *gso = &hdr->gso;
@@ -2390,6 +2402,7 @@ static int tun_xdp_one(struct tun_struct *tun,
int buflen = hdr->buflen...
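The struct behind tpage, and the accounting it enables, sketched: rather than one atomic put_page() per dropped packet, the loop counts references against the current frag page, and __page_frag_cache_drain() drops them all at once when the batch ends. (The helper below is illustrative; the patch open-codes equivalent accounting inside tun_xdp_one().)

struct tun_page {
	struct page *page;	/* frag page the batch is drawing from */
	int count;		/* references to drop at end of batch */
};

/* Illustrative: fold one more dropped buffer into the batch. */
static void tun_account_drop_sketch(struct tun_page *tpage, struct page *page)
{
	if (tpage->page == page) {
		tpage->count++;			/* same page: just count it */
	} else {
		tun_put_page(tpage);		/* drain refs on the old page */
		tpage->page  = page;
		tpage->count = 1;
	}
}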
2018 Sep 06
0
[PATCH net-next 10/11] tap: accept an array of XDP buffs through sendmsg()
...2 deletions(-)
diff --git a/drivers/net/tap.c b/drivers/net/tap.c
index 7996ed7cbf18..50eb7bf22225 100644
--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -1146,14 +1146,83 @@ static const struct file_operations tap_fops = {
#endif
};
+static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp)
+{
+ struct virtio_net_hdr *gso = xdp->data_hard_start + sizeof(int);
+ int buflen = *(int *)xdp->data_hard_start;
+ int vnet_hdr_len = 0;
+ struct tap_dev *tap;
+ struct sk_buff *skb;
+ int err, depth;
+
+ if (q->flags & IFF_VNET_HDR)
+ vnet_hdr_len = READ_ONCE(q->vnet_hdr_s...
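The snippet ends while the vnet header is still being parsed; the conventional continuation of this pattern is to wrap the raw buffer in an skb and mark off the payload the XDP program left behind (a sketch of the standard build_skb() idiom, not tap's exact tail):

/* Sketch: turn the XDP buffer into an skb without copying. */
skb = build_skb(xdp->data_hard_start, buflen);
if (!skb) {
	err = -ENOMEM;
	goto err;			/* label assumed */
}

/* Headroom up to xdp->data, then the payload itself. */
skb_reserve(skb, xdp->data - xdp->data_hard_start);
skb_put(skb, xdp->data_end - xdp->data);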
2023 Mar 15
10
[RFC net-next 0/8] virtio_net: refactor xdp code
For historical reasons, the implementation of XDP in virtio-net is relatively
chaotic. For example, the processing of XDP actions has two copies of similar
code, such as the page and xdp_page handling.
The purpose of this patch set is to refactor this code and reduce the difficulty
of subsequent maintenance, so that later developers will not introduce new
bugs because of some complex logic
2018 Sep 06
0
[PATCH net-next 09/11] tuntap: accept an array of XDP buffs through sendmsg()
...9db2e5dd08 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -2424,22 +2424,119 @@ static void tun_sock_write_space(struct sock *sk)
kill_fasync(&tfile->fasync, SIGIO, POLL_OUT);
}
+static int tun_xdp_one(struct tun_struct *tun,
+ struct tun_file *tfile,
+ struct xdp_buff *xdp, int *flush)
+{
+ struct virtio_net_hdr *gso = xdp->data_hard_start + sizeof(int);
+ struct tun_pcpu_stats *stats;
+ struct bpf_prog *xdp_prog;
+ struct sk_buff *skb = NULL;
+ u32 rxhash = 0, act;
+ int buflen = *(int *)xdp->data_hard_start;
+ int err = 0;
+ bool skb_xdp = false;
+
+ xdp...
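One early decision elided above is worth spelling out: if the guest asked for GSO, the packet cannot be processed as a raw XDP buffer, so the function falls back to building an skb first, which is what the skb_xdp flag tracks. A sketch of that gate (condensed; the label is assumed):

/* Sketch: XDP programs cannot handle GSO packets, so take the skb
 * path and let XDP see the packet via the generic hook instead. */
if (gso->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
	skb_xdp = true;
	goto build;		/* hypothetical label for the skb path */
}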
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
...nvq->vq;
+ struct vhost_net *net = container_of(vq->dev, struct vhost_net,
+ dev);
struct socket *sock = vq->private_data;
- struct page_frag *alloc_frag = ¤t->task_frag;
+ struct page_frag *alloc_frag = &net->page_frag;
struct virtio_net_hdr *gso;
struct xdp_buff *xdp = &nvq->xdp[nvq->batched_xdp];
struct tun_xdp_hdr *hdr;
@@ -665,7 +708,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
buflen += SKB_DATA_ALIGN(len + pad);
alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES);
- if (unlikely(!skb_pag...
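Per the subject, the point of moving from current->task_frag to a vhost-owned page_frag is to cut page refcount traffic during refill. A simplified sketch of such a refill helper, modeled on skb_page_frag_refill() (order and names are illustrative, not the patch's exact code):

#define FRAG_PAGE_ORDER 3	/* illustrative; higher order = fewer refills */

static bool vhost_frag_refill_sketch(struct page_frag *pfrag,
				     unsigned int sz, gfp_t gfp)
{
	if (pfrag->page) {
		if (page_ref_count(pfrag->page) == 1) {
			pfrag->offset = 0;	/* sole owner: recycle in place */
			return true;
		}
		if (pfrag->offset + sz <= pfrag->size)
			return true;		/* room left on current page */
		put_page(pfrag->page);		/* retire the exhausted page */
	}

	/* Prefer a compound page so many XDP buffers share one refcount. */
	pfrag->page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN,
				  FRAG_PAGE_ORDER);
	if (pfrag->page) {
		pfrag->size = PAGE_SIZE << FRAG_PAGE_ORDER;
		pfrag->offset = 0;
		return true;
	}
	pfrag->size = 0;
	return false;
}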
2018 May 21
20
[RFC PATCH net-next 00/12] XDP batching for TUN/vhost_net
Hi all:
We do not support XDP batching for TUN, since it can only receive one
packet at a time from vhost_net. This series tries to remove this
limitation by:
- introduce a TUN-specific msg_control that can hold a pointer to an
array of XDP buffs
- try to copy and build the XDP buff in vhost_net
- store XDP buffs in an array and submit them once for every N packets
from vhost_net
- since TUN can only
2018 Sep 06
2
[PATCH net-next 06/11] tuntap: split out XDP logic
.../tun.c
> @@ -1635,6 +1635,44 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
> return true;
> }
>
> +static u32 tun_do_xdp(struct tun_struct *tun,
> + struct tun_file *tfile,
> + struct bpf_prog *xdp_prog,
> + struct xdp_buff *xdp,
> + int *err)
> +{
> + u32 act = bpf_prog_run_xdp(xdp_prog, xdp);
> +
> + switch (act) {
> + case XDP_REDIRECT:
> + *err = xdp_do_redirect(tun->dev, xdp, xdp_prog);
> + xdp_do_flush_map();
> + if (*err)
> + break;
> + goto out;
> + case XDP_...
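The quote truncates inside the switch; the remaining arms of such a dispatcher conventionally read as below (a sketch of the standard pattern, not the patch's exact tail; tun_xdp_tx() does exist in tun.c of that era):

	case XDP_TX:
		*err = tun_xdp_tx(tun->dev, xdp);
		if (*err)
			break;
		goto out;
	case XDP_PASS:
		goto out;		/* caller builds an skb */
	default:
		bpf_warn_invalid_xdp_action(act);	/* one-arg 2018 API */
		/* fall through */
	case XDP_ABORTED:
		trace_xdp_exception(tun->dev, xdp_prog, act);
		/* fall through */
	case XDP_DROP:
		break;			/* caller frees the buffer */
	}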