search for: data_hard_start

Displaying 20 results from an estimated 57 matches for "data_hard_start".

2020 May 06
6
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
We tried to reserve space for vnet header before xdp.data_hard_start. But this is useless since the packet could be modified by XDP which may invalidate the information stored in the header and there's no way for XDP to know the existence of the vnet header currently. So let's just not reserve space for vnet header in this case. Cc: Jesper Dangaard Brouer...
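For context, a minimal user-space sketch (not the kernel code itself) of how the XDP buffer pointers relate once no extra room is set aside for a vnet header; struct and function names here are simplified stand-ins for the kernel's struct xdp_buff:

#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct xdp_buff. */
struct xdp_buff_model {
        void *data_hard_start;  /* first byte an XDP program may ever touch */
        void *data;             /* start of the packet payload */
        void *data_end;         /* one past the last payload byte */
};

/*
 * Sketch: set up the buffer so only the usual XDP headroom sits before
 * the payload, with no additional bytes reserved for a virtio-net header.
 */
static void xdp_init_no_vnet_hdr(struct xdp_buff_model *xdp,
                                 void *buf, size_t headroom, size_t len)
{
        xdp->data_hard_start = buf;
        xdp->data = (uint8_t *)buf + headroom;
        xdp->data_end = (uint8_t *)xdp->data + len;
}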
2020 May 06
2
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
On 2020/5/6 4:21 PM, Jesper Dangaard Brouer wrote: > On Wed, 6 May 2020 14:16:32 +0800 > Jason Wang <jasowang at redhat.com> wrote: > >> We tried to reserve space for vnet header before >> xdp.data_hard_start. But this is useless since the packet could be >> modified by XDP which may invalidate the information stored in the >> header and > IMHO above statements are wrong. XDP cannot access memory before > xdp.data_hard_start. Thus, it is safe to store a vnet header before > xdp.dat...
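A sketch of the alternative layout Jesper describes, where the vnet header lives below data_hard_start and is therefore out of reach of the XDP program; struct names and the headroom handling are illustrative, not the kernel's exact code:

#include <stdint.h>
#include <string.h>

struct vnet_hdr_model {         /* stand-in for struct virtio_net_hdr */
        uint8_t flags;
        uint8_t gso_type;
        uint16_t hdr_len;
};

struct xdp_buff_model {
        void *data_hard_start;
        void *data;
        void *data_end;
};

/*
 * Place the vnet header at the very start of the buffer and point
 * data_hard_start past it: however the XDP program rewrites the packet,
 * the header stays intact, because the program may not access memory
 * below data_hard_start.
 */
static void setup_with_hidden_vnet_hdr(struct xdp_buff_model *xdp, void *buf,
                                       const struct vnet_hdr_model *hdr,
                                       size_t headroom, size_t len)
{
        memcpy(buf, hdr, sizeof(*hdr));
        xdp->data_hard_start = (uint8_t *)buf + sizeof(*hdr);
        xdp->data = (uint8_t *)xdp->data_hard_start + headroom;
        xdp->data_end = (uint8_t *)xdp->data + len;
}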
2020 May 06
2
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
On 2020/5/6 3:53 PM, Michael S. Tsirkin wrote: > On Wed, May 06, 2020 at 02:16:32PM +0800, Jason Wang wrote: >> We tried to reserve space for vnet header before >> xdp.data_hard_start. But this is useless since the packet could be >> modified by XDP which may invalidate the information stored in the >> header and there's no way for XDP to know the existence of the vnet >> header currently. > What do you mean? Doesn't XDP_PASS use the header in the bu...
2020 May 06
0
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
...0 at 04:34:36PM +0800, Jason Wang wrote: > > On 2020/5/6 4:21 PM, Jesper Dangaard Brouer wrote: > > On Wed, 6 May 2020 14:16:32 +0800 > > Jason Wang <jasowang at redhat.com> wrote: > > > > > We tried to reserve space for vnet header before > > > xdp.data_hard_start. But this is useless since the packet could be > > > modified by XDP which may invalidate the information stored in the > > > header and > > IMHO above statements are wrong. XDP cannot access memory before > > xdp.data_hard_start. Thus, it is safe to store a vnet heade...
2020 May 06
0
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
On Wed, 6 May 2020 14:16:32 +0800 Jason Wang <jasowang at redhat.com> wrote: > We tried to reserve space for vnet header before > xdp.data_hard_start. But this is useless since the packet could be > modified by XDP which may invalidate the information stored in the > header and IMHO above statements are wrong. XDP cannot access memory before xdp.data_hard_start. Thus, it is safe to store a vnet header before xdp.data_hard_start. (The sfc...
2020 May 06
0
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
On Wed, May 06, 2020 at 02:16:32PM +0800, Jason Wang wrote: > We tried to reserve space for vnet header before > xdp.data_hard_start. But this is useless since the packet could be > modified by XDP which may invalidate the information stored in the > header and there's no way for XDP to know the existence of the vnet > header currently. What do you mean? Doesn't XDP_PASS use the header in the buffer? > So l...
2020 May 06
0
[PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
On Wed, May 06, 2020 at 04:19:40PM +0800, Jason Wang wrote: > > On 2020/5/6 3:53 PM, Michael S. Tsirkin wrote: > > On Wed, May 06, 2020 at 02:16:32PM +0800, Jason Wang wrote: > > > We tried to reserve space for vnet header before > > > xdp.data_hard_start. But this is useless since the packet could be > > > modified by XDP which may invalidate the information stored in the > > > header and there's no way for XDP to know the existence of the vnet > > > header currently. > > What do you mean? Doesn't XDP_PASS u...
2018 Sep 06
1
[PATCH net-next 10/11] tap: accept an array of XDP buffs through sendmsg()
...00644 > --- a/drivers/net/tap.c > +++ b/drivers/net/tap.c > @@ -1146,14 +1146,83 @@ static const struct file_operations tap_fops = { > #endif > }; > > +static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp) > +{ > + struct virtio_net_hdr *gso = xdp->data_hard_start + sizeof(int); > + int buflen = *(int *)xdp->data_hard_start; > + int vnet_hdr_len = 0; > + struct tap_dev *tap; > + struct sk_buff *skb; > + int err, depth; > + > + if (q->flags & IFF_VNET_HDR) > + vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz); > + > + skb =...
2018 Sep 06
0
[PATCH net-next 10/11] tap: accept an array of XDP buffs through sendmsg()
.../net/tap.c index 7996ed7cbf18..50eb7bf22225 100644 --- a/drivers/net/tap.c +++ b/drivers/net/tap.c @@ -1146,14 +1146,83 @@ static const struct file_operations tap_fops = { #endif }; +static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp) +{ + struct virtio_net_hdr *gso = xdp->data_hard_start + sizeof(int); + int buflen = *(int *)xdp->data_hard_start; + int vnet_hdr_len = 0; + struct tap_dev *tap; + struct sk_buff *skb; + int err, depth; + + if (q->flags & IFF_VNET_HDR) + vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz); + + skb = build_skb(xdp->data_hard_start, buflen); + if (...
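The patch relies on a layout convention (set up by vhost_net in the same series) where the buffer length and the virtio-net header sit at the very front of the buffer. A rough consumer-side sketch of reading them back, with simplified stand-in types:

#include <stdint.h>

struct xdp_buff_model {
        void *data_hard_start;
        void *data;
        void *data_end;
};

struct vnet_hdr_model { uint8_t flags, gso_type; uint16_t hdr_len; };

/*
 * Assumed convention: the producer stored
 *   [ int buflen ][ vnet header ][ headroom ][ packet ... ]
 * starting at data_hard_start, so the consumer can recover both.
 */
static void read_front_metadata(const struct xdp_buff_model *xdp,
                                int *buflen, struct vnet_hdr_model **gso)
{
        *buflen = *(const int *)xdp->data_hard_start;
        *gso = (struct vnet_hdr_model *)((uint8_t *)xdp->data_hard_start + sizeof(int));
}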
2018 Sep 06
2
[PATCH net-next 06/11] tuntap: split out XDP logic
...t return? > + default: > + bpf_warn_invalid_xdp_action(act); > + /* fall through */ > + case XDP_ABORTED: > + trace_xdp_exception(tun->dev, xdp_prog, act); > + /* fall through */ > + case XDP_DROP: > + break; > + } > + > + put_page(virt_to_head_page(xdp->data_hard_start)); put here because caller does get_page :( Not pretty. I'd move this out to the caller. > +out: > + return act; How about combining err and act? err is < 0 XDP_PASS is > 0. No need for pointers then. > +} > + > static struct sk_buff *tun_build_skb(struct tun_struct *t...
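A small illustration of the return-value convention Michael suggests: a single int carries either a negative errno or a non-negative XDP action, so no output pointer is needed. Names with a _M suffix are illustrative stand-ins, not the kernel's identifiers:

#include <errno.h>

/* Ordering mirrors the kernel's enum xdp_action. */
enum xdp_action_model { XDP_ABORTED_M, XDP_DROP_M, XDP_PASS_M, XDP_TX_M, XDP_REDIRECT_M };

/*
 * Return a negative errno on failure, or the (non-negative) XDP action
 * on success; the caller separates the two cases with a sign test.
 */
static int do_xdp_model(int simulate_failure)
{
        if (simulate_failure)
                return -EINVAL;
        return XDP_PASS_M;
}

static void caller_model(void)
{
        int ret = do_xdp_model(0);

        if (ret < 0)
                return;         /* error path */
        if (ret == XDP_PASS_M)
                return;         /* hand the packet to the stack */
        /* otherwise the action was already consumed (TX/REDIRECT/DROP) */
}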
2018 Sep 06
0
[PATCH net-next 09/11] tuntap: accept an array of XDP buffs through sendmsg()
...-2424,22 +2424,119 @@ static void tun_sock_write_space(struct sock *sk) kill_fasync(&tfile->fasync, SIGIO, POLL_OUT); } +static int tun_xdp_one(struct tun_struct *tun, + struct tun_file *tfile, + struct xdp_buff *xdp, int *flush) +{ + struct virtio_net_hdr *gso = xdp->data_hard_start + sizeof(int); + struct tun_pcpu_stats *stats; + struct bpf_prog *xdp_prog; + struct sk_buff *skb = NULL; + u32 rxhash = 0, act; + int buflen = *(int *)xdp->data_hard_start; + int err = 0; + bool skb_xdp = false; + + xdp_prog = rcu_dereference(tun->xdp_prog); + if (xdp_prog) { + if (gso->...
2018 May 21
2
[RFC PATCH net-next 10/12] vhost_net: build xdp buff
...pu(vq, gso->hdr_len) > len) > + return -EINVAL; > + } > + > + len -= sock_hlen; > + copied = copy_page_from_iter(alloc_frag->page, > + alloc_frag->offset + pad, > + len, from); > + if (copied != len) > + return -EFAULT; > + > + xdp->data_hard_start = buf; > + xdp->data = buf + pad; > + xdp->data_end = xdp->data + len; > + *(int *)(xdp->data_hard_start)= buflen; space before = > + > + get_page(alloc_frag->page); > + alloc_frag->offset += buflen; > + > + return 0; > +} > + > static void hand...
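Pulling the quoted fragment together, a compilable producer-side sketch of the same layout (a user-space model with simplified types; the real code uses the kernel's page_frag allocator and copy_page_from_iter):

#include <stdint.h>
#include <string.h>

struct xdp_buff_model {
        void *data_hard_start;
        void *data;
        void *data_end;
};

/*
 * Model of what vhost_net does per packet: copy the payload in at offset
 * `pad`, point the xdp_buff at it, and stash the total buffer length at
 * data_hard_start so the consumer (tun/tap) knows how large a frame it
 * may build an skb from.
 */
static void build_xdp_model(struct xdp_buff_model *xdp, void *buf, int buflen,
                            size_t pad, const void *payload, size_t len)
{
        memcpy((uint8_t *)buf + pad, payload, len);

        xdp->data_hard_start = buf;
        xdp->data = (uint8_t *)buf + pad;
        xdp->data_end = (uint8_t *)xdp->data + len;
        *(int *)xdp->data_hard_start = buflen;
}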
2018 Sep 06
1
[PATCH net-next 09/11] tuntap: accept an array of XDP buffs through sendmsg()
...ck_write_space(struct sock *sk) > kill_fasync(&tfile->fasync, SIGIO, POLL_OUT); > } > > +static int tun_xdp_one(struct tun_struct *tun, > + struct tun_file *tfile, > + struct xdp_buff *xdp, int *flush) > +{ > + struct virtio_net_hdr *gso = xdp->data_hard_start + sizeof(int); > + struct tun_pcpu_stats *stats; > + struct bpf_prog *xdp_prog; > + struct sk_buff *skb = NULL; > + u32 rxhash = 0, act; > + int buflen = *(int *)xdp->data_hard_start; > + int err = 0; > + bool skb_xdp = false; > + > + xdp_prog = rcu_dereference(tun->...
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by: 1) Doing userspace copy inside vhost_net 2) Build XDP buff 3) Batch at most 64 (VHOST_NET_BATCH) XDP buffs and submit them once through msg_control during sendmsg(). 4) Underlying sockets can use XDP buffs directly when XDP is enabled, or build skb based on XDP
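A rough sketch of the batching shape the cover letter describes, with illustrative constants and types; the real code hands the array to the socket's sendmsg() via msg_control rather than a helper like the stub below:

#include <stddef.h>

#define VHOST_NET_BATCH_MODEL 64        /* batch size named in the cover letter */

struct xdp_buff_model { void *data_hard_start, *data, *data_end; };

/* Stub standing in for "submit the whole batch in one sendmsg() call". */
static void submit_batch(struct xdp_buff_model **batch, size_t n)
{
        (void)batch; (void)n;
}

static void tx_loop_model(struct xdp_buff_model *(*next_packet)(void))
{
        struct xdp_buff_model *batch[VHOST_NET_BATCH_MODEL];
        size_t n = 0;

        for (;;) {
                struct xdp_buff_model *xdp = next_packet();

                if (!xdp)
                        break;
                batch[n++] = xdp;
                if (n == VHOST_NET_BATCH_MODEL) {       /* flush a full batch */
                        submit_batch(batch, n);
                        n = 0;
                }
        }
        if (n)                                          /* flush the tail */
                submit_batch(batch, n);
}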
2018 Sep 06
0
[PATCH net-next 06/11] tuntap: split out XDP logic
...break; + *err = 0; + goto out; + case XDP_PASS: + goto out; + default: + bpf_warn_invalid_xdp_action(act); + /* fall through */ + case XDP_ABORTED: + trace_xdp_exception(tun->dev, xdp_prog, act); + /* fall through */ + case XDP_DROP: + break; + } + + put_page(virt_to_head_page(xdp->data_hard_start)); +out: + return act; +} + static struct sk_buff *tun_build_skb(struct tun_struct *tun, struct tun_file *tfile, struct iov_iter *from, @@ -1645,10 +1683,10 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, struct sk_buff *skb = NULL; struct bpf_prog *xdp_prog...
2018 Mar 01
3
[PATCH net-next 0/2] virtio-net: re enable XDP_REDIRECT for mergeable buffer
..."). Main concerns are: >> >> - not enough tailroom was reserved which breaks cpumap > To address this at a more fundamental level, I would suggest that we/you > instead extend XDP to know its buffer's "frame" size/end. (The > assumption used to be xdp_buff->data_hard_start + PAGE_SIZE, but > ixgbe+virtio_net broke that assumption). > > It should actually be fairly easy to implement: > * Simply extend xdp_buff with a "data_hard_end" pointer. Right, and then cpumap can warn and drop packets with insufficient tailroom. But it should be a patch...
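A sketch of the extension Jesper suggests: carry the true end of the frame's memory in the xdp_buff so a consumer such as cpumap can verify tailroom before building an skb. The field name data_hard_end comes from the quoted proposal; the check itself and the model struct are illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct xdp_buff_model {
        void *data_hard_start;
        void *data;
        void *data_end;
        void *data_hard_end;    /* proposed: true end of the frame's memory */
};

/*
 * With data_hard_end known, a consumer can check that enough tailroom
 * remains (e.g. for skb_shared_info) instead of assuming
 * data_hard_start + PAGE_SIZE, and warn/drop otherwise.
 */
static bool xdp_has_tailroom(const struct xdp_buff_model *xdp, size_t needed)
{
        return (uint8_t *)xdp->data_hard_end - (uint8_t *)xdp->data_end >= (ptrdiff_t)needed;
}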