search for: tap_sendmsg

Displaying 13 results from an estimated 13 matches for "tap_sendmsg".

2018 Dec 20
1
4.20-rc6: WARNING: CPU: 30 PID: 197360 at net/core/flow_dissector.c:764 __skb_flow_dissect
...may have to revisit that assumption. But for now, let's see if we can
> > address these edge cases.
>
> Ack
>
> > > I'm not familiar with tap code, so someone else will need to patch this
> > > case, but it looks like:
> > >
> > > tap_sendmsg()
> > > tap_get_user()
> > > skb_probe_transport_header()
> > > skb_flow_dissect_flow_keys_basic()
> > > __skb_flow_dissect()
> > >
> > > skb->dev is only set later in the code.
> >
> > tap_...
2018 Sep 13
1
[PATCH net-next V2 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...(unlikely(err < 0)) {
> +		vq_err(&nvq->vq, "Fail to batch sending packets\n");
> +		return;
> +	}
> +
> +signal_used:
> +	vhost_net_signal_used(nvq);
> +	nvq->batched_xdp = 0;
> +}
> +

Given it's all tun-specific now, how about just exporting tap_sendmsg and
calling that? Will get rid of some indirection too.

-- MST
2018 Sep 06
1
[PATCH net-next 10/11] tap: accept an array of XDP buffs through sendmsg()
...turn 0;
> +
> +err_kfree:
> +	kfree_skb(skb);
> +err:
> +	rcu_read_lock();
> +	tap = rcu_dereference(q->tap);
> +	if (tap && tap->count_tx_dropped)
> +		tap->count_tx_dropped(tap);
> +	rcu_read_unlock();
> +	return err;
> +}
> +
>  static int tap_sendmsg(struct socket *sock, struct msghdr *m,
>  		       size_t total_len)
>  {
>  	struct tap_queue *q = container_of(sock, struct tap_queue, sock);
>  	struct tun_msg_ctl *ctl = m->msg_control;
> +	struct xdp_buff *xdp;
> +	int i;
>
> -	if (ctl && ctl->type != TU...
2018 Dec 20
0
4.20-rc6: WARNING: CPU: 30 PID: 197360 at net/core/flow_dissector.c:764 __skb_flow_dissect
...000000000092e32c: e54cf0c80000	mvhi	200(%r15),0
>> 000000000092e332: c01b00000008	nilf	%r1,8
>> [85109.572129] Call Trace:
>> [85109.572130] ([<0000000000000000>] (null))
>> [85109.572134] [<000003ff800c81e4>] tap_sendmsg+0x384/0x430 [tap]
>
> I'm not familiar with tap code, so someone else will need to patch this
> case, but it looks like:
>
> tap_sendmsg()
> tap_get_user()
> skb_probe_transport_header()
> skb_flow_dissect_flow_keys_basic()
> __skb_flow_di...
2018 Sep 06
1
[PATCH net-next 08/11] tun: switch to new type of msg_control
...|= SKBTX_SHARED_FRAG;
> -	} else if (m && m->msg_control) {
> -		struct ubuf_info *uarg = m->msg_control;
> +	} else if (msg_control) {
> +		struct ubuf_info *uarg = msg_control;
> 		uarg->callback(uarg, false);
> 	}
>
> @@ -1150,7 +1150,13 @@ static int tap_sendmsg(struct socket *sock, struct msghdr *m,
> 		       size_t total_len)
> {
> 	struct tap_queue *q = container_of(sock, struct tap_queue, sock);
> -	return tap_get_user(q, m, &m->msg_iter, m->msg_flags & MSG_DONTWAIT);
> +	struct tun_msg_ctl *ctl = m->msg_control;
> ...
2018 Dec 20
0
4.20-rc6: WARNING: CPU: 30 PID: 197360 at net/core/flow_dissector.c:764 __skb_flow_dissect
...5810f0b4	l	%r1,180(%r15)
000000000092e32c: e54cf0c80000	mvhi	200(%r15),0
000000000092e332: c01b00000008	nilf	%r1,8
[85109.572129] Call Trace:
[85109.572130] ([<0000000000000000>] (null))
[85109.572134] [<000003ff800c81e4>] tap_sendmsg+0x384/0x430 [tap]
[85109.572137] [<000003ff801acdee>] vhost_tx_batch.isra.10+0x66/0xe0 [vhost_net]
[85109.572138] [<000003ff801ad61c>] handle_tx_copy+0x18c/0x568 [vhost_net]
[85109.572140] [<000003ff801adab4>] handle_tx+0xbc/0x100 [vhost_net]
[85109.572145] [<000003ff80...
2018 Dec 20
0
4.20-rc6: WARNING: CPU: 30 PID: 197360 at net/core/flow_dissector.c:764 __skb_flow_dissect
...t follow what I thought was an invariant. If there are too many
exceptions, I may have to revisit that assumption. But for now, let's see
if we can address these edge cases.

> I'm not familiar with tap code, so someone else will need to patch this
> case, but it looks like:
>
> tap_sendmsg()
> tap_get_user()
> skb_probe_transport_header()
> skb_flow_dissect_flow_keys_basic()
> __skb_flow_dissect()
>
> skb->dev is only set later in the code.

tap_get_user uses sock_alloc_send_pskb (through tap_alloc_skb) to allocate the skb....
2018 Sep 06
0
[PATCH net-next 10/11] tap: accept an array of XDP buffs through sendmsg()
...b);
+	} else {
+		kfree_skb(skb);
+	}
+	rcu_read_unlock();
+
+	return 0;
+
+err_kfree:
+	kfree_skb(skb);
+err:
+	rcu_read_lock();
+	tap = rcu_dereference(q->tap);
+	if (tap && tap->count_tx_dropped)
+		tap->count_tx_dropped(tap);
+	rcu_read_unlock();
+	return err;
+}
+
 static int tap_sendmsg(struct socket *sock, struct msghdr *m,
 		       size_t total_len)
 {
 	struct tap_queue *q = container_of(sock, struct tap_queue, sock);
 	struct tun_msg_ctl *ctl = m->msg_control;
+	struct xdp_buff *xdp;
+	int i;

-	if (ctl && ctl->type != TUN_MSG_UBUF)
-		return -EINVAL;
+	if (ctl...
2018 Sep 06
0
[PATCH net-next 08/11] tun: switch to new type of msg_control
...ROCOPY;
 		skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
-	} else if (m && m->msg_control) {
-		struct ubuf_info *uarg = m->msg_control;
+	} else if (msg_control) {
+		struct ubuf_info *uarg = msg_control;
 		uarg->callback(uarg, false);
 	}

@@ -1150,7 +1150,13 @@ static int tap_sendmsg(struct socket *sock, struct msghdr *m,
 		       size_t total_len)
 {
 	struct tap_queue *q = container_of(sock, struct tap_queue, sock);
-	return tap_get_user(q, m, &m->msg_iter, m->msg_flags & MSG_DONTWAIT);
+	struct tun_msg_ctl *ctl = m->msg_control;
+
+	if (ctl && ctl->...
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all:

This series tries to batch submitting packets to underlayer socket
through msg_control during sendmsg(). This is done by:

1) Doing userspace copy inside vhost_net
2) Build XDP buff
3) Batch at most 64 (VHOST_NET_BATCH) XDP buffs and submit them once
   through msg_control during sendmsg().
4) Underlayer sockets can use XDP buffs directly when XDP is enabled, or
   build skb based on XDP
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all:

This series tries to batch submitting packets to underlayer socket
through msg_control during sendmsg(). This is done by:

1) Doing userspace copy inside vhost_net
2) Build XDP buff
3) Batch at most 64 (VHOST_NET_BATCH) XDP buffs and submit them once
   through msg_control during sendmsg().
4) Underlayer sockets can use XDP buffs directly when XDP is enabled, or
   build skb based on XDP
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all:

This series tries to batch submitting packets to underlayer socket
through msg_control during sendmsg(). This is done by:

1) Doing userspace copy inside vhost_net
2) Build XDP buff
3) Batch at most 64 (VHOST_NET_BATCH) XDP buffs and submit them once
   through msg_control during sendmsg().
4) Underlayer sockets can use XDP buffs directly when XDP is enabled, or
   build skb based on XDP
2018 Sep 12
1
[PATCH v2 02/17] compat_ioctl: move drivers to generic_compat_ioctl_ptrarg
....open		= tap_open,
@@ -1141,9 +1133,7 @@ static const struct file_operations tap_fops = {
 	.poll		= tap_poll,
 	.llseek		= no_llseek,
 	.unlocked_ioctl	= tap_ioctl,
-#ifdef CONFIG_COMPAT
-	.compat_ioctl	= tap_compat_ioctl,
-#endif
+	.compat_ioctl	= generic_compat_ioctl_ptrarg,
 };

 static int tap_sendmsg(struct socket *sock, struct msghdr *m,
diff --git a/drivers/staging/pi433/pi433_if.c b/drivers/staging/pi433/pi433_if.c
index c85a805a1243..9e4caf7ad384 100644
--- a/drivers/staging/pi433/pi433_if.c
+++ b/drivers/staging/pi433/pi433_if.c
@@ -945,16 +945,6 @@ pi433_ioctl(struct file *filp, unsigned...