search for: sock_flags

Displaying 17 results from an estimated 38 matches for "sock_flags".

2023 Aug 22
0
[RFC PATCH v1 1/2] vsock: send SIGPIPE on write to shutdowned socket
On Mon, Aug 14, 2023 at 10:46:05PM +0300, Arseniy Krasnov wrote: > > >On 04.08.2023 17:28, Stefano Garzarella wrote: >> On Fri, Aug 04, 2023 at 03:46:47PM +0300, Arseniy Krasnov wrote: >>> Hi Stefano, >>> >>> On 02.08.2023 10:46, Stefano Garzarella wrote: >>>> On Tue, Aug 01, 2023 at 05:17:26PM +0300, Arseniy Krasnov wrote: >>>>>
2018 Sep 06
1
[PATCH net-next 08/11] tun: switch to new type of msg_control
On Thu, Sep 06, 2018 at 12:05:23PM +0800, Jason Wang wrote: > This patch introduces a new tun/tap-specific msg_control: > > #define TUN_MSG_UBUF 1 > #define TUN_MSG_PTR 2 > struct tun_msg_ctl { > int type; > void *ptr; > }; > > This allows us to pass different kinds of msg_control through > sendmsg(). The first supported type is ubuf
2018 Sep 06
1
[PATCH net-next 01/11] net: sock: introduce SOCK_XDP
...lse
> +		sock_reset_flag(&tfile->sk, SOCK_XDP);
> +	}
> +
>  	return 0;
>  }
>
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 433f45fc2d68..38cae35f6e16 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -800,6 +800,7 @@ enum sock_flags {
>  	SOCK_SELECT_ERR_QUEUE,	/* Wake select on error queue */
>  	SOCK_RCU_FREE,	/* wait rcu grace period in sk_destruct() */
>  	SOCK_TXTIME,
> +	SOCK_XDP,	/* XDP is attached */
>  };
>
>  #define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTA...
2018 Sep 06
0
[PATCH net-next 08/11] tun: switch to new type of msg_control
This patch introduces a new tun/tap-specific msg_control: #define TUN_MSG_UBUF 1 #define TUN_MSG_PTR 2 struct tun_msg_ctl { int type; void *ptr; }; This allows us to pass different kinds of msg_control through sendmsg(). The first supported type is ubuf (TUN_MSG_UBUF), which will be used by the existing vhost_net zerocopy code. The second is the XDP buff, which allows vhost_net to
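
For illustration, a minimal sketch of how a sender could use this control structure. struct tun_msg_ctl and the TUN_MSG_* constants are quoted from the patch above; the helper function, its name, and the ubuf argument handling are hypothetical.

#include <linux/net.h>
#include <linux/socket.h>

#define TUN_MSG_UBUF 1
#define TUN_MSG_PTR  2

struct tun_msg_ctl {
	int type;	/* TUN_MSG_UBUF or TUN_MSG_PTR */
	void *ptr;	/* payload, interpreted according to type */
};

/* Hypothetical caller: pass a ubuf_info to the tun socket through
 * sendmsg() rather than via an ad-hoc side channel. */
static int tun_send_ubuf(struct socket *sock, struct msghdr *msg, void *ubuf)
{
	struct tun_msg_ctl ctl = {
		.type = TUN_MSG_UBUF,
		.ptr  = ubuf,
	};

	msg->msg_control = &ctl;
	msg->msg_controllen = sizeof(ctl);
	return sock->ops->sendmsg(sock, msg, msg_data_left(msg));
}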
2018 Sep 06
2
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
On Thu, Sep 06, 2018 at 12:05:26PM +0800, Jason Wang wrote: > This patch implements XDP batching for vhost_net. The idea is first to > try the userspace copy and build the XDP buff directly in vhost. Instead > of submitting packets immediately, vhost_net will batch them in an > array and submit every 64 (VHOST_NET_BATCH) packets to the underlayer > sockets through msg_control of
2023 Jul 21
2
[Bridge] [PATCH] can: j1939: prevent deadlock by changing j1939_socks_lock to rwlock
The following 3 locks would race against each other, causing the deadlock situation in the Syzbot bug report: - j1939_socks_lock - active_session_list_lock - sk_session_queue_lock A reasonable fix is to change j1939_socks_lock to an rwlock, since in the rare situations where a write lock is required for the linked list that j1939_socks_lock is protecting, the code does not attempt to acquire any
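
As a generic sketch of that pattern (not the actual j1939 code, whose socket list lives inside a per-device priv structure): readers walking the list take the lock shared, and only the rare list updates take it exclusively.

#include <linux/list.h>
#include <linux/spinlock.h>

static DEFINE_RWLOCK(j1939_socks_lock);	/* was a spinlock */
static LIST_HEAD(j1939_socks);

struct j1939_sock {
	struct list_head list;
	/* ... */
};

/* Common path: traversal takes the lock shared, so it can nest with
 * the session/queue locks without the exclusive-waiter ordering that
 * produced the reported deadlock. */
static void j1939_socks_walk(void (*fn)(struct j1939_sock *))
{
	struct j1939_sock *jsk;

	read_lock(&j1939_socks_lock);
	list_for_each_entry(jsk, &j1939_socks, list)
		fn(jsk);
	read_unlock(&j1939_socks_lock);
}

/* Rare path: list modification still needs exclusive access. */
static void j1939_socks_add(struct j1939_sock *jsk)
{
	write_lock_bh(&j1939_socks_lock);
	list_add_tail(&jsk->list, &j1939_socks);
	write_unlock_bh(&j1939_socks_lock);
}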
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlayer socket through msg_control during sendmsg(). This is done by: 1) Doing the userspace copy inside vhost_net 2) Building the XDP buff 3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg(). 4) Underlayer sockets can use XDP buffs directly when XDP is enabled, or build skbs based on XDP
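
A condensed sketch of steps 1)-4). VHOST_NET_BATCH, TUN_MSG_PTR and struct tun_msg_ctl are named in the series itself; the flush helper below and the way the buff count travels alongside the array are simplified assumptions.

#include <linux/net.h>
#include <linux/socket.h>
#include <net/xdp.h>

#define VHOST_NET_BATCH 64
#define TUN_MSG_PTR 2

struct tun_msg_ctl {
	int type;
	void *ptr;
};

/* Hypothetical flush: hand all batched XDP buffs to the tun/tap
 * socket in a single sendmsg(), amortizing per-packet overhead
 * (the series also conveys how many buffs are in the array; that
 * detail is elided here). */
static int vhost_net_flush_batch(struct socket *sock, struct xdp_buff **batch)
{
	struct tun_msg_ctl ctl = {
		.type = TUN_MSG_PTR,
		.ptr  = batch,
	};
	struct msghdr msg = {
		.msg_control	= &ctl,
		.msg_controllen	= sizeof(ctl),
	};

	return sock->ops->sendmsg(sock, &msg, 0);
}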
2018 Sep 06
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
This patch implements XDP batching for vhost_net. The idea is first to try the userspace copy and build the XDP buff directly in vhost. Instead of submitting packets immediately, vhost_net will batch them in an array and submit every 64 (VHOST_NET_BATCH) packets to the underlayer sockets through msg_control of sendmsg(). When XDP is enabled on the TUN/TAP, TUN/TAP can process XDP inside a loop
2018 Sep 12
0
[PATCH net-next V2 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
This patch implements XDP batching for vhost_net. The idea is first to try the userspace copy and build the XDP buff directly in vhost. Instead of submitting packets immediately, vhost_net will batch them in an array and submit every 64 (VHOST_NET_BATCH) packets to the underlayer sockets through msg_control of sendmsg(). When XDP is enabled on the TUN/TAP, TUN/TAP can process XDP inside a loop
2018 Sep 07
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
On 2018/09/07 00:46, Michael S. Tsirkin wrote: > On Thu, Sep 06, 2018 at 12:05:26PM +0800, Jason Wang wrote: >> This patch implements XDP batching for vhost_net. The idea is first to >> try to do userspace copy and build XDP buff directly in vhost. Instead >> of submitting the packet immediately, vhost_net will batch them in an >> array and submit every 64
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlayer socket through msg_control during sendmsg(). This is done by: 1) Doing the userspace copy inside vhost_net 2) Building the XDP buff 3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg(). 4) Underlayer sockets can use XDP buffs directly when XDP is enabled, or build skbs based on XDP
2018 Sep 06
0
[PATCH net-next 01/11] net: sock: introduce SOCK_XDP
...g)
+		sock_set_flag(&tfile->sk, SOCK_XDP);
+	else
+		sock_reset_flag(&tfile->sk, SOCK_XDP);
+	}
+
 	return 0;
 }

diff --git a/include/net/sock.h b/include/net/sock.h
index 433f45fc2d68..38cae35f6e16 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -800,6 +800,7 @@ enum sock_flags {
 	SOCK_SELECT_ERR_QUEUE,	/* Wake select on error queue */
 	SOCK_RCU_FREE,	/* wait rcu grace period in sk_destruct() */
 	SOCK_TXTIME,
+	SOCK_XDP,	/* XDP is attached */
 };

 #define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))
--
2.17.1
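
For context, a minimal sketch of how the new flag is consumed: sock_flag() is the stock test helper from include/net/sock.h, alongside the sock_set_flag()/sock_reset_flag() calls in the diff, while the wrapper below is purely illustrative.

#include <net/sock.h>

/* Illustrative: lets a producer such as vhost_net decide whether to
 * hand this socket XDP buffs or fall back to building skbs. */
static bool peer_has_xdp(const struct sock *sk)
{
	return sock_flag(sk, SOCK_XDP);
}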
2019 Nov 08
1
[PATCH] vsock/virtio: fix sock refcnt holding during the shutdown
The "42f5cda5eaf4" commit rightly set SOCK_DONE on peer shutdown, but there is an issue if we receive the SHUTDOWN(RDWR) while the virtio_transport_close_timeout() is scheduled. In this case, when the timeout fires, the SOCK_DONE is already set and the virtio_transport_close_timeout() will not call virtio_transport_reset() and virtio_transport_do_close(). This causes that both sockets
2009 Apr 22
0
networking problems in kinit
Hi all, I need UDP support in kinit, but while implementing it I ran into some fundamental problems; maybe someone can help me. First, a description of what I have done so far: the network interface itself has been configured by 'do_ipconfig' with a static configuration from the kernel cmdline: ip=192.168.1.100:192.168.1.200:192.168.1.254:255.255.255.0::eth0: I added the following
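
For reference, the positional fields of that ip= parameter (per the kernel's nfsroot documentation) map onto the example as follows:

ip=192.168.1.100	client (local) IP address
  :192.168.1.200	server IP address
  :192.168.1.254	gateway IP address
  :255.255.255.0	netmask
  :			hostname (left empty)
  :eth0			device to configure
  :			autoconf method (empty here; with a client IP
			given, no DHCP/BOOTP/RARP autoconfiguration runs)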
2015 Feb 04
2
[PATCH v3 17/18] vhost: don't bother copying iovecs in handle_rx(), kill memcpy_toiovecend()
From: Al Viro <viro at zeniv.linux.org.uk> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: kvm at vger.kernel.org Cc: virtualization at lists.linux-foundation.org Signed-off-by: Al Viro <viro at zeniv.linux.org.uk> --- drivers/vhost/net.c | 82 +++++++++++++++-------------------------------------- include/linux/uio.h | 3 -- lib/iovec.c | 26 ----------------- 3 files
2020 May 29
0
[PATCH] virtio_vsock: Fix race condition in virtio_transport_recv_pkt
Hi Jia, thanks for the patch! I have some comments. On Fri, May 29, 2020 at 09:31:23PM +0800, Jia He wrote: > When a client tries to connect(SOCK_STREAM) to the server in the guest in NONBLOCK > mode, there will be a panic on a ThunderX2 (armv8a server): > [ 463.718844][ T5040] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 > [ 463.718848][ T5040]
2023 Feb 16
0
[RFC PATCH v1 07/12] vsock/virtio: MGS_ZEROCOPY flag support
On Mon, Feb 06, 2023 at 07:00:35AM +0000, Arseniy Krasnov wrote: >This adds the main logic of MSG_ZEROCOPY flag processing for packet >creation. When this flag is set and the user's iov iterator is suitable for >zerocopy transmission, call 'get_user_pages()' and add the returned >pages to the newly created skb. > >Signed-off-by: Arseniy Krasnov <AVKrasnov at sberdevices.ru>
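
For context, a minimal userspace sketch of the MSG_ZEROCOPY contract this series extends to vsock, following the existing AF_INET semantics in Documentation/networking/msg_zerocopy.rst; whether the vsock path keeps exactly this opt-in is an assumption here.

#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

/* Opt in once per socket, then request zerocopy per send. Completions
 * arrive on the socket's error queue (not shown); they report when the
 * pinned user pages may be reused, and whether the kernel silently
 * fell back to copying. */
static int send_zerocopy(int fd, const void *buf, size_t len)
{
	int one = 1;

	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
		return -errno;
	if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
		return -errno;
	return 0;
}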