Displaying 20 results from an estimated 3000 matches similar to: "[PATCHv6 1/3] tun: export underlying socket"
2009 Nov 04
0
[PATCHv8 1/3] tun: export underlying socket
A tun device looks similar to a packet socket
in that both pass complete frames from/to userspace.
This patch fills in enough fields in the socket underlying the tun driver
to support sendmsg/recvmsg operations and the message flags
MSG_TRUNC and MSG_DONTWAIT, and it exports access to this socket
to modules. Regular read/write behaviour is unchanged.
This way, code using raw sockets to inject packets
into
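For context, a minimal sketch of how a kernel module might drive the exported socket; it assumes the accessor is named tun_get_socket(), as in the driver that was eventually merged, and trims all error handling:

#include <linux/err.h>
#include <linux/file.h>
#include <linux/if_tun.h>
#include <linux/net.h>
#include <linux/uio.h>

/* Sketch only: receive one frame from the socket underlying a tun fd.
 * MSG_DONTWAIT makes the call non-blocking; MSG_TRUNC reports the full
 * frame length even when the buffer was too small.
 */
static int peek_one_frame(struct file *tun_file, void *buf, size_t len)
{
	struct socket *sock = tun_get_socket(tun_file);
	struct kvec iov = { .iov_base = buf, .iov_len = len };
	struct msghdr msg = {};

	if (IS_ERR(sock))
		return PTR_ERR(sock);

	return kernel_recvmsg(sock, &msg, &iov, 1, len,
			      MSG_DONTWAIT | MSG_TRUNC);
}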
2009 Nov 03
1
[PATCHv7 1/3] tun: export underlying socket
A tun device looks similar to a packet socket
in that both pass complete frames from/to userspace.
This patch fills in enough fields in the socket underlying the tun driver
to support sendmsg/recvmsg operations and the message flags
MSG_TRUNC and MSG_DONTWAIT, and it exports access to this socket
to modules. Regular read/write behaviour is unchanged.
This way, code using raw sockets to inject packets
into
2008 Jul 12
4
[PATCH] tun: Fix/rewrite packet filtering logic
Please see the following thread to get some context on this
http://marc.info/?l=linux-netdev&m=121564433018903&w=2
Basically, the issue is that the current multicast filtering code in
the TUN/TAP driver is seriously broken.
The original patch went in without proper review or an ACK. It was broken and
confusing to start with, and subsequent patches broke it completely.
To give you an idea of
2016 Jun 17
0
[PATCH net-next V2] tun: introduce tx skb ring
On Wed, Jun 15, 2016 at 04:38:17PM +0800, Jason Wang wrote:
> We used to queue tx packets in sk_receive_queue, this is less
> efficient since it requires spinlocks to synchronize between producer
> and consumer.
>
> This patch tries to address this by:
>
> - introduce a new mode which will be only enabled with IFF_TX_ARRAY
> set and switch from sk_receive_queue to a
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often
leads to bad cache utilization under heavy load, so this patch does
some batching during rx before submitting packets to the host network
stack. This is done by accepting MSG_MORE as a hint from the
sendmsg() caller: if it is set, the packet is batched temporarily on a
linked list, and the whole batch is submitted once MSG_MORE is cleared.
Tests
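To make the flow concrete, here is a minimal sketch of that MSG_MORE batching idea (not the actual patch); netif_receive_skb() stands in for whatever receive path the driver really uses:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/socket.h>

/* Sketch: park frames on a pending list while the sendmsg() caller
 * keeps setting MSG_MORE, then flush the whole batch to the stack
 * once the hint is cleared.
 */
static void rx_batch_one(struct sk_buff_head *pending,
			 struct sk_buff *skb, int msg_flags)
{
	__skb_queue_tail(pending, skb);

	if (msg_flags & MSG_MORE)
		return;			/* more frames coming: keep batching */

	while ((skb = __skb_dequeue(pending)) != NULL)
		netif_receive_skb(skb);	/* stand-in for the real rx path */
}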
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often
leads to bad cache utilization under heavy load, so this patch does
some batching during rx before submitting packets to the host network
stack. This is done by accepting MSG_MORE as a hint from the
sendmsg() caller: if it is set, the packet is batched temporarily on a
linked list, and the whole batch is submitted once MSG_MORE is cleared.
Tests
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
We used to queue tx packets in sk_receive_queue, which is less
efficient since it requires spinlocks to synchronize between producer
and consumer.
This patch tries to address this by:
- introduce a new mode, enabled only when IFF_TX_ARRAY is set, and in
this mode switch from sk_receive_queue to a fixed-size skb array with
256 entries.
- introduce a new proto_ops peek_len which was
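Roughly, the producer/consumer pattern it switches to looks like the sketch below, using the skb_array helpers from linux/skb_array.h; this is illustrative, not the tun code itself:

#include <linux/skb_array.h>

#define TX_RING_SIZE 256		/* fixed ring size from the patch */

static struct skb_array tx_ring;

static int ring_setup(void)
{
	return skb_array_init(&tx_ring, TX_RING_SIZE, GFP_KERNEL);
}

/* producer side: the transmit path pushes into the ring */
static int ring_xmit(struct sk_buff *skb)
{
	if (skb_array_produce(&tx_ring, skb))
		return -ENOBUFS;	/* ring full: caller drops or requeues */
	return 0;
}

/* consumer side: the read/recvmsg path pulls from the ring */
static struct sk_buff *ring_fetch(void)
{
	return skb_array_consume(&tx_ring);
}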
2018 Sep 06
0
[PATCH net-next 08/11] tun: switch to new type of msg_control
This patch introduces a new tun/tap-specific msg_control:
#define TUN_MSG_UBUF 1
#define TUN_MSG_PTR 2
struct tun_msg_ctl {
int type;
void *ptr;
};
This allows us to pass different kinds of msg_control through
sendmsg(). The first supported type is ubuf (TUN_MSG_UBUF), which will
be used by the existing vhost_net zerocopy code. The second is an XDP
buff, which allows vhost_net to
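For illustration, a sender such as vhost_net could wrap its pointer as in the sketch below; the struct and type constants are quoted from the patch, while the helper around them is made up:

#include <linux/socket.h>

#define TUN_MSG_UBUF 1
#define TUN_MSG_PTR  2

struct tun_msg_ctl {
	int type;
	void *ptr;
};

/* Sketch: hand a ubuf_info pointer to tun/tap through msg_control */
static void attach_ubuf(struct msghdr *msg, struct tun_msg_ctl *ctl,
			void *ubuf)
{
	ctl->type = TUN_MSG_UBUF;
	ctl->ptr = ubuf;

	msg->msg_control = ctl;		/* tun/tap branch on ctl->type */
	msg->msg_controllen = sizeof(*ctl);
}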
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often
leads to bad cache utilization under heavy load, so this patch does
some batching during rx before submitting packets to the host network
stack. This is done by accepting MSG_MORE as a hint from the
sendmsg() caller: if it is set, the packet is batched temporarily on a
linked list, and the whole batch is submitted once MSG_MORE is cleared.
Tests
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
On Fri, Jan 06, 2017 at 10:13:17AM +0800, Jason Wang wrote:
> We can only process 1 packet at one time during sendmsg(). This often
> lead bad cache utilization under heavy load. So this patch tries to do
> some batching during rx before submitting them to host network
> stack. This is done through accepting MSG_MORE as a hint from
> sendmsg() caller, if it was set, batch the packet
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often
leads to bad cache utilization under heavy load, so this patch does
some batching during rx before submitting packets to the host network
stack. This is done by accepting MSG_MORE as a hint from the
sendmsg() caller: if it is set, the packet is batched temporarily on a
linked list, and the whole batch is submitted once MSG_MORE is cleared.
Tests
2018 Sep 06
1
[PATCH net-next 08/11] tun: switch to new type of msg_control
On Thu, Sep 06, 2018 at 12:05:23PM +0800, Jason Wang wrote:
> This patch introduces to a new tun/tap specific msg_control:
>
> #define TUN_MSG_UBUF 1
> #define TUN_MSG_PTR 2
> struct tun_msg_ctl {
> int type;
> void *ptr;
> };
>
> This allows us to pass different kinds of msg_control through
> sendmsg(). The first supported type is ubuf
2017 Mar 21
12
[PATCH net-next 0/8] vhost-net rx batching
Hi all:
This series tries to implement rx batching for vhost-net. This is done
by batching the dequeuing from the skb_array exported by the
underlying socket and passing the skb back through msg_control to
finish the userspace copy.
Tests show at most a 19% improvement in rx pps.
Please review.
Thanks
Jason Wang (8):
ptr_ring: introduce batch dequeuing
skb_array: introduce batch dequeuing
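A rough sketch of the batched dequeue it describes is below; it assumes the helper added by the series is named skb_array_consume_batched(), and handle_rx_skb() is a made-up placeholder for the userspace-copy path:

#include <linux/skb_array.h>

#define RX_BATCH 64			/* illustrative burst size */

static void handle_rx_skb(struct sk_buff *skb);	/* placeholder */

static void rx_batch_pass(struct skb_array *ring)
{
	struct sk_buff *batch[RX_BATCH];
	int n, i;

	/* pull up to RX_BATCH skbs out of the ring in one call */
	n = skb_array_consume_batched(ring, batch, RX_BATCH);
	for (i = 0; i < n; i++)
		handle_rx_skb(batch[i]);
}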
2016 Jun 30
0
[PATCH net-next V3 6/6] tun: switch to use skb array for tx
We used to queue tx packets in sk_receive_queue, which is less
efficient since it requires spinlocks to synchronize between producer
and consumer.
This patch tries to address this by:
- switch from sk_receive_queue to an skb_array, and resize it when
tx_queue_len is changed.
- introduce a new proto_ops peek_len which is used for peeking the
skb length.
- implement a tun version of peek_len
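Conceptually, the tun peek_len boils down to reporting the length of the frame at the head of the ring without dequeuing it; a minimal sketch, assuming the skb_array_peek_len() helper:

#include <linux/skb_array.h>

/* Sketch: report the length of the next queued frame, or 0 if the ring
 * is empty, so the caller can size its receive buffer before dequeuing.
 */
static int example_peek_len(struct skb_array *ring)
{
	if (skb_array_empty(ring))
		return 0;
	return skb_array_peek_len(ring);
}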