search for: tunning

Displaying 20 results from an estimated 1763 matches for "tunning".

2009 Apr 16
1
[1/2] tun: Only free a netdev when all tun descriptors are closed
On Thu, Apr 16, 2009 at 01:08:18AM -0000, Herbert Xu wrote: > On Wed, Apr 15, 2009 at 10:38:34PM +0800, Herbert Xu wrote: > > > > So how about this? We replace the dev destructor with our own that > > doesn't immediately call free_netdev. We only call free_netdev once > > all tun fd's attached to the device have been closed. > > Here's the patch.
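The mechanism described above is essentially deferred destruction by reference counting: each attached descriptor holds a reference, and the device is only freed once the last one goes away. As a rough userspace illustration of that pattern (not the kernel patch itself; fake_netdev, tun_attach and tun_close are made-up names):

/* Userspace sketch of "free only when the last attached descriptor
 * closes"; this is not the kernel code, just the ownership pattern. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_netdev {
    atomic_int refcnt;          /* one reference per attached descriptor */
    char name[16];
};

struct tun_fd {
    struct fake_netdev *dev;
};

static void netdev_put(struct fake_netdev *dev)
{
    /* Free only when the last holder drops its reference. */
    if (atomic_fetch_sub(&dev->refcnt, 1) == 1) {
        printf("freeing %s\n", dev->name);
        free(dev);
    }
}

static struct tun_fd *tun_attach(struct fake_netdev *dev)
{
    struct tun_fd *tfd = malloc(sizeof(*tfd));
    atomic_fetch_add(&dev->refcnt, 1);
    tfd->dev = dev;
    return tfd;
}

static void tun_close(struct tun_fd *tfd)
{
    netdev_put(tfd->dev);
    free(tfd);
}

int main(void)
{
    struct fake_netdev *dev = calloc(1, sizeof(*dev));
    snprintf(dev->name, sizeof(dev->name), "tun0");
    atomic_init(&dev->refcnt, 1);   /* the device's own reference */

    struct tun_fd *a = tun_attach(dev);
    struct tun_fd *b = tun_attach(dev);

    tun_close(a);                   /* not freed yet */
    tun_close(b);                   /* not freed yet */
    netdev_put(dev);                /* last reference gone: freed here */
    return 0;
}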
2019 Apr 11
1
Tinc sudden spike in traffic usage
I just encountered a weird issue on my servers - Tinc was using a constant 10-50% CPU on several servers, and these servers were also receiving a constant ~3 Mb/s of data over the Tinc interface, which is usually otherwise pretty quiet. Example: https://d.sb/2019/04/firefox_11-15.54.22.png Grafana dashboard: https://dash.d.sb/dashboard/snapshot/6nWZqagpgxzxUrybDZkNbF6JSflLlKmO?orgId=1 This seems
2009 Nov 04
0
[PATCHv8 1/3] tun: export underlying socket
Tun device looks similar to a packet socket in that both pass complete frames from/to userspace. This patch fills in enough fields in the socket underlying tun driver to support sendmsg/recvmsg operations, and message flags MSG_TRUNC and MSG_DONTWAIT, and exports access to this socket to modules. Regular read/write behaviour is unchanged. This way, code using raw sockets to inject packets into
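Since the description compares tun to a packet socket, here is a hedged userspace sketch of the MSG_TRUNC / MSG_DONTWAIT semantics the patch wires into tun's underlying socket, shown on an AF_PACKET socket (requires CAP_NET_RAW); in-kernel users of the exported socket pass the same flags through sendmsg()/recvmsg():

/* Illustration of MSG_TRUNC / MSG_DONTWAIT on an AF_PACKET socket,
 * which the patch description compares tun to.  Requires CAP_NET_RAW. */
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    char buf[64];                          /* deliberately small buffer */
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

    /* MSG_TRUNC: return the frame's real length even if it was cut to
     * fit the buffer.  MSG_DONTWAIT: fail with EAGAIN instead of blocking. */
    ssize_t n = recvmsg(fd, &msg, MSG_TRUNC | MSG_DONTWAIT);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no frame queued right now\n");
    else if (n >= 0)
        printf("frame was %zd bytes, %s\n", n,
               (size_t)n > sizeof(buf) ? "truncated" : "complete");
    else
        perror("recvmsg");
    return 0;
}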
2009 Nov 03
1
[PATCHv7 1/3] tun: export underlying socket
Tun device looks similar to a packet socket in that both pass complete frames from/to userspace. This patch fills in enough fields in the socket underlying tun driver to support sendmsg/recvmsg operations, and message flags MSG_TRUNC and MSG_DONTWAIT, and exports access to this socket to modules. Regular read/write behaviour is unchanged. This way, code using raw sockets to inject packets into
2009 Nov 02
1
[PATCHv6 1/3] tun: export underlying socket
Tun device looks similar to a packet socket in that both pass complete frames from/to userspace. This patch fills in enough fields in the socket underlying tun driver to support sendmsg/recvmsg operations, and message flags MSG_TRUNC and MSG_DONTWAIT, and exports access to this socket to modules. Regular read/write behaviour is unchanged. This way, code using raw sockets to inject packets into
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
...I suspect it's the issue of queue selection in both guest driver and tap. Would continue to investigate. - I would post the performance numbers as a reply to this mail. TODO: - solve the issue of packet transmission of small packets. - addressing the comments of virtio-net driver - performance tunning Please review and comment it, Thanks. --- Jason Wang (5): tuntap: move socket/sock related structures to tun_file tuntap: categorize ioctl tuntap: introduce multiqueue related flags tuntap: multiqueue support tuntap: add ioctls to attach or detach a file from tap de...
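For context, the multiqueue interface that eventually landed upstream lets userspace open /dev/net/tun several times with IFF_MULTI_QUEUE and the same interface name, one queue per descriptor. A minimal sketch, assuming the merged API rather than this RFC's exact attach/detach ioctls (requires CAP_NET_ADMIN):

/* Multiqueue tap via the interface that was eventually merged
 * (IFF_MULTI_QUEUE, kernel 3.8+).  Needs CAP_NET_ADMIN. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Open one queue of a multiqueue tap device with the given name. */
static int open_tap_queue(const char *name)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;   /* each successful call is one queue / one descriptor */
}

int main(void)
{
    int q0 = open_tap_queue("mqtap0");
    int q1 = open_tap_queue("mqtap0");   /* same name: second queue */
    if (q0 < 0 || q1 < 0) {
        perror("open_tap_queue");
        return 1;
    }
    printf("two queues attached to mqtap0: fds %d and %d\n", q0, q1);
    /* Packets can now be read/written on either descriptor. */
    close(q0);
    close(q1);
    return 0;
}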
2008 Jul 12
4
[PATCH] tun: Fix/rewrite packet filtering logic
Please see the following thread to get some context on this http://marc.info/?l=linux-netdev&m=121564433018903&w=2 Basically the issue is that current multi-cast filtering stuff in the TUN/TAP driver is seriously broken. Original patch went in without proper review and ACK. It was broken and confusing to start with and subsequent patches broke it completely. To give you an idea of
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by: 1) Doing userspace copy inside vhost_net 2) Build XDP buff 3) Batch at most 64 (VHOST_NET_BATCH) XDP buffs and submit them once through msg_control during sendmsg(). 4) Underlying sockets can use XDP buffs directly when XDP is enabled, or build skb based on XDP
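A generic illustration of the batching policy in step 3, collect up to a fixed count and submit in one call; BATCH_MAX stands in for VHOST_NET_BATCH and flush_batch() stands in for the single sendmsg() submission. This is not the vhost_net code, just the policy it describes:

/* Batch up to BATCH_MAX buffers, then submit them all at once. */
#include <stddef.h>
#include <stdio.h>

#define BATCH_MAX 64   /* VHOST_NET_BATCH in the series */

struct batch {
    const void *bufs[BATCH_MAX];
    size_t      lens[BATCH_MAX];
    int         count;
};

/* Stand-in for "submit the whole batch through one sendmsg()". */
static void flush_batch(struct batch *b)
{
    if (b->count == 0)
        return;
    printf("submitting %d buffers in one call\n", b->count);
    b->count = 0;
}

/* Queue one buffer; flush when the batch is full. */
static void queue_buf(struct batch *b, const void *buf, size_t len)
{
    b->bufs[b->count] = buf;
    b->lens[b->count] = len;
    if (++b->count == BATCH_MAX)
        flush_batch(b);
}

int main(void)
{
    static char pkt[1500];
    struct batch b = { .count = 0 };

    for (int i = 0; i < 150; i++)      /* e.g. 150 packets from the guest */
        queue_buf(&b, pkt, sizeof(pkt));
    flush_batch(&b);                   /* submit the final partial batch */
    return 0;
}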
2018 Sep 06
2
[PATCH net-next 06/11] tuntap: split out XDP logic
On Thu, Sep 06, 2018 at 12:05:21PM +0800, Jason Wang wrote: > This patch splits out the XDP logic into a single function. This makes it > reusable by the XDP batching path in the following patch. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/net/tun.c | 84 ++++++++++++++++++++++++++++------------------- > 1 file changed, 51 insertions(+), 33
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load. So this patch tries to do some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, batch the packet temporarily in a linked list and submit them all once MSG_MORE is cleared. Tests
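A sketch of the rule described above: hold packets on a list while the caller keeps passing MSG_MORE, and deliver the whole list once it is cleared. Illustrative only; deliver_all() and rx_one() are made-up names, not drivers/net/tun.c:

/* Hold packets back while MSG_MORE is set; deliver them all once a
 * packet arrives without it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>     /* MSG_MORE */

struct pkt {
    struct pkt *next;
    size_t len;
};

static struct pkt *pending;     /* packets held back so far */

/* Stand-in for pushing a batch of packets into the host network stack. */
static void deliver_all(struct pkt *list)
{
    int n = 0;
    while (list) {
        struct pkt *next = list->next;
        free(list);
        n++;
        list = next;
    }
    printf("delivered %d packets in one go\n", n);
}

/* Called once per packet; 'flags' come from the sendmsg() caller. */
static void rx_one(size_t len, int flags)
{
    struct pkt *p = calloc(1, sizeof(*p));
    p->len = len;
    p->next = pending;
    pending = p;

    if (!(flags & MSG_MORE)) {   /* caller says nothing else is queued */
        deliver_all(pending);
        pending = NULL;
    }
}

int main(void)
{
    rx_one(1500, MSG_MORE);      /* held back */
    rx_one(1500, MSG_MORE);      /* held back */
    rx_one(60, 0);               /* MSG_MORE cleared: all three delivered */
    return 0;
}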
2016 Jun 17
0
[PATCH net-next V2] tun: introduce tx skb ring
On Wed, Jun 15, 2016 at 04:38:17PM +0800, Jason Wang wrote: > We used to queue tx packets in sk_receive_queue, which is less > efficient since it requires spinlocks to synchronize between producer > and consumer. > > This patch tries to address this by: > > - introduce a new mode which will be only enabled with IFF_TX_ARRAY > set and switch from sk_receive_queue to a
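The locking argument is the classic single-producer/single-consumer one: if the producer only ever writes the head index and the consumer only ever writes the tail index, no shared spinlock is needed. A minimal userspace C11 ring along those lines, as a sketch of the idea rather than the kernel's eventual ptr_ring/skb_array:

/* Minimal single-producer/single-consumer ring: the producer writes only
 * 'head', the consumer writes only 'tail', so no lock is shared between
 * them.  Userspace sketch, not the kernel's ptr_ring/skb_array. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 256   /* power of two */

struct spsc_ring {
    void *slots[RING_SIZE];
    _Atomic size_t head;   /* written by producer only */
    _Atomic size_t tail;   /* written by consumer only */
};

static bool ring_push(struct spsc_ring *r, void *item)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return false;                          /* full */
    r->slots[head & (RING_SIZE - 1)] = item;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

static void *ring_pop(struct spsc_ring *r)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail == head)
        return NULL;                           /* empty */
    void *item = r->slots[tail & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return item;
}

int main(void)
{
    static struct spsc_ring ring;
    int pkt = 42;
    ring_push(&ring, &pkt);
    return ring_pop(&ring) == &pkt ? 0 : 1;
}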
2018 Sep 06
1
[PATCH net-next 01/11] net: sock: introduce SOCK_XDP
On Thu, Sep 06, 2018 at 12:05:16PM +0800, Jason Wang wrote: > This patch introduces a new sock flag - SOCK_XDP. This will be used > for notifying the upper layer that XDP program is attached on the > lower socket, and requires for extra headroom. > > TUN will be the first user. > > Signed-off-by: Jason Wang <jasowang at redhat.com> In fact vhost is the 1st user,
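A kernel-style sketch (not a buildable module) of how a producer feeding a tun socket could react to the flag, roughly the use the series describes for vhost_net: if SOCK_XDP is set on the lower socket, reserve XDP_PACKET_HEADROOM so the attached XDP program can run in place. tx_headroom() is a made-up helper:

/* Sketch only: check the flag on the lower socket and reserve headroom
 * for the XDP program if one is attached. */
#include <net/sock.h>       /* sock_flag(), SOCK_XDP      */
#include <linux/bpf.h>      /* XDP_PACKET_HEADROOM (256)  */

static size_t tx_headroom(struct sock *sk)
{
    /* SOCK_XDP is set by tun when an XDP program is attached. */
    return sock_flag(sk, SOCK_XDP) ? XDP_PACKET_HEADROOM : 0;
}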
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load. So this patch tries to do some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, batch the packet temporarily in a linked list and submit them all once MSG_MORE is cleared. Tests