search for: tfiles

Displaying 20 results from an estimated 158 matches for "tfiles".

2016 Jun 17
0
[PATCH net-next V2] tun: introduce tx skb ring
> ...&& tun->flags & IFF_TX_ARRAY)
> +        skb_array_cleanup(&tfile->tx_array);
>      sock_put(&tfile->sk);
>  }
> }
> @@ -596,12 +608,12 @@ static void tun_detach_all(struct net_device *dev)
>      for (i = 0; i < n; i++) {
>          tfile = rtnl_dereference(tun->tfiles[i]);
>          /* Drop read queue */
> -        tun_queue_purge(tfile);
> +        tun_queue_purge(tun, tfile);
>          sock_put(&tfile->sk);
>      }
>      list_for_each_entry_safe(tfile, tmp, &tun->disabled, next) {
>          tun_enable_queue(tfile);
> -        tun_queue_purge(tfile);
> +        t...
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
...n->dev);
     }
+    if (tun && tun->flags & IFF_TX_ARRAY)
+        skb_array_cleanup(&tfile->tx_array);
     sock_put(&tfile->sk);
 }
}
@@ -596,12 +608,12 @@ static void tun_detach_all(struct net_device *dev)
     for (i = 0; i < n; i++) {
         tfile = rtnl_dereference(tun->tfiles[i]);
         /* Drop read queue */
-        tun_queue_purge(tfile);
+        tun_queue_purge(tun, tfile);
         sock_put(&tfile->sk);
     }
     list_for_each_entry_safe(tfile, tmp, &tun->disabled, next) {
         tun_enable_queue(tfile);
-        tun_queue_purge(tfile);
+        tun_queue_purge(tun, tfile);
         sock_put(&...
2016 Jun 30
0
[PATCH net-next V3 6/6] tun: switch to use skb array for tx
...>dev;
+    struct tun_file *tfile;
+    struct skb_array **arrays;
+    int n = tun->numqueues + tun->numdisabled;
+    int ret, i;
+
+    arrays = kmalloc(sizeof *arrays * n, GFP_KERNEL);
+    if (!arrays)
+        return -ENOMEM;
+
+    for (i = 0; i < tun->numqueues; i++) {
+        tfile = rtnl_dereference(tun->tfiles[i]);
+        arrays[i] = &tfile->tx_array;
+    }
+    list_for_each_entry(tfile, &tun->disabled, next)
+        arrays[i++] = &tfile->tx_array;
+
+    ret = skb_array_resize_multiple(arrays, n,
+                                    dev->tx_queue_len, GFP_KERNEL);
+
+    kfree(arrays);
+    return ret;
+}
+
+static int tun_device_e...
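For context, a rough sketch of how a resize helper like the one excerpted above is typically driven from a netdevice notifier when tx_queue_len changes; my_queue_resize() is a stand-in name for the per-device helper, and the notifier registration boilerplate is omitted:

#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Stand-in for the per-device resize helper excerpted above. */
extern int my_queue_resize(struct net_device *dev);

static int my_device_event(struct notifier_block *unused,
                           unsigned long event, void *ptr)
{
    struct net_device *dev = netdev_notifier_info_to_dev(ptr);

    switch (event) {
    case NETDEV_CHANGE_TX_QUEUE_LEN:
        /* Userspace changed tx_queue_len: resize every per-queue ring. */
        if (my_queue_resize(dev))
            return NOTIFY_BAD;
        break;
    default:
        break;
    }

    return NOTIFY_DONE;
}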
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the current single-queue tap cannot satisfy the requirement of scaling guest network performance as the number of vcpus increases. So the following series implements multi-queue support in tun/tap. To take advantage of this, a multi-queue capable driver and qemu are also needed. I just rebased the latest version of
2015 Nov 06
2
corrupt PACKAGES.gz?
Is it just me, or did a corrupt PACKAGES.gz file get installed in the bin/windows/contrib/3.2 directory of CRAN mirrors recently? gzfile() complains about it and Cygwin's gzip cannot decompress it. I tried the following

repos <- "https://cran.rstudio.com"
v <- "3.2"
pkgs.gz <- paste(sep="/", repos, "bin/windows/contrib", v,
2018 Sep 06
1
[PATCH net-next 01/11] net: sock: introduce SOCK_XDP
> ...bpf_prog *old_prog;
> +    int i;
>
>      old_prog = rtnl_dereference(tun->xdp_prog);
>      rcu_assign_pointer(tun->xdp_prog, prog);
>      if (old_prog)
>          bpf_prog_put(old_prog);
>
> +    for (i = 0; i < tun->numqueues; i++) {
> +        tfile = rtnl_dereference(tun->tfiles[i]);
> +        if (prog)
> +            sock_set_flag(&tfile->sk, SOCK_XDP);
> +        else
> +            sock_reset_flag(&tfile->sk, SOCK_XDP);
> +    }
> +    list_for_each_entry(tfile, &tun->disabled, next) {
> +        if (prog)
> +            sock_set_flag(&tfile->sk, SOCK_XDP);
> +        el...
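For reference, a small sketch of how a producer might consult the SOCK_XDP flag being set in this hunk; the helper name is made up and this is not the actual tun or vhost_net code, just the intended use of the flag:

#include <net/sock.h>

/* Illustrative only: a backend producer could check the flag set in the
 * hunk above to learn whether the peer has an XDP program attached. */
static bool peer_has_xdp(struct sock *sk)
{
    return sock_flag(sk, SOCK_XDP);
}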
2009 Apr 16
1
[1/2] tun: Only free a netdev when all tun descriptors are closed
On Thu, Apr 16, 2009 at 01:08:18AM -0000, Herbert Xu wrote:
> On Wed, Apr 15, 2009 at 10:38:34PM +0800, Herbert Xu wrote:
> >
> > So how about this? We replace the dev destructor with our own that
> > doesn't immediately call free_netdev. We only call free_netdev once
> > all tun fd's attached to the device have been closed.
>
> Here's the patch.
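A minimal sketch of the idea under discussion, assuming a plain attach counter and the 2009-era dev->destructor hook (the actual patch ties the lifetime to the tun file descriptors' own refcounting); the default destructor is replaced so free_netdev() only runs once the last descriptor is closed:

#include <linux/netdevice.h>
#include <linux/atomic.h>

/* Simplified private state: one counter for all attached tun fds. */
struct tun_priv_sketch {
    atomic_t attached;
};

/* Replacement destructor: intentionally does NOT call free_netdev(). */
static void tun_deferred_destructor(struct net_device *dev)
{
    /* The memory is released by the last fd close, see below. */
}

static void tun_setup_sketch(struct net_device *dev)
{
    struct tun_priv_sketch *priv = netdev_priv(dev);

    atomic_set(&priv->attached, 0);
    dev->destructor = tun_deferred_destructor;  /* 2009-era hook */
}

/* One increment per tun fd attached to the device. */
static void tun_attach_fd(struct net_device *dev)
{
    struct tun_priv_sketch *priv = netdev_priv(dev);

    atomic_inc(&priv->attached);
}

/* Called from each fd's release(); the final close frees the netdev. */
static void tun_detach_fd(struct net_device *dev)
{
    struct tun_priv_sketch *priv = netdev_priv(dev);

    if (atomic_dec_and_test(&priv->attached))
        free_netdev(dev);
}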
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load. So this patch does some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, packets are batched temporarily in a linked list and submitted all at once when MSG_MORE is cleared. Tests
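As a rough illustration of the pattern described here, a sketch assuming a per-queue sk_buff_head and a hypothetical flush helper; it shows the MSG_MORE-driven accumulate-then-flush idea, not the actual tun code:

#include <linux/types.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Hypothetical per-queue state: packets are parked here while the sender
 * keeps passing MSG_MORE. No locking shown; the sendmsg() path is assumed
 * to be the only writer of this queue. */
struct rx_batch {
    struct sk_buff_head queue;
};

static void rx_batch_init(struct rx_batch *b)
{
    __skb_queue_head_init(&b->queue);
}

/* Hand every parked packet to the host network stack in one go. */
static void rx_batch_flush(struct rx_batch *b)
{
    struct sk_buff *skb;

    while ((skb = __skb_dequeue(&b->queue)) != NULL)
        netif_receive_skb(skb);
}

/* Called once per packet from the sendmsg() path; more_hint mirrors
 * MSG_MORE from the caller's msg_flags. */
static void rx_batch_submit(struct rx_batch *b, struct sk_buff *skb,
                            bool more_hint)
{
    __skb_queue_tail(&b->queue, skb);
    if (!more_hint)
        rx_batch_flush(b);
}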
2004 Sep 17
0
[Bug 1791] New: using --delete with single directory mirroring doesn't delete files
https://bugzilla.samba.org/show_bug.cgi?id=1791

Summary: using --delete with single directory mirroring doesn't delete files
Product: rsync
Version: 2.6.3
Platform: x86
OS/Version: Linux
Status: NEW
Severity: major
Priority: P3
Component: core
AssignedTo: wayned@samba.org
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all: This series switches tun to use an skb array. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
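As a rough illustration of the conversion, a sketch of the producer/consumer pair using the skb_array API; the surrounding tun plumbing is omitted and the helper names are made up:

#include <linux/gfp.h>
#include <linux/skbuff.h>
#include <linux/skb_array.h>

/* Size the ring from tx_queue_len so the old limit is preserved. */
static int tx_ring_init(struct skb_array *ring, unsigned int tx_queue_len)
{
    return skb_array_init(ring, tx_queue_len, GFP_KERNEL);
}

/* Producer side: replaces skb_queue_tail(&sk->sk_receive_queue, skb).
 * Returns -ENOSPC when the ring is full. */
static int tx_ring_queue(struct skb_array *ring, struct sk_buff *skb)
{
    return skb_array_produce(ring, skb);
}

/* Consumer side: pops the next packet for read()/recvmsg(), or NULL if
 * empty. Producer and consumer use separate locks, which is what removes
 * the sk_receive_queue spinlock contention mentioned above. */
static struct sk_buff *tx_ring_dequeue(struct skb_array *ring)
{
    return skb_array_consume(ring);
}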
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load. So this patch does some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, packets are batched temporarily in a linked list and submitted all at once when MSG_MORE is cleared. Tests
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all: This series switches tun to use an skb array. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load. So this patch does some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, packets are batched temporarily in a linked list and submitted all at once when MSG_MORE is cleared. Tests
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load. So this patch does some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, packets are batched temporarily in a linked list and submitted all at once when MSG_MORE is cleared. Tests
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series implements tx batching support for vhost. This is done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement on guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
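A minimal sketch of the hinting idea on the vhost side, assuming a hypothetical more_tx_pending() helper that reports whether further descriptors are already queued (the real series derives this from the vring state); it only shows how MSG_MORE would be set before handing a packet to the backend socket:

#include <linux/types.h>
#include <linux/net.h>
#include <linux/socket.h>

/* Hypothetical helper: true when more tx descriptors are already queued
 * for this virtqueue. */
extern bool more_tx_pending(void *vq);

/* Send one packet, hinting the backend (e.g. tap) that it may keep
 * batching instead of flushing after every packet. */
static int tx_one(struct socket *sock, struct msghdr *msg, size_t len, void *vq)
{
    msg->msg_flags &= ~MSG_MORE;
    if (more_tx_pending(vq))
        msg->msg_flags |= MSG_MORE;

    return sock->ops->sendmsg(sock, msg, len);
}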