search for: tfile

Displaying 20 results from an estimated 158 matches for "tfile".

2016 Jun 17
0
[PATCH net-next V2] tun: introduce tx skb ring
...64 rx_packets; > @@ -167,6 +169,7 @@ struct tun_file { > }; > struct list_head next; > struct tun_struct *detached; > + struct skb_array tx_array; > }; > > struct tun_flow_entry { > @@ -513,8 +516,15 @@ static struct tun_struct *tun_enable_queue(struct tun_file *tfile) > return tun; > } > > -static void tun_queue_purge(struct tun_file *tfile) > +static void tun_queue_purge(struct tun_struct *tun, struct tun_file *tfile) > { > + struct sk_buff *skb; > + > + if (tun->flags & IFF_TX_ARRAY) { > + while ((skb = skb_array_co...
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
..._RING_SIZE 256 struct tun_pcpu_stats { u64 rx_packets; @@ -167,6 +169,7 @@ struct tun_file { }; struct list_head next; struct tun_struct *detached; + struct skb_array tx_array; }; struct tun_flow_entry { @@ -513,8 +516,15 @@ static struct tun_struct *tun_enable_queue(struct tun_file *tfile) return tun; } -static void tun_queue_purge(struct tun_file *tfile) +static void tun_queue_purge(struct tun_struct *tun, struct tun_file *tfile) { + struct sk_buff *skb; + + if (tun->flags & IFF_TX_ARRAY) { + while ((skb = skb_array_consume(&tfile->tx_array)) != NULL) + kfree...
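The fragments in the two V2 entries above show the core of the conversion: when the IFF_TX_ARRAY flag is set, pending packets live in a lockless skb_array ring instead of sk_receive_queue, so purging must drain the ring explicitly. A hedged reconstruction of the purge path, pieced together from the truncated diff (the else branch and the error-queue purge are assumptions based on the pre-existing driver code, not shown in the snippet):

static void tun_queue_purge(struct tun_struct *tun, struct tun_file *tfile)
{
        struct sk_buff *skb;

        if (tun->flags & IFF_TX_ARRAY) {
                /* Drain the lockless ring; each consume pops one skb. */
                while ((skb = skb_array_consume(&tfile->tx_array)) != NULL)
                        kfree_skb(skb);
        } else {
                /* Legacy path: packets still sit on the socket queue. */
                skb_queue_purge(&tfile->sk.sk_receive_queue);
        }

        skb_queue_purge(&tfile->sk.sk_error_queue);
}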
2016 Jun 30
0
[PATCH net-next V3 6/6] tun: switch to use skb array for tx
...t;linux/skb_array.h> #include <asm/uaccess.h> @@ -167,6 +168,7 @@ struct tun_file { }; struct list_head next; struct tun_struct *detached; + struct skb_array tx_array; }; struct tun_flow_entry { @@ -515,7 +517,11 @@ static struct tun_struct *tun_enable_queue(struct tun_file *tfile) static void tun_queue_purge(struct tun_file *tfile) { - skb_queue_purge(&tfile->sk.sk_receive_queue); + struct sk_buff *skb; + + while ((skb = skb_array_consume(&tfile->tx_array)) != NULL) + kfree_skb(skb); + skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -560,6 +5...
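By V3 the flag is gone and the ring is unconditional, as the purge hunk above shows. The producer side is not visible in the snippet; a sketch of what the matching xmit path plausibly looks like (locking details, flow steering and socket filtering from the real driver are omitted):

static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct tun_struct *tun = netdev_priv(dev);
        int txq = skb_get_queue_mapping(skb);
        struct tun_file *tfile;

        rcu_read_lock();
        tfile = rcu_dereference(tun->tfiles[txq]);

        /* Ring full means drop: the producer never blocks. */
        if (skb_array_produce(&tfile->tx_array, skb)) {
                dev->stats.tx_dropped++;
                kfree_skb(skb);
                rcu_read_unlock();
                return NET_XMIT_DROP;
        }

        /* Wake a reader blocked on the tun fd or the packet socket. */
        tfile->socket.sk->sk_data_ready(tfile->socket.sk);

        rcu_read_unlock();
        return NETDEV_TX_OK;
}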
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the current single-queue tap cannot satisfy the requirement of scaling guest network performance as the number of vcpus increases. So the following series implements multi-queue support in tun/tap. In order to take advantage of this, a multi-queue capable driver and qemu are also needed. I just rebased the latest version of
2015 Nov 06
2
corrupt PACKAGES.gz?
...RAN mirrors recently? gzfile() complains about it and Cygwin's gzip cannot decompress it. I tried the following repos <- "https://cran.rstudio.com" v <- "3.2" pkgs.gz <- paste(sep="/", repos, "bin/windows/contrib", v, "PACKAGES.gz") tfile <- tempfile(fileext=".gz") download.file(pkgs.gz, dest=tfile) r.gz <- readLines(gzfile(tfile, "r")) tail(system(paste("c:\\cygwin\\bin\\gzip -d - ", shQuote(tfile)), intern=TRUE)) and got > repos <- "https://cran.rstudio.com" > v <- "...
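The thread above is R rather than kernel code, but the failure mode is generic: a downloaded file that gzip tools reject as corrupt. For consistency with the rest of this page, here is a small standalone C program (using zlib; the file name is a placeholder) that checks whether an archive is a decodable gzip stream:

#include <stdio.h>
#include <zlib.h>

int main(void)
{
        gzFile f = gzopen("PACKAGES.gz", "rb");   /* placeholder path */
        char buf[4096];
        int n;

        if (!f) {
                perror("gzopen");
                return 1;
        }
        /* Inflate the whole stream, discarding the output; gzread()
         * returns -1 as soon as the stream turns out to be corrupt. */
        while ((n = gzread(f, buf, sizeof(buf))) > 0)
                ;
        if (n < 0 || !gzeof(f)) {
                fprintf(stderr, "corrupt gzip stream\n");
                gzclose(f);
                return 1;
        }
        gzclose(f);
        puts("gzip stream OK");
        return 0;
}

Build with zlib installed, e.g. cc check.c -lz.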
2018 Sep 06
1
[PATCH net-next 01/11] net: sock: introduce SOCK_XDP
...20 insertions(+) > > diff --git a/drivers/net/tun.c b/drivers/net/tun.c > index ebd07ad82431..2c548bd20393 100644 > --- a/drivers/net/tun.c > +++ b/drivers/net/tun.c > @@ -869,6 +869,9 @@ static int tun_attach(struct tun_struct *tun, struct file *file, > tun_napi_init(tun, tfile, napi); > } > > + if (rtnl_dereference(tun->xdp_prog)) > + sock_set_flag(&tfile->sk, SOCK_XDP); > + > tun_set_real_num_queues(tun); > > /* device is allowed to go away first, so no need to hold extra > @@ -1241,13 +1244,29 @@ static int tun_xdp_set(st...
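The hunk above sets a socket flag at attach time so later per-packet paths can cheaply test whether XDP may run on this queue. Pulled out into a hypothetical helper (the name tun_sock_mark_xdp is illustrative, not from the patch):

/* Called from tun_attach() under RTNL, hence rtnl_dereference(). */
static void tun_sock_mark_xdp(struct tun_struct *tun, struct tun_file *tfile)
{
        if (rtnl_dereference(tun->xdp_prog))
                sock_set_flag(&tfile->sk, SOCK_XDP);
}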
2009 Apr 16
1
[1/2] tun: Only free a netdev when all tun descriptors are closed
...ve been closed. > > This is done by using reference counting the attached tun file > descriptors. The refcount in tun->sk has been reappropriated > for this purpose since it was already being used for that, albeit > from the opposite angle. > > Note that we no longer zero tfile->tun since tun_get will return > NULL anyway after the refcount on tfile hits zero. Instead it > represents whether this device has ever been attached to a device. > > Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au> > > > Cheers, > > > diff --...
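The scheme described above is ordinary reference counting: every attached descriptor holds a reference on the shared device state, and only the last put frees it. A generic sketch with modern kernel primitives (refcount_t postdates this 2009 patch, which reused the refcount in tun->sk; the names here are illustrative):

#include <linux/refcount.h>
#include <linux/slab.h>

struct tun_dev {                        /* stand-in for the shared state */
        refcount_t users;               /* one ref per attached descriptor */
};

static void tun_dev_get(struct tun_dev *dev)
{
        refcount_inc(&dev->users);
}

static void tun_dev_put(struct tun_dev *dev)
{
        /* The device is freed only when the last descriptor closes. */
        if (refcount_dec_and_test(&dev->users))
                kfree(dev);
}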
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
...de <linux/uaccess.h> +static int rx_batched; +module_param(rx_batched, int, 0444); +MODULE_PARM_DESC(rx_batched, "Number of packets batched in rx"); + /* Uncomment to enable debugging */ /* #define TUN_DEBUG 1 */ @@ -522,6 +526,7 @@ static void tun_queue_purge(struct tun_file *tfile) while ((skb = skb_array_consume(&tfile->tx_array)) != NULL) kfree_skb(skb); + skb_queue_purge(&tfile->sk.sk_write_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1140,10 +1145,36 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb...
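The hunks above add a module parameter and park received packets on sk_write_queue (otherwise unused by tun) until a batch is ready. A hedged reconstruction of the flush logic spread across this and the later rx-batching postings (the real driver also handles the corner case where the queue fills exactly as a packet is added):

static void tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
                           int more)
{
        struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
        struct sk_buff_head process_queue;
        bool rcv = false;

        if (!rx_batched) {
                /* Batching disabled: deliver to the stack immediately. */
                local_bh_disable();
                netif_receive_skb(skb);
                local_bh_enable();
                return;
        }

        spin_lock(&queue->lock);
        __skb_queue_tail(queue, skb);
        /* Flush when the batch is full or the producer has no more data. */
        if (!more || skb_queue_len(queue) >= rx_batched) {
                __skb_queue_head_init(&process_queue);
                skb_queue_splice_tail_init(queue, &process_queue);
                rcv = true;
        }
        spin_unlock(&queue->lock);

        if (rcv) {
                local_bh_disable();
                while ((skb = __skb_dequeue(&process_queue)) != NULL)
                        netif_receive_skb(skb);
                local_bh_enable();
        }
}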
2004 Sep 17
0
[Bug 1791] New: using --delete with single directory mirroring doesn't delete files
...2 root root 4096 Sep 17 15:13 dira drwxr-xr-x 2 root root 4096 Sep 17 15:14 dirb ./dira: total 16 drwxr-xr-x 2 root root 4096 Sep 17 15:13 . drwxr-xr-x 4 root root 4096 Sep 17 15:13 .. -rw-r--r-- 1 root root 16 Sep 17 15:13 tfile.1 -rw-r--r-- 1 root root 16 Sep 17 15:13 tfile.2 ./dirb: total 16 drwxr-xr-x 2 root root 4096 Sep 17 15:14 . drwxr-xr-x 4 root root 4096 Sep 17 15:13 .. -rw-r--r-- 1 root root 16 Sep 17 15:13 tfile.1 -rw-r--r-- 1 root root...
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch tun to an skb array, which eliminates the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it as the length of sk_receive_queue. This is done through: - add the
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
...de <linux/uaccess.h> +static int rx_batched; +module_param(rx_batched, int, 0444); +MODULE_PARM_DESC(rx_batched, "Number of packets batched in rx"); + /* Uncomment to enable debugging */ /* #define TUN_DEBUG 1 */ @@ -522,6 +526,7 @@ static void tun_queue_purge(struct tun_file *tfile) while ((skb = skb_array_consume(&tfile->tx_array)) != NULL) kfree_skb(skb); + skb_queue_purge(&tfile->sk.sk_write_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1140,10 +1145,44 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb...
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch tun to an skb array, which eliminates the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it as the length of sk_receive_queue. This is done through: - add the
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...90ac 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -218,6 +218,7 @@ struct tun_struct { struct list_head disabled; void *security; u32 flow_count; + u32 rx_batched; struct tun_pcpu_stats __percpu *pcpu_stats; }; @@ -522,6 +523,7 @@ static void tun_queue_purge(struct tun_file *tfile) while ((skb = skb_array_consume(&tfile->tx_array)) != NULL) kfree_skb(skb); + skb_queue_purge(&tfile->sk.sk_write_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1139,10 +1141,46 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb...
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...3926 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -218,6 +218,7 @@ struct tun_struct { struct list_head disabled; void *security; u32 flow_count; + u32 rx_batched; struct tun_pcpu_stats __percpu *pcpu_stats; }; @@ -522,6 +523,7 @@ static void tun_queue_purge(struct tun_file *tfile) while ((skb = skb_array_consume(&tfile->tx_array)) != NULL) kfree_skb(skb); + skb_queue_purge(&tfile->sk.sk_write_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1140,10 +1142,45 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show an obvious improvement on guest pktgen over mlx4 (noqueue) on host: Mpps -+%
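The mechanism is simple at the sender: while more tx descriptors are still pending in the vring, tag each sendmsg with MSG_MORE so the backend keeps batching; clear it on the last packet to flush. A sketch of that hint (vhost_tx_one is an illustrative name, not the series' function):

static int vhost_tx_one(struct socket *sock, struct msghdr *msg,
                        bool more_pending)
{
        if (more_pending)
                msg->msg_flags |= MSG_MORE;     /* hint: keep batching */
        else
                msg->msg_flags &= ~MSG_MORE;    /* last one: flush now */

        return sock->ops->sendmsg(sock, msg, msg_data_left(msg));
}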