search for: tun_ethtool_op

Displaying 19 results from an estimated 19 matches for "tun_ethtool_op".

Did you mean: tun_ethtool_ops
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...struct ethtool_coalesce *ec)
+{
+	struct tun_struct *tun = netdev_priv(dev);
+
+	if (ec->rx_max_coalesced_frames > NAPI_POLL_WEIGHT)
+		tun->rx_batched = NAPI_POLL_WEIGHT;
+	else
+		tun->rx_batched = ec->rx_max_coalesced_frames;
+
+	return 0;
+}
+
 static const struct ethtool_ops tun_ethtool_ops = {
 	.get_settings	= tun_get_settings,
 	.get_drvinfo	= tun_get_drvinfo,
@@ -2445,6 +2507,8 @@ static const struct ethtool_ops tun_ethtool_ops = {
 	.set_msglevel	= tun_set_msglevel,
 	.get_link	= ethtool_op_get_link,
 	.get_ts_info	= ethtool_op_get_ts_info,
+	.get_coalesce	= tun_get_coalesce,...
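For context, the rx_batched knob added above is reachable from userspace through the standard ethtool coalescing interface. Below is a minimal sketch, assuming a tap device named tap0 and a batch size of 32 (both illustrative); the equivalent command-line form would be `ethtool -C tap0 rx-frames 32`.

	/* Minimal sketch: set rx_max_coalesced_frames on a tap device via the
	 * ETHTOOL_SCOALESCE ioctl.  Device name "tap0" and the value 32 are
	 * illustrative assumptions, not taken from the patch.  A real tool
	 * would read the current values with ETHTOOL_GCOALESCE first. */
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <linux/types.h>
	#include <linux/ethtool.h>
	#include <linux/sockios.h>

	int main(void)
	{
		struct ethtool_coalesce ecoal = { .cmd = ETHTOOL_SCOALESCE };
		struct ifreq ifr;
		int fd = socket(AF_INET, SOCK_DGRAM, 0);

		if (fd < 0) {
			perror("socket");
			return 1;
		}

		ecoal.rx_max_coalesced_frames = 32;	/* batch up to 32 skbs */

		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);
		ifr.ifr_data = (void *)&ecoal;

		if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
			perror("SIOCETHTOOL");

		close(fd);
		return 0;
	}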
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...(struct net_device *dev,
+			    struct ethtool_coalesce *ec)
+{
+	struct tun_struct *tun = netdev_priv(dev);
+
+	if (ec->rx_max_coalesced_frames > NAPI_POLL_WEIGHT)
+		return -EINVAL;
+
+	tun->rx_batched = ec->rx_max_coalesced_frames;
+
+	return 0;
+}
+
 static const struct ethtool_ops tun_ethtool_ops = {
 	.get_settings	= tun_get_settings,
 	.get_drvinfo	= tun_get_drvinfo,
@@ -2446,6 +2508,8 @@ static const struct ethtool_ops tun_ethtool_ops = {
 	.set_msglevel	= tun_set_msglevel,
 	.get_link	= ethtool_op_get_link,
 	.get_ts_info	= ethtool_op_get_ts_info,
+	.get_coalesce	= tun_get_coalesce,...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
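Below is a rough userspace model of the batching idea this cover letter describes, with made-up names and a made-up limit; it only illustrates the "queue while more is hinted, flush at the limit" behaviour, not the actual tun/vhost code paths.

	/* Conceptual model of the batching described above, not the actual
	 * tun code: the sender marks "more packets are coming" (the MSG_MORE
	 * hint) and the receiver queues packets on a local list, flushing
	 * only when the hint is absent or the batch limit is hit. */
	#include <stdbool.h>
	#include <stdio.h>

	#define BATCH_LIMIT 64		/* analogous to tun->rx_batched */

	struct pkt { int id; struct pkt *next; };

	struct batch {
		struct pkt *head, *tail;
		int count;
	};

	static void flush(struct batch *b)
	{
		for (struct pkt *p = b->head; p; p = p->next)
			printf("deliver pkt %d\n", p->id);	/* hand batch to the stack */
		b->head = b->tail = NULL;
		b->count = 0;
	}

	static void rx_one(struct batch *b, struct pkt *p, bool more)
	{
		p->next = NULL;
		if (b->tail)
			b->tail->next = p;
		else
			b->head = p;
		b->tail = p;
		b->count++;

		/* flush when the sender stops hinting MSG_MORE or the batch is full */
		if (!more || b->count >= BATCH_LIMIT)
			flush(b);
	}

	int main(void)
	{
		struct batch b = { 0 };
		struct pkt pkts[4] = { { .id = 0 }, { .id = 1 }, { .id = 2 }, { .id = 3 } };

		for (int i = 0; i < 4; i++)
			rx_one(&b, &pkts[i], /* more = */ i < 3);
		return 0;
	}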
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
...return -EINVAL;

So what should userspace do? Keep trying until it succeeds? I think it's better to just use NAPI_POLL_WEIGHT instead and DTRT here.

> +
> +	tun->rx_batched = ec->rx_max_coalesced_frames;
> +
> +	return 0;
> +}
> +
>  static const struct ethtool_ops tun_ethtool_ops = {
>  	.get_settings	= tun_get_settings,
>  	.get_drvinfo	= tun_get_drvinfo,
> @@ -2446,6 +2508,8 @@ static const struct ethtool_ops tun_ethtool_ops = {
>  	.set_msglevel	= tun_set_msglevel,
>  	.get_link	= ethtool_op_get_link,
>  	.get_ts_info	= ethtool_op_get_ts_info,
> +	....
2016 Jun 30
0
[PATCH net-next V3 6/6] tun: switch to use skb array for tx
...&tfile->tx_array);
+	tun_put(tun);
+
+	return ret;
+}
+
 /* Ops structure to mimic raw sockets with tun */
 static const struct proto_ops tun_socket_ops = {
+	.peek_len = tun_peek_len,
 	.sendmsg = tun_sendmsg,
 	.recvmsg = tun_recvmsg,
 };
@@ -2397,6 +2469,53 @@ static const struct ethtool_ops tun_ethtool_ops = {
 	.get_ts_info	= ethtool_op_get_ts_info,
 };

+static int tun_queue_resize(struct tun_struct *tun)
+{
+	struct net_device *dev = tun->dev;
+	struct tun_file *tfile;
+	struct skb_array **arrays;
+	int n = tun->numqueues + tun->numdisabled;
+	int ret, i;
+
+	arrays = kmalloc(sizeof *ar...
2009 Apr 16
1
[1/2] tun: Only free a netdev when all tun descriptors are closed
On Thu, Apr 16, 2009 at 01:08:18AM -0000, Herbert Xu wrote:
> On Wed, Apr 15, 2009 at 10:38:34PM +0800, Herbert Xu wrote:
> >
> > So how about this? We replace the dev destructor with our own that
> > doesn't immediately call free_netdev. We only call free_netdev once
> > all tun fd's attached to the device have been closed.
>
> Here's the patch.
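Below is a minimal sketch of the deferral pattern discussed in this thread, using invented names and a userspace stand-in for free_netdev: each attached fd holds a reference, and the device is only freed when the last one is dropped, rather than directly from the netdev destructor.

	/* Sketch of the deferred-free pattern (illustrative names, not the
	 * actual tun patch): every open fd takes a reference on the device,
	 * and the memory is released only when the last reference goes away. */
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct fake_netdev {
		atomic_int refcnt;	/* one per attached fd */
		char name[16];
	};

	static void fake_free_netdev(struct fake_netdev *dev)
	{
		printf("freeing %s\n", dev->name);
		free(dev);
	}

	static void netdev_hold(struct fake_netdev *dev)
	{
		atomic_fetch_add(&dev->refcnt, 1);
	}

	static void netdev_put(struct fake_netdev *dev)
	{
		/* the last close, not the unregister path, frees the device */
		if (atomic_fetch_sub(&dev->refcnt, 1) == 1)
			fake_free_netdev(dev);
	}

	int main(void)
	{
		struct fake_netdev *dev = calloc(1, sizeof(*dev));

		atomic_init(&dev->refcnt, 0);
		snprintf(dev->name, sizeof(dev->name), "tun0");
		netdev_hold(dev);	/* fd #1 attaches */
		netdev_hold(dev);	/* fd #2 attaches */
		netdev_put(dev);	/* fd #1 closes: device survives */
		netdev_put(dev);	/* fd #2 closes: device is freed */
		return 0;
	}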
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch tun to use an skb array. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
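Below is a rough userspace model of why the array helps: producer and consumer operate on opposite ends of a fixed ring and never touch the same index, so they stop contending on a single queue lock. This is a simplified lockless sketch with invented names, not the kernel's skb_array/ptr_ring implementation (which uses separate producer and consumer locks).

	/* Rough single-producer/single-consumer ring model.  Conceptual only;
	 * not the kernel skb_array API. */
	#include <stdatomic.h>
	#include <stddef.h>
	#include <stdio.h>

	#define RING_SIZE 256		/* must be a power of two here */

	struct ring {
		void *slot[RING_SIZE];
		atomic_size_t head;	/* next slot the producer writes */
		atomic_size_t tail;	/* next slot the consumer reads  */
	};

	static int ring_produce(struct ring *r, void *ptr)
	{
		size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
		size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

		if (head - tail == RING_SIZE)
			return -1;			/* full */
		r->slot[head & (RING_SIZE - 1)] = ptr;
		atomic_store_explicit(&r->head, head + 1, memory_order_release);
		return 0;
	}

	static void *ring_consume(struct ring *r)
	{
		size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
		size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
		void *ptr;

		if (tail == head)
			return NULL;			/* empty */
		ptr = r->slot[tail & (RING_SIZE - 1)];
		atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
		return ptr;
	}

	int main(void)
	{
		static struct ring r;
		int x = 42;

		ring_produce(&r, &x);
		printf("consumed %d\n", *(int *)ring_consume(&r));
		return 0;
	}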
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch tun to use an skb array. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
2011 Dec 05
8
[net-next RFC PATCH 0/5] Series short description
multiple queue virtio-net: flow steering through host/guest cooperation Hello all: This is a rough series that adds guest/host cooperation for flow steering support, based on Krish Kumar's multiple queue virtio-net driver patch 3/3 (http://lwn.net/Articles/467283/). The idea is simple: the backend passes the rxhash to the guest, and the guest tells the backend the hash-to-queue mapping when
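Below is a toy sketch of the cooperation this cover letter describes: the guest programs a hash-to-queue entry for flows it cares about, and the backend consults the table to pick an rx queue. Table size, function names, and the example hash are all illustrative, not taken from the series.

	/* Toy model of rxhash-based flow steering; illustrative only. */
	#include <stdint.h>
	#include <stdio.h>

	#define TABLE_SIZE 256
	#define NUM_QUEUES 4

	static uint8_t flow_table[TABLE_SIZE];	/* indexed by rxhash % TABLE_SIZE */

	/* guest side: "packets of this flow should land on my queue" */
	static void guest_program_flow(uint32_t rxhash, uint8_t queue)
	{
		flow_table[rxhash % TABLE_SIZE] = queue % NUM_QUEUES;
	}

	/* backend side: pick the rx queue for an incoming packet */
	static uint8_t backend_pick_queue(uint32_t rxhash)
	{
		return flow_table[rxhash % TABLE_SIZE];
	}

	int main(void)
	{
		uint32_t rxhash = 0xdeadbeef;	/* hash computed by the NIC/backend */

		guest_program_flow(rxhash, 2);
		printf("queue %u\n", backend_pick_queue(rxhash));
		return 0;
	}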
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the current single-queue tap cannot satisfy the requirement of scaling guest network performance as the number of vcpus increases. So the following series implements multiple queue support in tun/tap. In order to take advantage of this, a multi-queue capable driver and qemu are also needed. I just rebased the latest version of