Displaying 14 results from an estimated 14 matches for "to_dev".
2012 Oct 12 · 13 · Dom0 physical networking/swiotlb/something issue in 3.7-rc1
Hi Konrad,
The following patch causes fairly large packet loss when transmitting
from dom0 to the physical network, at least with my tg3 hardware, but I
assume it can impact anything which uses this interface.
I suspect that the issue is that the compound pages allocated in this
way are not backed by contiguous mfns and so things fall apart when the
driver tries to do DMA.
However I...
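To make the suspicion concrete: under Xen, a compound page whose PFNs are contiguous is not guaranteed to be backed by contiguous MFNs, so a driver that DMAs across the whole buffer can end up touching the wrong machine memory. A minimal sketch of the invariant that would have to hold (illustrative only, not code from this thread; the helper name is mine, and pfn_to_mfn() is the Xen x86 accessor):

    #include <linux/mm.h>
    #include <asm/xen/page.h>   /* pfn_to_mfn(), Xen x86 only */

    /* Illustrative check: a compound page is only safe to hand to a
     * driver doing contiguous DMA if its machine frames are contiguous,
     * not just its pseudo-physical frames. */
    static bool mfns_are_contiguous(struct page *page)
    {
            unsigned long pfn = page_to_pfn(page);
            unsigned long n = 1UL << compound_order(page);
            unsigned long i;

            for (i = 1; i < n; i++)
                    if (pfn_to_mfn(pfn + i) != pfn_to_mfn(pfn) + i)
                            return false;
            return true;
    }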
2014 Jan 16 · 0 · [PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...igned int txq);
-#ifdef CONFIG_RPS
+#ifdef CONFIG_SYSFS
int netif_set_real_num_rx_queues(struct net_device *dev, unsigned int rxq);
#else
static inline int netif_set_real_num_rx_queues(struct net_device *dev,
@@ -2393,7 +2408,7 @@ static inline int netif_copy_real_num_queues(struct net_device *to_dev,
from_dev->real_num_tx_queues);
if (err)
return err;
-#ifdef CONFIG_RPS
+#ifdef CONFIG_SYSFS
return netif_set_real_num_rx_queues(to_dev,
from_dev->real_num_rx_queues);
#else
@@ -2401,6 +2416,23 @@ static inline int netif_copy_real_num_queues(struct net_device *to_de...
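Pieced together from the hunk above and the surrounding definition in include/linux/netdevice.h, the helper after this change would read roughly as follows (a reconstruction for context, not verbatim patch text):

    static inline int netif_copy_real_num_queues(struct net_device *to_dev,
                                                 const struct net_device *from_dev)
    {
            int err;

            err = netif_set_real_num_tx_queues(to_dev,
                                               from_dev->real_num_tx_queues);
            if (err)
                    return err;
    #ifdef CONFIG_SYSFS
            /* rx queue sysfs files now exist even without RPS, so the
             * rx count is copied whenever sysfs is enabled */
            return netif_set_real_num_rx_queues(to_dev,
                                                from_dev->real_num_rx_queues);
    #else
            return 0;
    #endif
    }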
2014 Jan 16 · 0 · [PATCH net-next v4 4/6] net-sysfs: add support for device-specific rx queue sysfs attributes
...igned int txq);
-#ifdef CONFIG_RPS
+#ifdef CONFIG_SYSFS
int netif_set_real_num_rx_queues(struct net_device *dev, unsigned int rxq);
#else
static inline int netif_set_real_num_rx_queues(struct net_device *dev,
@@ -2393,7 +2408,7 @@ static inline int netif_copy_real_num_queues(struct net_device *to_dev,
from_dev->real_num_tx_queues);
if (err)
return err;
-#ifdef CONFIG_RPS
+#ifdef CONFIG_SYSFS
return netif_set_real_num_rx_queues(to_dev,
from_dev->real_num_rx_queues);
#else
@@ -2401,6 +2416,18 @@ static inline int netif_copy_real_num_queues(struct net_device *to_de...
2014 Jan 16 · 2 · [PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
On Thu, 2014-01-16 at 01:38 -0800, Michael Dalton wrote:
[...]
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
[...]
> @@ -2401,6 +2416,23 @@ static inline int netif_copy_real_num_queues(struct net_device *to_dev,
> #endif
> }
>
> +#ifdef CONFIG_SYSFS
> +static inline unsigned int get_netdev_rx_queue_index(
> + struct netdev_rx_queue *queue)
> +{
> + struct net_device *dev = queue->dev;
> + int i;
> +
> + for (i = 0; i < dev->num_rx_queues; i++)
> + if (que...
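The quoted hunk is cut off; assuming the patch moves the pre-existing helper from net/core/net-sysfs.c into the header unchanged, the loop plausibly completes like this (a reconstruction, not the quoted patch text):

            for (i = 0; i < dev->num_rx_queues; i++)
                    if (queue == &dev->_rx[i])  /* find this queue in the rx array */
                            break;

            BUG_ON(i >= dev->num_rx_queues);    /* queue must belong to dev */

            return i;
    }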
2014 Jan 16 · 6 · [PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation...
2014 Jan 16 · 13 · [PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation...
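A hedged sketch of the fallback behavior described here (illustrative, not the patch's code; the helper name and starting order are assumptions):

    #include <linux/gfp.h>

    /* Try a high-order allocation with flags that fail fast, then step
     * down order by order until a plain order-0 alloc. */
    static struct page *frag_alloc_fallback(gfp_t gfp, unsigned int max_order)
    {
            unsigned int order;

            for (order = max_order; ; order--) {
                    gfp_t flags = gfp;
                    struct page *page;

                    if (order)  /* high orders are opportunistic: no warn, no retry */
                            flags |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
                    page = alloc_pages(flags, order);
                    if (page || !order)
                            return page;  /* NULL only if even order-0 failed */
            }
    }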
2014 Jan 17 · 7 · [PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g, PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This...
2014 Jan 17 · 7 · [PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g, PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This...
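In the spirit of the auto-tuning this series describes, per-receive-queue buffer sizing can be driven by an EWMA of recent packet lengths. A hedged sketch follows (the function name, the 1500-byte floor, and the alignment choice are illustrative assumptions, not the series' exact code):

    #include <linux/average.h>  /* struct ewma, ewma_add(), ewma_read() */
    #include <linux/cache.h>    /* L1_CACHE_BYTES */
    #include <linux/kernel.h>   /* clamp_t(), ALIGN() */

    /* Size the next mergeable receive buffer from the running average of
     * received packet lengths, clamped between an MTU-sized floor and
     * (almost) a page, so small-packet workloads keep small truesize
     * while large-packet workloads get big buffers. */
    static unsigned int next_mergeable_buf_len(struct ewma *avg_pkt_len,
                                               unsigned int hdr_len)
    {
            unsigned int len;

            len = hdr_len + clamp_t(unsigned int, ewma_read(avg_pkt_len),
                                    1500, PAGE_SIZE - hdr_len);
            return ALIGN(len, L1_CACHE_BYTES);
    }

    /* On each received packet: ewma_add(avg_pkt_len, pkt_len); */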
2008 Jul 07 · 3 · [Bridge] [RFC PATCH 0/2] Allow full bridge configuration via sysfs
Right now, you can configure most bridge device parameters via sysfs.
However, you cannot do either of the following:
- add or remove bridge interfaces
- add or remove physical interfaces from a bridge
The attached patch set rectifies this. With this patch set, brctl
(theoretically) becomes completely optional, much like ifenslave is
now for bonding. (In fact, the idea for this patch, and the syntax
used herein, is...