Displaying 20 results from an estimated 21 matches for "tx_count".
2014 Apr 18
3
[PATCH] virtio_net: zero is an invalid queue_pairs number
...n(+), 1 deletion(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7b68746..8a852b5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1285,7 +1285,7 @@ static int virtnet_set_channels(struct net_device *dev,
if (channels->rx_count || channels->tx_count || channels->other_count)
return -EINVAL;
- if (queue_pairs > vi->max_queue_pairs)
+ if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0)
return -EINVAL;
get_online_cpus();
--
1.9.0
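For context, queue_pairs here is taken from the ethtool channel request (the
driver supports combined channels only), so before this one-liner a request
for zero channels passed the range check. A compact restatement of the check
as it reads after the patch; check_channels() is a hypothetical helper for
illustration, in the driver the checks live inline in virtnet_set_channels():

    /* Sketch: validation portion of virtnet_set_channels() after the fix. */
    static int check_channels(struct virtnet_info *vi,
                              struct ethtool_channels *channels,
                              u16 queue_pairs)
    {
            /* Separate rx/tx or "other" channels are not supported. */
            if (channels->rx_count || channels->tx_count || channels->other_count)
                    return -EINVAL;

            /* Out of range, now including the previously accepted zero. */
            if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0)
                    return -EINVAL;

            return 0;
    }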
2013 Jun 12
26
Interesting observation with network event notification and batching
...average slots
per batch 25.
After the hack, which adds 1024 HYPERVISOR_xen_version hypercalls (each just
context-switches into the hypervisor) in the Tx path: throughput 4.4 Gb/s,
average slots per batch 25.
Average slots per batch is calculated as follows:
1. count total_slots, the number of slots processed from start of day
2. count tx_count, the number of times the tx_action function gets invoked
3. avg_slots_per_tx = total_slots / tx_count
These counter-intuitive figures imply that there is something wrong with
the current batching mechanism. Probably we need to fine-tune the
batching behavior for network and play with event poin...
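To make the metric in steps 1-3 concrete, a minimal sketch of the accounting;
total_slots and tx_count are illustrative counters, not the actual netback
variables:

    /* Running counters since start of day. */
    static unsigned long total_slots; /* all slots ever processed */
    static unsigned long tx_count;    /* invocations of tx_action() */

    static void account_tx_batch(unsigned int slots_in_batch)
    {
            total_slots += slots_in_batch;
            tx_count++;
    }

    static unsigned long avg_slots_per_tx(void)
    {
            /* Step 3: avg_slots_per_tx = total_slots / tx_count. */
            return tx_count ? total_slots / tx_count : 0;
    }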
2014 Apr 21
1
[PATCH] virtio_net: zero is an invalid queue_pairs number
.../virtio_net.c b/drivers/net/virtio_net.c
>> index 7b68746..8a852b5 100644
>> --- a/drivers/net/virtio_net.c
>> +++ b/drivers/net/virtio_net.c
>> @@ -1285,7 +1285,7 @@ static int virtnet_set_channels(struct net_device *dev,
>> if (channels->rx_count || channels->tx_count || channels->other_count)
>> return -EINVAL;
>>
>> - if (queue_pairs > vi->max_queue_pairs)
>> + if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0)
>> return -EINVAL;
>>
>> get_online_cpus();
> Acked-by: Jason Wang <...
2014 Apr 21
0
[PATCH] virtio_net: zero is an invalid queue_pairs number
...--git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 7b68746..8a852b5 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1285,7 +1285,7 @@ static int virtnet_set_channels(struct net_device *dev,
> if (channels->rx_count || channels->tx_count || channels->other_count)
> return -EINVAL;
>
> - if (queue_pairs > vi->max_queue_pairs)
> + if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0)
> return -EINVAL;
>
> get_online_cpus();
Acked-by: Jason Wang <jasowang at redhat.com>
2014 Jan 07
0
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...;
+ u16 old_queue_pairs = vi->curr_queue_pairs;
+ int err, i;
/* We don't support separate rx/tx channels.
* We don't allow setting 'other' channels.
@@ -1288,14 +1426,21 @@ static int virtnet_set_channels(struct net_device *dev,
if (channels->rx_count || channels->tx_count || channels->other_count)
return -EINVAL;
- if (queue_pairs > vi->max_queue_pairs)
+ if (new_queue_pairs > vi->max_queue_pairs)
return -EINVAL;
get_online_cpus();
- err = virtnet_set_queues(vi, queue_pairs);
+ err = virtnet_set_queues(vi, new_queue_pairs);
if (!err) {
-...
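The hunk saves old_queue_pairs before attempting the change, which suggests
the (truncated) success path needs both the old and new counts, presumably to
adjust per-queue debugfs state. A speculative sketch; update_debugfs_entries()
is a hypothetical helper invented for illustration, not a function from the
patch:

    err = virtnet_set_queues(vi, new_queue_pairs);
    if (!err) {
            /* Hypothetical: resize per-queue debugfs state now that the
             * device has accepted the new queue pair count. */
            update_debugfs_entries(vi, old_queue_pairs, new_queue_pairs);
    }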
2012 Nov 27
4
[net-next rfc v7 0/3] Multiqueue virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found in
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
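The snippet is truncated here; for reference, a plausible invocation in the
multiqueue syntax that later landed in upstream QEMU (the prototype tree above
may have used different flags), with two queue pairs and vectors = 2*queues + 2:

    qemu-system-x86_64 ... \
        -netdev tap,id=hn0,queues=2,vhost=on \
        -device virtio-net-pci,netdev=hn0,mq=on,vectors=6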
2012 Dec 04
3
[PATCH net-next 0/3] Multiqueue support for virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found in
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
2012 Dec 05
3
[PATCH net-next v2 0/3] Multiqueue support in virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found in
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
2012 Dec 07
6
[PATCH net-next v3 0/3] Multiqueue support in virtio-net
Hi all:
This series is an updated version (hopefully the final one) of multiqueue
(VIRTIO_NET_F_MQ) support in the virtio-net driver. All previous comments were
addressed; the work is based on Krishna Kumar's work to let virtio-net use
multiple rx/tx queues for packet reception and transmission. Performance
tests show the aggregate latency improved greatly, but there may be some
regression
2014 Jan 07
10
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
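A minimal sketch of the fallback described above, modeled on the shape
skb_page_frag_refill() eventually took upstream; SKB_FRAG_PAGE_ORDER and the
exact gfp flag choices are assumptions, not quoted from this excerpt:

    /* Try the largest order first, then fall back toward order-0. */
    for (order = SKB_FRAG_PAGE_ORDER; order >= 0; order--) {
            gfp_t gfp = gfp_mask;

            if (order)
                    gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
            pfrag->page = alloc_pages(gfp, order);
            if (pfrag->page) {
                    pfrag->size = PAGE_SIZE << order;
                    return true;
            }
    }
    return false; /* even an order-0 allocation failed */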
2012 Jul 05
14
[net-next RFC V5 0/5] Multiqueue virtio-net
Hello All:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Test Environment:
- Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes
- Two directly connected 82599 NICs
Test Summary:
- Highlights: huge improvements on TCP_RR
2012 Oct 30
6
[rfc net-next v6 0/3] Multiqueue virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Changes from v5:
- Align the implementation with the RFC spec update v4
- Switch between single-queue and multiqueue mode without a reset
- Remove the 256 limitation