Displaying 20 results from an estimated 20000 matches similar to: "[net-next rfc v7 0/3] Multiqueue virtio-net"
2012 Dec 04
3
[PATCH net-next 0/3] Multiqueue support for virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found at
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
2012 Dec 05
3
[PATCH net-next v2 0/3] Multiqueue support in virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found at
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
2012 Dec 07
6
[PATCH net-next v3 0/3] Multiqueue support in virtio-net
Hi all:
This series is an updated version (hopefully the final version) of multiqueue
(VIRTIO_NET_F_MQ) support in the virtio-net driver. All previous comments were
addressed; the work is based on Krishna Kumar's work to let virtio-net use
multiple rx/tx queues for packet reception and transmission. Performance
tests show the aggregate latency improved greatly, but there may be some
regression
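With VIRTIO_NET_F_MQ negotiated, the guest driver tells the device how many
TX/RX virtqueue pairs to enable by sending a command on the control virtqueue.
Below is a minimal sketch of that command layout; the struct layouts and
constants mirror the virtio-net spec / UAPI header, while fill_mq_command and
the flat buffer are illustrative only (the real driver hands the header and
payload to the control virtqueue as scatterlist entries):

#include <stdint.h>
#include <string.h>

/* Control virtqueue command header and MQ payload (mirrors
 * struct virtio_net_ctrl_hdr / struct virtio_net_ctrl_mq). */
struct virtio_net_ctrl_hdr {
    uint8_t class;   /* command class, e.g. VIRTIO_NET_CTRL_MQ */
    uint8_t cmd;     /* command within the class */
};

struct virtio_net_ctrl_mq {
    uint16_t virtqueue_pairs;   /* little-endian on the wire */
};

#define VIRTIO_NET_CTRL_MQ              4
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET 0
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN 1
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX 0x8000

/* Illustrative helper: build the "use N queue pairs" command that the
 * driver issues after feature negotiation (endian conversion omitted). */
static size_t fill_mq_command(uint8_t *buf, uint16_t queue_pairs)
{
    struct virtio_net_ctrl_hdr hdr = {
        .class = VIRTIO_NET_CTRL_MQ,
        .cmd   = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET,
    };
    struct virtio_net_ctrl_mq mq = { .virtqueue_pairs = queue_pairs };

    memcpy(buf, &hdr, sizeof(hdr));
    memcpy(buf + sizeof(hdr), &mq, sizeof(mq));
    return sizeof(hdr) + sizeof(mq);
}

The device answers each control command with a single VIRTIO_NET_OK/VIRTIO_NET_ERR
ack byte, and queue_pairs must stay within the advertised
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN..MAX range.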
2012 Oct 30
6
[rfc net-next v6 0/3] Multiqueue virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Changes from v5:
- Align the implementation with the RFC spec update v4
- Switch between single queue and multiqueue mode without a reset
- Remove the 256 limitation
2012 Jun 25
8
[net-next RFC V4 PATCH 0/4] Multiqueue virtio-net
Hello All:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Test Environment:
- Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes
- Two directly connected 82599 NICs
Test Summary:
- Highlights: huge improvements on TCP_RR
2012 Jul 05
14
[net-next RFC V5 0/5] Multiqueue virtio-net
Hello All:
This series is an updated version of the multiqueue virtio-net driver based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Test Environment:
- Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes
- Two directly connected 82599 NICs
Test Summary:
- Highlights: huge improvements on TCP_RR
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers,
the current single-queue tap cannot satisfy the
requirement of scaling guest network performance as the
number of vcpus increases. So the following series
implements multiple queue support in tun/tap.
In order to take advantage of this, a multi-queue capable
driver and qemu are also needed. I just rebased the latest
version of
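The multiqueue tun/tap interface that was eventually merged attaches several
file descriptors to one tap device, one per queue, by passing IFF_MULTI_QUEUE
to TUNSETIFF. A user-space sketch of that usage (tun_alloc_queue is an
illustrative name, and this RFC series predates the final interface, so the
details may differ from these patches):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Open one queue of a multiqueue tap device called "name".
 * Calling this N times with the same name yields N fds, one per queue. */
static int tun_alloc_queue(const char *name)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);

    if (fd < 0)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    /* TAP frames without the packet-info header; attach as one queue. */
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

Each returned fd can then be served by its own vhost worker or qemu thread,
which is what lets a multiqueue virtio-net backend scale with the number of
guest vcpus.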
2011 Nov 11
10
[RFC] [ver3 PATCH 0/6] Implement multiqueue virtio-net
This patch series resurrects the earlier multiple TX/RX queues
functionality for virtio_net, and addresses the issues pointed
out. It also includes an API to share IRQs, e.g. amongst the
TX vqs.
I plan to run TCP/UDP STREAM and RR tests for local->host and
local->remote, and send the results in the next couple of days.
patch #1: Introduce VIRTIO_NET_F_MULTIQUEUE
patch #2: Move
2017 Apr 02
5
[PATCH net-next 0/3] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com>
Add NAPI for virtio-net transmit completion processing.
Based on previous patchsets by Jason Wang:
[RFC V7 PATCH 0/7] enable tx interrupts for virtio-net
http://lkml.iu.edu/hypermail/linux/kernel/1505.3/00245.html
Changes:
RFC -> v1:
- dropped vhost interrupt moderation patch:
not needed and likely expensive at light
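The core of the change is to reclaim completed transmit buffers from a NAPI
poll callback rather than opportunistically from the next xmit. A rough
sketch of such a poll handler, assuming a per-queue structure holding a
virtqueue and a napi_struct (my_tx_queue and my_tx_poll are illustrative
names, not the code from this patchset):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

struct my_tx_queue {                 /* illustrative per-queue state */
    struct virtqueue *vq;
    struct napi_struct napi;
};

/* NAPI poll for TX completions: free sent skbs, then re-arm the
 * virtqueue callback once the backlog is drained. */
static int my_tx_poll(struct napi_struct *napi, int budget)
{
    struct my_tx_queue *txq = container_of(napi, struct my_tx_queue, napi);
    struct sk_buff *skb;
    unsigned int len;

    while ((skb = virtqueue_get_buf(txq->vq, &len)) != NULL)
        napi_consume_skb(skb, budget);   /* batched free in NAPI context */

    /* TX cleanup is cheap, so finish in one pass and re-enable interrupts. */
    if (napi_complete_done(napi, 0))
        virtqueue_enable_cb(txq->vq);

    return 0;
}

A complete implementation also handles the race where completions arrive right
after the callback is re-enabled, and wakes the TX queue if it had been stopped
for lack of descriptors.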
2014 Jan 07
10
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
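The fallback described above can be sketched as a simple loop: try the largest
order first with flags that make failure cheap and quiet, then step down until
an order-0 allocation is attempted. This mirrors the approach, not the exact
skb_page_frag_refill code; the maximum order and helper name are illustrative:

#include <linux/gfp.h>
#include <linux/mm.h>

#define FRAG_ALLOC_MAX_ORDER 3   /* assumed cap, e.g. 32KB fragments */

/* Try progressively smaller page orders until an allocation succeeds;
 * order-0 is the final attempt with the caller's original gfp mask. */
static struct page *frag_alloc_fallback(gfp_t gfp, unsigned int *order_out)
{
    unsigned int order;

    for (order = FRAG_ALLOC_MAX_ORDER; ; order--) {
        gfp_t mask = gfp;
        struct page *page;

        if (order)
            /* High-order attempts must fail fast: no retries, no warnings,
             * and allocate a compound page. */
            mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;

        page = alloc_pages(mask, order);
        if (page) {
            *order_out = order;
            return page;
        }
        if (!order)
            return NULL;
    }
}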