search for: num_queue_pairs

Displaying 20 results from an estimated 20 matches for "num_queue_pairs".

2012 Jul 05
14
[net-next RFC V5 0/5] Multiqueue virtio-net
Hello All: This series is an updated version of the multiqueue virtio-net driver based on Krishna Kumar's work, letting virtio-net use multiple rx/tx queues for packet reception and transmission. Please review and comment. Test Environment: - Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes - Two directly connected 82599 NICs Test Summary: - Highlights: huge improvements on TCP_RR
2012 Jun 25
8
[net-next RFC V4 PATCH 0/4] Multiqueue virtio-net
Hello All: This series is an updated version of the multiqueue virtio-net driver based on Krishna Kumar's work, letting virtio-net use multiple rx/tx queues for packet reception and transmission. Please review and comment. Test Environment: - Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes - Two directly connected 82599 NICs Test Summary: - Highlights: huge improvements on TCP_RR
2011 Nov 11
10
[RFC] [ver3 PATCH 0/6] Implement multiqueue virtio-net
...2: Move 'num_queues' to virtqueue patch #3: virtio_net driver changes patch #4: vhost_net changes patch #5: Implement find_vqs_irq() patch #6: Convert virtio_net driver to use find_vqs_irq() Changes from rev2: Michael: ------- 1. Added functions to handle setting RX/TX/CTRL vq's. 2. num_queue_pairs instead of numtxqs. 3. Experimental support for fewer irq's in find_vqs. Rusty: ------ 4. Cleaned up some existing "while (1)". 5. rvq/svq and rx_sg/tx_sg changed to vq and sg respectively. 6. Cleaned up some "#if 1" code. Issue when using patch5: ------------------------...
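For reference, num_queue_pairs also implies a virtqueue numbering. A minimal sketch of the interleaved layout that multiqueue virtio-net eventually settled on (receiveq at 2*pair, transmitq at 2*pair + 1, control queue after all data queues); the helper names below are illustrative, not taken from the patches:

/* Illustrative helpers: map a 0-based queue pair index to the interleaved
 * rx/tx virtqueue numbering used by multiqueue virtio-net. */
static inline unsigned int vq_rx_index(unsigned int pair)
{
        return 2 * pair;
}

static inline unsigned int vq_tx_index(unsigned int pair)
{
        return 2 * pair + 1;
}

static inline unsigned int vq_ctrl_index(unsigned int num_queue_pairs)
{
        return 2 * num_queue_pairs;     /* control vq follows all data vqs */
}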
2012 Oct 30
6
[rfc net-next v6 0/3] Multiqueue virtio-net
Hi all: This series is an updated version of the multiqueue virtio-net driver based on Krishna Kumar's work, letting virtio-net use multiple rx/tx queues for packet reception and transmission. Please review and comment. Changes from v5: - Align the implementation with the RFC spec update v4 - Switch the mode between single mode and multiqueue mode without reset - Remove the 256 limitation
2012 Jun 26
0
[rfc] virtio-spec: introduce VIRTIO_NET_F_MULTIQUEUE
...s set. + +\change_unchanged \begin_inset listings inline false @@ -4076,6 +4160,17 @@ struct virtio_net_config { \begin_layout Plain Layout u16 status; +\change_inserted 2090695081 1340692955 + +\end_layout + +\begin_layout Plain Layout + +\change_inserted 2090695081 1340692962 + + u16 num_queue_pairs; +\change_unchanged + \end_layout \begin_layout Plain Layout @@ -4527,7 +4622,7 @@ O features are used, the Guest will need to accept packets of up to 65550 So unless VIRTIO_NET_F_MRG_RXBUF is negotiated, every buffer in the receive queue needs to be at least this length \begin_inset Foot...
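Stripped of the LyX markup, the configuration-space change in this excerpt adds a single field after status. A sketch of the resulting layout, assuming the pre-existing fields from the virtio-net spec (later spec revisions renamed the field to max_virtqueue_pairs under VIRTIO_NET_F_MQ):

/* Stand-ins for the spec's fixed-width types. */
typedef unsigned char  u8;
typedef unsigned short u16;

struct virtio_net_config {
        u8  mac[6];             /* valid if VIRTIO_NET_F_MAC is negotiated */
        u16 status;             /* valid if VIRTIO_NET_F_STATUS is negotiated */
        u16 num_queue_pairs;    /* new field proposed by this RFC */
};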
2011 Dec 05
8
[net-next RFC PATCH 0/5] Series short description
multiple queue virtio-net: flow steering through host/guest cooperation Hello all: This is a rough series that adds guest/host cooperation for flow steering, based on Krishna Kumar's multiple queue virtio-net driver patch 3/3 (http://lwn.net/Articles/467283/). The idea is simple: the backend passes the rxhash to the guest, and the guest tells the backend the hash-to-queue mapping when
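The mapping described here boils down to a table indexed by the receive hash. A minimal, self-contained sketch of the idea; the names and the table size are illustrative, not taken from the series:

#include <stdint.h>

/* Illustrative hash-to-queue table: the guest fills it and the backend
 * consults it to pick the rx queue for a flow with a given rxhash. */
#define STEERING_TABLE_SIZE 256

static uint16_t steering_table[STEERING_TABLE_SIZE];

static void steer_flow_to_queue(uint32_t rxhash, uint16_t queue)
{
        steering_table[rxhash % STEERING_TABLE_SIZE] = queue;
}

static uint16_t queue_for_rxhash(uint32_t rxhash)
{
        return steering_table[rxhash % STEERING_TABLE_SIZE];
}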
2017 Feb 08
0
FW: Question about /patch/9251925/
...en(tx_buffer, len), DMA_TO_DEVICE); } ...... } *RX:* i40e_vsi_configure() ->i40e_vsi_configure_rx() { ...... /* set up individual rings */ for (i = 0; i < vsi->num_queue_pairs && !err; i++) err = i40e_configure_rx_ring(vsi->rx_rings[i]); ...... } ->i40e_configure_rx_ring() ->i40e_alloc_rx_buffers() { ...... do { if (!i40e_alloc_mapped_page...
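The excerpt's loop is the core of how i40e uses num_queue_pairs: each VSI owns that many rings, and they are configured one by one until the first error. A reconstruction of that pattern with the driver internals reduced to hypothetical stand-ins:

struct ring;                                    /* stand-in for i40e's ring type */

struct vsi {                                    /* stand-in for struct i40e_vsi */
        unsigned int num_queue_pairs;           /* rx/tx ring pairs on this VSI */
        struct ring **rx_rings;
};

int configure_rx_ring(struct ring *ring);       /* stand-in for i40e_configure_rx_ring() */

static int vsi_configure_rx(struct vsi *vsi)
{
        unsigned int i;
        int err = 0;

        /* set up individual rings, stopping at the first failure */
        for (i = 0; i < vsi->num_queue_pairs && !err; i++)
                err = configure_rx_ring(vsi->rx_rings[i]);

        return err;
}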
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the current single-queue tap cannot satisfy the requirement of scaling guest network performance as the number of vcpus increases. So the following series implements multiple queue support in tun/tap. In order to take advantage of this, a multi-queue capable driver and qemu are also needed. I just rebased the latest version of
2012 Jul 06
5
[RFC V3 0/5] Multiqueue support for tap and virtio-net/vhost
Hello all: This series is an update of the last version of multiqueue support, adding multiqueue capability to both tap and virtio-net. Some kinds of tap backends already have (macvtap in Linux) or would (tap) support multiqueue. In such a tap backend, each file descriptor of a tap is a queue, and ioctls are provided to attach an existing tap file descriptor to the tun/tap device. So the patch lets qemu
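For context, the per-queue file-descriptor model described here is roughly what later landed in mainline as IFF_MULTI_QUEUE. A sketch of opening one queue of a multi-queue tap using the interface that eventually shipped; the RFC itself predates these exact flag names:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Open one queue (one file descriptor) of a multi-queue tap device. */
static int tap_open_queue(const char *name)
{
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);

        if (fd < 0)
                return -1;

        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
        strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
                close(fd);
                return -1;
        }

        return fd;      /* each fd opened this way is one queue of the device */
}

Calling tap_open_queue() several times with the same interface name yields several queues of one device, so qemu/vhost can service each descriptor from its own thread.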
2017 Jan 05
3
[PATCH net-next] net: make ndo_get_stats64 a void function
...g *tx_ring, *rx_ring; @@ -426,10 +422,10 @@ static struct rtnl_link_stats64 *i40e_get_netdev_stats_struct( int i; if (test_bit(__I40E_DOWN, &vsi->state)) - return stats; + return; if (!vsi->tx_rings) - return stats; + return; rcu_read_lock(); for (i = 0; i < vsi->num_queue_pairs; i++) { @@ -469,8 +465,6 @@ static struct rtnl_link_stats64 *i40e_get_netdev_stats_struct( stats->rx_dropped = vsi_stats->rx_dropped; stats->rx_crc_errors = vsi_stats->rx_crc_errors; stats->rx_length_errors = vsi_stats->rx_length_errors; - - return stats; } /** diff --gi...
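After this conversion the callback fills the caller-provided structure and returns nothing. A minimal sketch of the post-patch shape against the kernel API; the example_* names and the private-struct layout are hypothetical, and the i40e version in the diff instead walks vsi->num_queue_pairs rings under rcu_read_lock():

#include <linux/netdevice.h>

/* Hypothetical driver-private counters; real drivers typically keep these
 * per ring and sum them here. */
struct example_priv {
        u64 rx_packets;
        u64 tx_packets;
};

/* Post-patch signature: void return, the core hands in *stats to fill. */
static void example_get_stats64(struct net_device *netdev,
                                struct rtnl_link_stats64 *stats)
{
        struct example_priv *priv = netdev_priv(netdev);

        stats->rx_packets = priv->rx_packets;
        stats->tx_packets = priv->tx_packets;
}

static const struct net_device_ops example_netdev_ops = {
        .ndo_get_stats64 = example_get_stats64,
};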