Displaying 20 results from an estimated 24 matches for "rx_max_pending".
2012 Nov 27
4
[net-next rfc v7 0/3] Multiqueue virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found at
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
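As a rough illustration of the kind of invocation the cover letter is describing, a two-queue guest could be started along the following lines. This sketch uses the tap/virtio-net multiqueue options that were later merged into upstream QEMU; the option names in the prototype branch above may differ, and the "..." stands for the rest of an ordinary guest command line.

  qemu-system-x86_64 ... \
      -netdev tap,id=hn0,vhost=on,queues=2 \
      -device virtio-net-pci,netdev=hn0,mq=on,vectors=6

The vectors count follows the usual 2*queues+2 rule (one TX and one RX vector per queue pair, plus config and control). Inside the guest, the extra queue pairs are then typically enabled with something like ethtool -L eth0 combined 2.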
2012 Dec 04
3
[PATCH net-next 0/3] Multiqueue support for virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found at
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
2012 Dec 05
3
[PATCH net-next v2 0/3] Multiqueue support in virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found at
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
2012 Dec 07
6
[PATCH net-next v3 0/3] Multiqueue support in virtio-net
Hi all:
This series is an updated version (hopefully the final version) of multiqueue
(VIRTIO_NET_F_MQ) support in the virtio-net driver. All previous comments were
addressed; the work is based on Krishna Kumar's work to let virtio-net use
multiple rx/tx queues for packet reception and transmission. Performance
tests show that aggregate latency improved greatly, but there may be some
regression
2012 Oct 30
6
[rfc net-next v6 0/3] Multiqueue virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Changes from v5:
- Align the implementation with the RFC spec update v4
- Switch between single-queue and multiqueue mode without a reset
- Remove the 256 limitation
2012 Jun 25
8
[net-next RFC V4 PATCH 0/4] Multiqueue virtio-net
Hello All:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Test Environment:
- Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes
- Two directly connected 82599 NICs
Test Summary:
- Highlights: huge improvements on TCP_RR
2012 Jul 05
14
[net-next RFC V5 0/5] Multiqueue virtio-net
Hello All:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet
reception and transmission. Please review and comment.
Test Environment:
- Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes
- Two directly connected 82599 NICs
Test Summary:
- Highlights: huge improvements on TCP_RR
2011 Nov 11
10
[RFC] [ver3 PATCH 0/6] Implement multiqueue virtio-net
This patch series resurrects the earlier multiple TX/RX queues
functionality for virtio_net, and addresses the issues pointed
out. It also includes an API to share IRQs, e.g. amongst the
TX vqs.
I plan to run TCP/UDP STREAM and RR tests for local->host and
local->remote, and send the results in the next couple of days.
patch #1: Introduce VIRTIO_NET_F_MULTIQUEUE
patch #2: Move
2009 Oct 06
1
[PATCH 2.6.32-rc3] net: VMware virtual Ethernet NIC driver: vmxnet3
...ed;
+		ecmd->duplex = DUPLEX_FULL;
+	} else {
+		ecmd->speed = -1;
+		ecmd->duplex = -1;
+	}
+	return 0;
+}
+
+
+static void
+vmxnet3_get_ringparam(struct net_device *netdev,
+		      struct ethtool_ringparam *param)
+{
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+
+	param->rx_max_pending = VMXNET3_RX_RING_MAX_SIZE;
+	param->tx_max_pending = VMXNET3_TX_RING_MAX_SIZE;
+	param->rx_mini_max_pending = 0;
+	param->rx_jumbo_max_pending = 0;
+
+	param->rx_pending = adapter->rx_queue.rx_ring[0].size;
+	param->tx_pending = adapter->tx_queue.tx_ring.size;
+	param->rx_m...
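The get_ringparam hook excerpted above only fills in the struct ethtool_ringparam it is handed; for context, below is a minimal sketch of how such a handler is typically registered with the 2.6.32-era ethtool API. The ops-table contents and the helper name vmxnet3_set_ethtool_ops here are illustrative assumptions, not necessarily what the rest of this patch uses. Userspace then reads these values with ethtool -g ethX, where rx_max_pending/tx_max_pending appear as the pre-set maximums and rx_pending/tx_pending as the current ring sizes.

	#include <linux/ethtool.h>
	#include <linux/netdevice.h>

	/* Illustrative ops table: only the ringparam handler is shown here;
	 * a real driver also wires up get_drvinfo, get_settings, etc. */
	static const struct ethtool_ops vmxnet3_ethtool_ops = {
		.get_ringparam = vmxnet3_get_ringparam,
	};

	/* Assumed helper, called once per net_device during probe,
	 * before register_netdev(). */
	void vmxnet3_set_ethtool_ops(struct net_device *netdev)
	{
		SET_ETHTOOL_OPS(netdev, &vmxnet3_ethtool_ops);
	}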
2009 Oct 12
1
[PATCH 2.6.32-rc4] net: VMware virtual Ethernet NIC driver: vmxnet3
...ed;
+		ecmd->duplex = DUPLEX_FULL;
+	} else {
+		ecmd->speed = -1;
+		ecmd->duplex = -1;
+	}
+	return 0;
+}
+
+
+static void
+vmxnet3_get_ringparam(struct net_device *netdev,
+		      struct ethtool_ringparam *param)
+{
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+
+	param->rx_max_pending = VMXNET3_RX_RING_MAX_SIZE;
+	param->tx_max_pending = VMXNET3_TX_RING_MAX_SIZE;
+	param->rx_mini_max_pending = 0;
+	param->rx_jumbo_max_pending = 0;
+
+	param->rx_pending = adapter->rx_queue.rx_ring[0].size;
+	param->tx_pending = adapter->tx_queue.tx_ring.size;
+	param->rx_m...