similar to: RFT: virtio_net: limit xmit polling

Displaying 12 results from an estimated 10000 matches similar to: "RFT: virtio_net: limit xmit polling"

2015 Oct 22
1
[PATCH net-next RFC 2/2] vhost_net: basic polling support
On 10/22/2015 02:33 AM, Michael S. Tsirkin wrote: > On Thu, Oct 22, 2015 at 01:27:29AM -0400, Jason Wang wrote: >> This patch tries to poll for newly added tx buffers for a while at the >> end of tx processing. The maximum time spent on polling is limited >> through a module parameter. To avoid blocking rx, the loop will end if >> there's other work queued on vhost
2015 Oct 22
4
[PATCH net-next RFC 2/2] vhost_net: basic polling support
On Thu, Oct 22, 2015 at 01:27:29AM -0400, Jason Wang wrote: > This patch tries to poll for newly added tx buffers for a while at the > end of tx processing. The maximum time spent on polling is limited > through a module parameter. To avoid blocking rx, the loop will end if > there's other work queued on vhost, so in fact the socket receive > queue is also polled. > >
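
To make the bounded-polling idea above concrete, here is a minimal userspace C sketch of the loop the patch describes: busy-poll the tx queue for a limited time and bail out early if other vhost work is queued. All names (tx_queue_pop, other_work_pending, the budget value) are hypothetical stand-ins, not the patch's actual helpers.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Hypothetical stand-ins; real code would read the vring / work list. */
static int pending_bufs = 3;
static bool tx_queue_pop(int *buf)
{
    if (pending_bufs <= 0)
        return false;
    *buf = pending_bufs--;
    return true;
}
static bool other_work_pending(void) { return false; }

/* poll_budget_ns plays the role of the module parameter bounding the
 * busy-poll, so rx and other vhost work are not starved. */
static void tx_poll(long long poll_budget_ns)
{
    long long deadline = now_ns() + poll_budget_ns;
    int buf;

    while (now_ns() < deadline) {
        if (other_work_pending())
            break;                     /* yield to queued vhost work */
        if (tx_queue_pop(&buf))
            printf("handled tx buf %d\n", buf);
    }
}

int main(void)
{
    tx_poll(50 * 1000);  /* 50us budget, an arbitrary example value */
    return 0;
}
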
2011 Jun 09
0
No subject
divide the throughput by the host CPU utilization (measured by something like mpstat). Sometimes throughput doesn't increase (e.g. guest-host) but CPU utilization does decrease, so it's interesting. Another issue is that we are trying to improve the latency of a busy queue here. However, STREAM/MAERTS tests ignore latency (more or less), while TCP_RR by default runs a single packet per
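
A tiny sketch of the metric suggested above: throughput divided by host CPU utilization. The numbers are made-up example inputs, not measurements.

#include <stdio.h>

int main(void)
{
    double gbit_per_s = 9.4;   /* measured throughput (example value) */
    double cpu_util = 0.62;    /* host CPU utilization, 0..1 (example) */

    /* Even if throughput is flat, a drop in cpu_util raises this ratio,
     * which is why the normalized metric is worth reporting. */
    printf("efficiency: %.2f Gbit/s per busy CPU\n", gbit_per_s / cpu_util);
    return 0;
}
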
2017 Apr 24
8
[PATCH net-next v3 0/5] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Changes: v2 -> v3: - convert __netif_tx_trylock to __netif_tx_lock on tx napi poll to ensure that the handler always cleans, avoiding deadlock - unconditionally clean in start_xmit to avoid adding an unnecessary "if (use_napi)" branch - remove
2011 May 18
1
[PATCH RFC] virtio_net: fix patch: virtio_net: limit xmit polling
The patch virtio_net: limit xmit polling got the logic reversed: it polled while we had capacity, not while the ring was empty. Fix it up and clean up a bit by using a for loop. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- OK, turns out that patch was broken. Here's a fix that survived a stress test on my box. Pushed on my branch, I'll send a rebased series with Rusty's
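
A minimal C model of the polarity bug being fixed: the reclaim loop should run while the ring still lacks capacity for the next packet; the broken version looped while capacity existed. ring_free_slots() and free_old_xmit() are hypothetical stand-ins for the driver's helpers, not the patch's code.

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 8
static int in_flight = RING_SIZE;   /* example: ring starts full */

static int ring_free_slots(void) { return RING_SIZE - in_flight; }
static bool free_old_xmit(void)    /* reclaim one completed buffer */
{
    if (in_flight == 0)
        return false;
    in_flight--;
    return true;
}

int main(void)
{
    int needed = 2;   /* descriptors required by the next packet */

    /* Fixed logic, as a for loop: keep reclaiming only while there is
     * NOT yet enough capacity. The buggy version inverted this test. */
    for (; ring_free_slots() < needed; ) {
        if (!free_old_xmit())
            break;    /* nothing completed yet; would stop the queue */
    }
    printf("free slots: %d\n", ring_free_slots());
    return 0;
}
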
2017 Apr 18
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
From: Willem de Bruijn <willemb at google.com> Convert virtio-net to a standard napi tx completion path. This enables better TCP pacing using TCP small queues and increases single-stream throughput. The virtio-net driver currently cleans tx descriptors on transmission of new packets in ndo_start_xmit. Latency depends on new traffic, so it is unbounded. To avoid deadlock when a socket reaches
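
A userspace sketch of the napi-style tx completion path described above: completed buffers are reclaimed in a budgeted poll handler instead of in ndo_start_xmit, so cleaning no longer waits for new traffic. Names here are illustrative; the real driver works through virtqueue_get_buf() and napi_complete_done().

#include <stdbool.h>
#include <stdio.h>

static int completed = 5;   /* example: buffers the device has finished */

static bool get_completed_buf(void)
{
    if (completed == 0)
        return false;
    completed--;
    return true;
}
static void enable_tx_interrupt(void) { puts("irq re-enabled"); }

/* Returns work done; a real napi handler re-arms the interrupt only
 * when it cleaned less than its budget. */
static int tx_poll(int budget)
{
    int work = 0;

    while (work < budget && get_completed_buf())
        work++;                 /* free the skb, update byte counters */
    if (work < budget)
        enable_tx_interrupt();  /* quiet: back to interrupt-driven mode */
    return work;
}

int main(void)
{
    printf("cleaned %d\n", tx_poll(64));
    return 0;
}
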
2023 Mar 07
2
[PATCH net 0/2] add checking sq is full inside xdp xmit
Hi, On Tue, 2023-03-07 at 09:49 +0800, Xuan Zhuo wrote: > On Mon, 6 Mar 2023 12:58:22 -0500, "Michael S. Tsirkin" <mst at redhat.com> wrote: > > On Mon, Mar 06, 2023 at 12:15:33PM +0800, Xuan Zhuo wrote: > > > If the xdp xmit queue is not an independent queue, then when xdp > > > xmit has used all the descriptors, the xmit from __dev_queue_xmit() may
2011 Nov 29
4
[RFC] virtio: use mandatory barriers for remote processor vdevs
Virtio uses memory barriers to control the ordering of references to the vrings on SMP systems. When the guest is compiled with SMP support, virtio uses only SMP barriers in order to avoid incurring the overhead involved with mandatory barriers. Lately, though, virtio is increasingly being used in inter-processor communication scenarios too, which involve running two (separate)
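
A rough C11 model of the barrier choice under discussion: SMP barriers suffice between cache-coherent CPUs, while talking to a remote processor needs a mandatory barrier. The fences below only approximate smp_wmb()/wmb(); this is a sketch of the selection logic, not kernel code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t avail_idx;   /* index published to the "device" side */

/* weak_barriers mirrors the distinction above: true means the peer is
 * another CPU in the same coherence domain, false a remote processor. */
static void publish_wmb(bool weak_barriers)
{
    if (weak_barriers)
        atomic_thread_fence(memory_order_release);  /* SMP-style */
    else
        atomic_thread_fence(memory_order_seq_cst);  /* stand-in for mandatory */
}

static void vring_publish(bool weak_barriers, uint16_t new_idx)
{
    /* Descriptor contents must be visible before the index update. */
    publish_wmb(weak_barriers);
    avail_idx = new_idx;
}

int main(void)
{
    vring_publish(false, 1);   /* remote-processor case: mandatory */
    printf("avail_idx=%u\n", avail_idx);
    return 0;
}
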
2023 Mar 06
4
[PATCH net 0/2] add checking sq is full inside xdp xmit
If the xdp xmit queue is not an independent queue, then when xdp xmit has used all the descriptors, the xmit from __dev_queue_xmit() may encounter the following error. net ens4: Unexpected TXQ (0) queue failure: -28 This patch set adds a check of whether the sq is full in xdp xmit. Thanks. Xuan Zhuo (2): virtio_net: separate the logic of checking whether sq is full virtio_net: add checking sq is full
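
A small sketch of the guard being added: check for enough free send-queue descriptors before an xdp xmit and fail cleanly, instead of surfacing the -28 (ENOSPC) queue failure from deep in the ring code. sq_is_full() and the sizes are hypothetical, not the patch's helpers.

#include <stdbool.h>
#include <stdio.h>

#define SQ_SIZE 256
static int sq_used = 255;   /* example: queue nearly exhausted */

static bool sq_is_full(int needed)
{
    return SQ_SIZE - sq_used < needed;
}

static int xdp_xmit(int frames, int descs_per_frame)
{
    if (sq_is_full(frames * descs_per_frame))
        return -1;   /* caller drops or retries; no ENOSPC surprise */
    sq_used += frames * descs_per_frame;
    return frames;
}

int main(void)
{
    printf("sent: %d\n", xdp_xmit(4, 1));   /* refused: one slot free */
    return 0;
}
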
2017 Apr 21
3
[PATCH net-next v2 2/5] virtio-net: transmit napi
>>> Maybe I was wrong, but according to Michael's comment it looks like he >>> wants to >>> check affinity_hint_set just for speculative tx polling on rx napi >>> rather >>> than disabling it entirely. >>> >>> And I'm not convinced this is really needed; the driver only provides an affinity >>> hint instead of affinity, so it's
2009 May 29
1
[PATCH 3/4] virtio_net: don't free buffers in xmit ring
The virtio_net driver is complicated by the two methods of freeing old xmit buffers (in addition to freeing old ones at the start of the xmit path). The original code used a 1/10 second timer attached to xmit_free(), reset on every xmit. Before we orphaned skbs on xmit, the transmitting userspace could block with a full socket until the timer fired, the skb destructor was called, and they were
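
A minimal model of the simpler of the two reclaim styles mentioned above: free already-completed buffers inline at the top of the xmit path, which removes the need for the 1/10 second timer. reclaim_one() is a hypothetical stand-in for the driver's free routine.

#include <stdbool.h>
#include <stdio.h>

static int done_bufs = 3;   /* example: completions waiting to be freed */

static bool reclaim_one(void)
{
    if (done_bufs == 0)
        return false;
    done_bufs--;
    return true;
}

static void start_xmit(void)
{
    /* Inline reclaim: every transmission first frees what the device
     * already completed, so no timer is needed to bound the delay. */
    while (reclaim_one())
        ;
    puts("enqueue new skb");
}

int main(void)
{
    start_xmit();
    return 0;
}
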
2017 Apr 20
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
On Thu, Apr 20, 2017 at 2:27 AM, Jason Wang <jasowang at redhat.com> wrote: > > > On 2017年04月19日 04:21, Willem de Bruijn wrote: >> >> +static void virtnet_napi_tx_enable(struct virtnet_info *vi, >> + struct virtqueue *vq, >> + struct napi_struct *napi) >> +{ >> + if
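
The quoted helper is cut off at the "if"; as an assumption (not the quoted patch), a helper like this plausibly gates on tx napi being configured, e.g. a zero napi weight meaning "stay on the legacy cleaning path". A userspace model of that gate:

#include <stdbool.h>
#include <stdio.h>

struct napi_model { int weight; bool enabled; };

/* Hypothetical reconstruction: skip enabling tx napi when its weight is
 * zero, i.e. tx napi is configured off. */
static void napi_tx_enable_model(struct napi_model *napi)
{
    if (!napi->weight)
        return;
    napi->enabled = true;
}

int main(void)
{
    struct napi_model n = { .weight = 64, .enabled = false };
    napi_tx_enable_model(&n);
    printf("enabled=%d\n", n.enabled);
    return 0;
}
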