Displaying 16 results from an estimated 16 matches for "tx_irq".
2013 Feb 15
1
[PATCH 7/8] netback: split event channels support
Netback and netfront only use one event channel to do tx / rx notification.
This may cause unnecessary wake-up of processing routines. This patch adds a new
feature called feature-split-event-channel to netback, enabling it to handle
Tx and Rx events separately.
Netback will use tx_irq to notify the guest of tx completion and rx_irq for rx
notification.
If the frontend doesn't support this feature, tx_irq = rx_irq.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
drivers/net/xen-netback/common.h | 10 +++--
drivers/net/xen-netback/interface.c | 78 +++++++++++++++++...
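For illustration, a minimal sketch (not the actual netback code) of the arrangement described above: separate handlers per direction when the split feature is negotiated, and a combined handler when the frontend lacks it, in which case tx_irq == rx_irq. Handler names are modeled on the patch; the bodies are placeholders for the real processing kicks.

#include <linux/interrupt.h>

static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
{
	/* wake only the TX completion path */
	return IRQ_HANDLED;
}

static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
{
	/* wake only the RX path */
	return IRQ_HANDLED;
}

static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
{
	/* single event channel fallback: one notification covers both
	 * directions, which is the unnecessary wake-up the patch avoids */
	xenvif_tx_interrupt(irq, dev_id);
	xenvif_rx_interrupt(irq, dev_id);
	return IRQ_HANDLED;
}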
2013 May 21
1
[PATCH net-next V2 2/2] xen-netfront: split event channels support for Xen frontend driver
...drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -85,7 +85,15 @@ struct netfront_info {
struct napi_struct napi;
- unsigned int evtchn;
+ /* Split event channels support, tx_* == rx_* when using
+ * single event channel.
+ */
+ unsigned int tx_evtchn, rx_evtchn;
+ unsigned int tx_irq, rx_irq;
+ /* Only used when split event channels support is enabled */
+ char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+ char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+
struct xenbus_device *xbdev;
spinlock_t tx_lock;
@@ -330,7 +338,7 @@ no_skb:
push:
RING_PUSH_REQUESTS_AND_CHECK_NOT...
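A sketch of how the frontend can bind the two channels under the DEVNAME-tx / DEVNAME-rx names added above. It assumes the event channels were already allocated through xenbus and that xennet_tx_interrupt / xennet_rx_interrupt are the driver's per-direction handlers; this is illustrative, not the literal xen-netfront code.

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <xen/events.h>

extern irqreturn_t xennet_tx_interrupt(int irq, void *dev_id);	/* assumed handlers */
extern irqreturn_t xennet_rx_interrupt(int irq, void *dev_id);

static int setup_split_irqs(struct netfront_info *info, struct net_device *dev)
{
	int err;

	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name), "%s-tx", dev->name);
	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name), "%s-rx", dev->name);

	err = bind_evtchn_to_irqhandler(info->tx_evtchn, xennet_tx_interrupt,
					0, info->tx_irq_name, info);
	if (err < 0)
		return err;
	info->tx_irq = err;

	err = bind_evtchn_to_irqhandler(info->rx_evtchn, xennet_rx_interrupt,
					0, info->rx_irq_name, info);
	if (err < 0) {
		unbind_from_irqhandler(info->tx_irq, info);
		return err;
	}
	info->rx_irq = err;

	return 0;
}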
2014 Dec 19
1
[PATCH RFC v4 net-next 1/5] virtio_net: enable tx interrupt
...uct netdev_queue *txq = netdev_get_tx_queue(dev, qnum);
> bool kick = !skb->xmit_more;
>
> - /* Free up any pending old buffers before queueing new ones. */
> - free_old_xmit_skbs(sq);
I think there is no need to remove free_old_xmit_skbs here.
You could also call free_old_xmit_skbs in tx_irq's NAPI function,
and keep the call in start_xmit as well if you handle the race carefully.
I have done the same thing in the ixgbe driver (freeing skbs in ndo_start_xmit
and in tx_irq's poll function), and it seems to work well :)
I think there would not be so many interrupts this way, and tx
interrupt coalesce...
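A rough sketch of the suggestion: reclaim completed buffers both on the transmit path and in the TX interrupt's NAPI poll, with the interrupt-side call taken under the same per-queue tx lock the core holds around ndo_start_xmit, so the two call sites do not race. free_old_xmit_skbs is the name used in the patch; the struct and poll wiring here are illustrative only.

#include <linux/netdevice.h>

struct sq_sketch {			/* stand-in for the driver's send_queue */
	struct napi_struct napi;
	struct net_device *dev;
	unsigned int index;
};

extern void free_old_xmit_skbs(struct sq_sketch *sq);	/* as named in the patch */

/* transmit path: the core takes the per-queue xmit lock around
 * ndo_start_xmit (for non-LLTX drivers), so reclaiming here is serialized */
static netdev_tx_t sketch_start_xmit(struct sk_buff *skb, struct sq_sketch *sq)
{
	free_old_xmit_skbs(sq);		/* opportunistic cleanup on the hot path */
	/* ... add skb to the virtqueue and kick the device ... */
	return NETDEV_TX_OK;
}

/* interrupt path: tx_irq schedules this NAPI poll, which takes the same
 * lock before reclaiming */
static int sketch_tx_poll(struct napi_struct *napi, int budget)
{
	struct sq_sketch *sq = container_of(napi, struct sq_sketch, napi);
	struct netdev_queue *txq = netdev_get_tx_queue(sq->dev, sq->index);

	__netif_tx_lock(txq, smp_processor_id());
	free_old_xmit_skbs(sq);
	__netif_tx_unlock(txq);

	napi_complete(napi);
	return 0;
}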
2017 Apr 21
3
[PATCH net-next v2 2/5] virtio-net: transmit napi
...y regardless of whether the
>> optimization is used.
>
>
> Yes, I noticed this in the past too.
>
>> Though this is not limited to napi-tx, it is more
>> pronounced in that mode than without napi.
>>
>> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
>>
>> upstream:
>>
>> 1,1,1: 28985 Mbps, 278 Gcyc
>> 1,0,2: 30067 Mbps, 402 Gcyc
>>
>> napi tx:
>>
>> 1,1,1: 34492 Mbps, 269 Gcyc
>> 1,0,2: 36527 Mbps, 537 Gcyc (!)
>> 1,0,1: 36269 Mbps, 394 Gcyc
>> 1,0,0: 34674 Mbps, 402 Gcy...
2017 Apr 20
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
...a win over keeping it off, even without irq
affinity.
The cycle cost is significant without affinity regardless of whether the
optimization is used. Though this is not limited to napi-tx, it is more
pronounced in that mode than without napi.
1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
upstream:
1,1,1: 28985 Mbps, 278 Gcyc
1,0,2: 30067 Mbps, 402 Gcyc
napi tx:
1,1,1: 34492 Mbps, 269 Gcyc
1,0,2: 36527 Mbps, 537 Gcyc (!)
1,0,1: 36269 Mbps, 394 Gcyc
1,0,0: 34674 Mbps, 402 Gcyc
This is a particularly strong example. It is also representative
of most RR tests. It is less pronoun...
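For readers unfamiliar with the notation: a triple such as 1,0,2 means the benchmark process is pinned to CPU 1, the rx interrupt to CPU 0 and the tx interrupt to CPU 2. A small userspace sketch of one way to apply such a configuration; the IRQ numbers below are made up, the real ones come from /proc/interrupts.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_irq(int irq, int cpu)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity_list", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", cpu);
	fclose(f);
	return 0;
}

static int pin_self(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
	/* configuration {process, rx_irq, tx_irq} = {1, 0, 2} */
	pin_self(1);
	pin_irq(41, 0);	/* hypothetical rx IRQ number */
	pin_irq(42, 2);	/* hypothetical tx IRQ number */
	return 0;
}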
2017 Apr 24
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
...>> >
>> > Yes, I noticed this in the past too.
>> >
>> >> Though this is not limited to napi-tx, it is more
>> >> pronounced in that mode than without napi.
>> >>
>> >> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
>> >>
>> >> upstream:
>> >>
>> >> 1,1,1: 28985 Mbps, 278 Gcyc
>> >> 1,0,2: 30067 Mbps, 402 Gcyc
>> >>
>> >> napi tx:
>> >>
>> >> 1,1,1: 34492 Mbps, 269 Gcyc
>> >> 1,0,2: 36527 M...
2017 Apr 21
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...le cost is significant without affinity regardless of whether the
> optimization is used.
Yes, I noticed this in the past too.
> Though this is not limited to napi-tx, it is more
> pronounced in that mode than without napi.
>
> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
>
> upstream:
>
> 1,1,1: 28985 Mbps, 278 Gcyc
> 1,0,2: 30067 Mbps, 402 Gcyc
>
> napi tx:
>
> 1,1,1: 34492 Mbps, 269 Gcyc
> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> 1,0,1: 36269 Mbps, 394 Gcyc
> 1,0,0: 34674 Mbps, 402 Gcyc
>
> This is a particularly strong exampl...
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...imization is used.
> >
> >
> > Yes, I noticed this in the past too.
> >
> >> Though this is not limited to napi-tx, it is more
> >> pronounced in that mode than without napi.
> >>
> >> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
> >>
> >> upstream:
> >>
> >> 1,1,1: 28985 Mbps, 278 Gcyc
> >> 1,0,2: 30067 Mbps, 402 Gcyc
> >>
> >> napi tx:
> >>
> >> 1,1,1: 34492 Mbps, 269 Gcyc
> >> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> >> 1,0,1: 3...
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...> Yes, I noticed this in the past too.
> >> >
> >> >> Though this is not limited to napi-tx, it is more
> >> >> pronounced in that mode than without napi.
> >> >>
> >> >> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
> >> >>
> >> >> upstream:
> >> >>
> >> >> 1,1,1: 28985 Mbps, 278 Gcyc
> >> >> 1,0,2: 30067 Mbps, 402 Gcyc
> >> >>
> >> >> napi tx:
> >> >>
> >> >> 1,1,1: 34492 Mb...
2017 Apr 18
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
From: Willem de Bruijn <willemb at google.com>
Convert virtio-net to a standard napi tx completion path. This enables
better TCP pacing using TCP small queues and increases single stream
throughput.
The virtio-net driver currently cleans tx descriptors on transmission
of new packets in ndo_start_xmit. Latency depends on new traffic, so it
is unbounded. To avoid deadlock when a socket reaches
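A condensed sketch of the "standard napi tx completion path" referred to above: the tx interrupt only schedules NAPI, and the poll callback reclaims descriptors and wakes the queue, so completion work is bounded per interrupt rather than waiting for the next ndo_start_xmit. The struct and helper names are illustrative, not the patch itself.

#include <linux/netdevice.h>

struct txq_sketch {			/* stand-in for the driver's per-queue state */
	struct napi_struct napi;
	struct net_device *dev;
	unsigned int index;
};

extern void clean_tx_descriptors(struct txq_sketch *tq);	/* hypothetical reclaim helper */

/* called from the device's tx interrupt: defer all work to NAPI */
static void sketch_tx_intr(struct txq_sketch *tq)
{
	napi_schedule(&tq->napi);
}

static int sketch_napi_tx_poll(struct napi_struct *napi, int budget)
{
	struct txq_sketch *tq = container_of(napi, struct txq_sketch, napi);
	struct netdev_queue *txq = netdev_get_tx_queue(tq->dev, tq->index);

	clean_tx_descriptors(tq);
	napi_complete(napi);

	/* waking here lets TSQ and socket accounting see completions
	 * promptly instead of waiting for new traffic */
	if (netif_tx_queue_stopped(txq))
		netif_tx_wake_queue(txq);
	return 0;
}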
2014 Dec 01
9
[PATCH RFC v4 net-next 0/5] virtio_net: enabling tx interrupts
Hello:
We used to orphan packets before transmission for virtio-net. This breaks
socket accounting and can lead to several functions not working, e.g.:
- Byte Queue Limit depends on tx completion notification to work.
- Packet Generator depends on tx completion notification for the last
transmitted packet to complete.
- TCP Small Queue depends on proper accounting of sk_wmem_alloc to work.
This
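As a concrete illustration of the first bullet: BQL pairs a "sent" call on the transmit side with a "completed" call on the completion side, so without timely tx completion notification the second half cannot run when it should. A minimal sketch using the standard netdevice helpers; the wrapper functions are hypothetical.

#include <linux/netdevice.h>

/* transmit side: account bytes handed to the device */
static void sketch_xmit_accounting(struct netdev_queue *txq, struct sk_buff *skb)
{
	netdev_tx_sent_queue(txq, skb->len);
}

/* completion side: runs from the tx interrupt / NAPI poll; this is the
 * notification the cover letter says BQL depends on */
static void sketch_completion_accounting(struct netdev_queue *txq,
					 unsigned int pkts, unsigned int bytes)
{
	netdev_tx_completed_queue(txq, pkts, bytes);
}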