similar to: [PATCH 7/8] netback: split event channels support

Displaying 16 results from an estimated 200 matches similar to: "[PATCH 7/8] netback: split event channels support"

2013 May 21
1
[PATCH net-next V2 2/2] xen-netfront: split event channels support for Xen frontend driver
This patch adds a new feature called feature-split-event-channels for netfront, enabling it to handle TX and RX events separately. If netback does not support this feature, it falls back to using a single event channel. Signed-off-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com> --- drivers/net/xen-netfront.c | 173
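The negotiation described here is simple to picture. A minimal sketch, assuming hypothetical setup_split_event_channels()/setup_single_event_channel() helpers; only the xenstore key name comes from the patch itself:

#include <xen/xenbus.h>

struct netfront_queue;                                      /* illustrative only */
int setup_split_event_channels(struct netfront_queue *q);   /* hypothetical */
int setup_single_event_channel(struct netfront_queue *q);   /* hypothetical */

static int setup_netfront_events(struct xenbus_device *dev,
                                 struct netfront_queue *queue)
{
	unsigned int split = 0;

	/* "feature-split-event-channels" is the key named in the patch;
	 * treating a read failure as "unsupported" gives the fallback. */
	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "feature-split-event-channels", "%u", &split) < 0)
		split = 0;

	return split ? setup_split_event_channels(queue)
		     : setup_single_event_channel(queue);
}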
2017 Apr 21
3
[PATCH net-next v2 2/5] virtio-net: transmit napi
>>> Maybe I was wrong, but according to Michael's comment it looks like he wants to check affinity_hint_set just for speculative tx polling on rx napi instead of disabling it entirely. >>> And I'm not convinced this is really needed; the driver only provides an affinity hint instead of affinity, so it's
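The technique under discussion, speculative tx polling from the rx napi handler, might look roughly like the sketch below; struct my_queue and the two helpers are illustrative stand-ins, not the driver's real symbols:

#include <linux/netdevice.h>

struct my_queue {                     /* illustrative stand-in */
	struct napi_struct napi;
	struct netdev_queue *txq;
};
void free_old_tx_skbs(struct my_queue *q);           /* hypothetical */
int receive_packets(struct my_queue *q, int budget); /* hypothetical */

static int rx_poll(struct napi_struct *napi, int budget)
{
	struct my_queue *q = container_of(napi, struct my_queue, napi);
	int received;

	/* Opportunistic tx reclaim: only if the tx lock is free, so the
	 * rx path never blocks on the transmit side. */
	if (__netif_tx_trylock(q->txq)) {
		free_old_tx_skbs(q);
		__netif_tx_unlock(q->txq);
	}

	received = receive_packets(q, budget);
	if (received < budget)
		napi_complete_done(napi, received);
	return received;
}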
2017 Apr 20
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
On Thu, Apr 20, 2017 at 2:27 AM, Jason Wang <jasowang at redhat.com> wrote: > On 2017年04月19日 04:21, Willem de Bruijn wrote: >> +static void virtnet_napi_tx_enable(struct virtnet_info *vi, struct virtqueue *vq, struct napi_struct *napi) >> +{ >> + if
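The quoted hunk is cut off at the condition. Based on the affinity_hint_set discussion elsewhere in the thread, a plausible reconstruction of the whole helper, offered as a sketch rather than the committed code:

static void virtnet_napi_tx_enable(struct virtnet_info *vi,
				   struct virtqueue *vq,
				   struct napi_struct *napi)
{
	if (!napi->weight)
		return;

	/* tx napi touches the same cachelines as the transmit path;
	 * without an affinity hint that pairing is not guaranteed, so
	 * fall back to interrupt-free tx (weight 0). */
	if (!vi->affinity_hint_set) {
		napi->weight = 0;
		return;
	}

	virtnet_napi_enable(vq, napi);
}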
2017 Apr 24
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
On Mon, Apr 24, 2017 at 12:40 PM, Michael S. Tsirkin <mst at redhat.com> wrote: > On Fri, Apr 21, 2017 at 10:50:12AM -0400, Willem de Bruijn wrote: >>> Maybe I was wrong, but according to Michael's comment it looks like he wants to check affinity_hint_set just for speculative tx polling on rx napi instead
2017 Apr 21
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
On 2017年04月20日 21:58, Willem de Bruijn wrote: > On Thu, Apr 20, 2017 at 2:27 AM, Jason Wang <jasowang at redhat.com> wrote: >> On 2017年04月19日 04:21, Willem de Bruijn wrote: >>> +static void virtnet_napi_tx_enable(struct virtnet_info *vi, struct virtqueue *vq, struct
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
On Fri, Apr 21, 2017 at 10:50:12AM -0400, Willem de Bruijn wrote: > >>> Maybe I was wrong, but according to Michael's comment it looks like he wants to check affinity_hint_set just for speculative tx polling on rx napi instead of disabling it entirely. > >>> And I'm not convinced
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
On Mon, Apr 24, 2017 at 01:05:45PM -0400, Willem de Bruijn wrote: > On Mon, Apr 24, 2017 at 12:40 PM, Michael S. Tsirkin <mst at redhat.com> wrote: >> On Fri, Apr 21, 2017 at 10:50:12AM -0400, Willem de Bruijn wrote: >>> Maybe I was wrong, but according to Michael's comment it looks like he wants to check
2017 Apr 18
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
From: Willem de Bruijn <willemb at google.com> Convert virtio-net to a standard napi tx completion path. This enables better TCP pacing using TCP small queues and increases single-stream throughput. The virtio-net driver currently cleans tx descriptors on transmission of new packets in ndo_start_xmit. Latency depends on new traffic, so it is unbounded. To avoid deadlock when a socket reaches
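A skeleton of the tx completion path the cover text describes, with reclaim moved out of ndo_start_xmit and into a napi poll handler; free_old_tx_skbs() and struct my_queue are assumed names, not the driver's:

#include <linux/netdevice.h>

struct my_queue {
	struct napi_struct napi;
	struct netdev_queue *txq;
};
void free_old_tx_skbs(struct my_queue *q);   /* hypothetical reclaim helper */

static int tx_poll(struct napi_struct *napi, int budget)
{
	struct my_queue *q = container_of(napi, struct my_queue, napi);

	__netif_tx_lock(q->txq, smp_processor_id());
	free_old_tx_skbs(q);           /* reclaim completed descriptors */
	__netif_tx_unlock(q->txq);

	/* completion work is cheap; finish in one pass and re-enable
	 * interrupts rather than staying scheduled */
	napi_complete(napi);
	return 0;
}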
2013 Jul 02
3
[PATCH RFC] xen-netback: remove guest RX path dependence on MAX_SKB_FRAGS
This dependence is undesirable and logically incorrect. It's undesirable because the Xen network protocol should not depend on an OS-specific constant. It's incorrect because the number of ring slots required doesn't correspond to the number of frags an SKB has (consider compound page frags). This patch removes this dependence by correctly counting the ring slots required.
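Counting slots per frag rather than counting frags makes the point concrete. A sketch using today's skb_frag_off()/skb_frag_size() accessor names (the 2013 code would have read frag->page_offset directly):

#include <linux/skbuff.h>
#include <linux/mm.h>

static unsigned int count_ring_slots(struct sk_buff *skb)
{
	unsigned int i, slots;

	/* linear header area: count from its in-page offset, not just len */
	slots = DIV_ROUND_UP(offset_in_page(skb->data) + skb_headlen(skb),
			     PAGE_SIZE);

	/* a compound-page frag can cover several PAGE_SIZE slots, so the
	 * frag count alone says nothing about the slot count */
	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		slots += DIV_ROUND_UP(skb_frag_off(frag) + skb_frag_size(frag),
				      PAGE_SIZE);
	}
	return slots;
}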
2013 Feb 01
45
netback Oops then xenwatch stuck in D state
We've been hitting the following issue on a variety of hosts and recent Xen/dom0 version combinations. Here's an excerpt from our latest: Xen: 4.1.4 (xenbits @ 23432) Dom0: 3.7.1-x86_64 BUG: unable to handle kernel NULL pointer dereference at 000000000000001c IP: [<ffffffff8141a301>] evtchn_from_irq+0x11/0x40 PGD 0 Oops: 0000 [#1] SMP Modules linked in: ebt_comment
2014 Dec 19
1
[PATCH RFC v4 net-next 1/5] virtio_net: enable tx interrupt
On 2014/12/1 18:17, Jason Wang wrote: > On newer hosts that support delayed tx interrupts, > we probably don't have much to gain from orphaning > packets early. > > Note: this might degrade performance for > hosts without event idx support. > Should be addressed by the next patch. > > Cc: Rusty Russell <rusty at rustcorp.com.au> > Cc: Michael S. Tsirkin
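The delayed-interrupt behaviour being discussed typically pairs reclaim with virtqueue_enable_cb_delayed(), which returns false when more used buffers raced in; the wrapper below is illustrative, only the virtqueue API call is real:

#include <linux/virtio.h>

struct my_queue { struct virtqueue *tx_vq; };   /* illustrative */
void free_old_tx_skbs(struct my_queue *q);      /* hypothetical */

static void reclaim_then_arm(struct my_queue *q)
{
	/* virtqueue_enable_cb_delayed() asks the host to interrupt only
	 * after a batch of buffers is used (this needs event-idx support
	 * to help, hence the note about older hosts); it returns false
	 * if completions are already pending, so reclaim and retry. */
	do {
		free_old_tx_skbs(q);
	} while (!virtqueue_enable_cb_delayed(q->tx_vq));
}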
2013 Feb 06
0
[PATCH 1/4] xen/netback: shutdown the ring if it contains garbage.
A buggy or malicious frontend should not be able to confuse netback. If we spot anything which is not as it should be, then shut down the device and don't try to continue with the ring in a potentially hostile state. Well-behaved and non-hostile frontends will not be penalised. As well as making the existing checks for such errors fatal, also add a new check that ensures that there
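The fail-hard policy might reduce to something like the following sketch; the struct and field names are assumptions, not the patch's code:

#include <linux/netdevice.h>

struct xenvif_sketch {            /* illustrative, not the real struct */
	struct net_device *dev;
	bool disabled;
};

static void fatal_ring_error(struct xenvif_sketch *vif, const char *why)
{
	/* Log once, mark the interface dead, and stop all ring
	 * processing; never try to interpret a ring that has already
	 * proven inconsistent. */
	netdev_err(vif->dev, "fatal ring error: %s; disabling interface\n",
		   why);
	vif->disabled = true;
	/* callers must check vif->disabled before touching the ring */
}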
2013 Oct 10
3
[PATCH net-next v3 5/5] xen-netback: enable IPv6 TCP GSO to the guest
This patch adds code to handle SKB_GSO_TCPV6 skbs and construct appropriate extra or prefix segments to pass the large packet to the frontend. New xenstore flags, feature-gso-tcpv6 and feature-gso-tcpv6-prefix, are sampled to determine if the frontend is capable of handling such packets. Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Cc: David
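Sampling such xenstore flags usually reduces to a couple of xenbus_scanf() calls; the struct below is an assumed stand-in, only the two feature key names come from the patch:

#include <xen/xenbus.h>

struct vif_features {             /* illustrative stand-in */
	bool gso_tcpv6;
	bool gso_tcpv6_prefix;
};

static void read_gso_features(struct xenbus_device *dev,
			      struct vif_features *f)
{
	unsigned int val;

	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "feature-gso-tcpv6", "%u", &val) < 0)
		val = 0;                 /* absent flag == unsupported */
	f->gso_tcpv6 = !!val;

	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "feature-gso-tcpv6-prefix", "%u", &val) < 0)
		val = 0;
	f->gso_tcpv6_prefix = !!val;
}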
2013 Jun 24
3
[PATCH v2] xen-netback: add a pseudo pps rate limit
VM traffic is already limited by a throughput limit, but there is no control over the maximum packets per second (PPS). In a DDoS attack the major issue is PPS rather than throughput. With providers offering more bandwidth to VMs, it becomes easy to coordinate a massive attack using VMs. Example: 100Mbit/s ~ 200kpps using 64B packets. This patch provides a new option to limit a VM's maximum packets per
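A PPS cap of this kind is essentially a token bucket refilled once per interval. A minimal sketch, assuming the names below (none are from the patch):

#include <linux/jiffies.h>
#include <linux/types.h>

struct pps_limit {
	unsigned long credit;   /* packets still allowed this interval */
	unsigned long rate;     /* packets replenished per interval    */
	unsigned long last;     /* jiffies at last replenish           */
};

static bool pps_allow(struct pps_limit *l)
{
	unsigned long now = jiffies;

	if (time_after_eq(now, l->last + HZ)) {
		l->credit = l->rate;   /* refill once per second */
		l->last = now;
	}
	if (!l->credit)
		return false;          /* over the cap: drop or defer */
	l->credit--;
	return true;
}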
2013 Jul 09
20
[PATCH 1/1] xen/netback: correctly calculate required slots of skb.
When counting the slots required for an skb, netback directly uses DIV_ROUND_UP to get the slots required by the header data. This is wrong when the offset of the header data within its page is not zero, and is also inconsistent with the following calculation of required slots in netbk_gop_skb. In netbk_gop_skb, required slots are calculated based on the offset and len within the page of the header data. It is possible that the required slots
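The undercount is easy to reproduce with plain arithmetic. A small standalone program, with PAGE_SIZE and DIV_ROUND_UP redefined locally so it runs in userspace:

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long offset = 4000, len = 200;

	/* wrong: ignores the in-page offset -> 1 slot */
	printf("naive:   %lu\n", DIV_ROUND_UP(len, PAGE_SIZE));
	/* right: data starting near the end of a page spills into the
	 * next one -> 2 slots */
	printf("correct: %lu\n", DIV_ROUND_UP(offset + len, PAGE_SIZE));
	return 0;
}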
2013 Oct 28
3
[PATCH net V2] xen-netback: use jiffies_64 value to calculate credit timeout
time_after_eq() only works if the delta is < MAX_ULONG/2. For a 32-bit Dom0, if netfront sends packets at a very low rate, the time between subsequent calls to tx_credit_exceeded() may exceed MAX_ULONG/2 and the test for time_after_eq() will be incorrect. Credit will not be replenished and the guest may become unable to send packets (e.g., if prior to the long gap, all credit was exhausted).
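The failure mode can be demonstrated in userspace with a 32-bit analogue of the macro; switching the credit bookkeeping to jiffies_64, as the patch does, makes a gap above half the counter range unreachable in practice:

#include <stdio.h>

/* 32-bit analogue of the kernel's time_after_eq() */
#define time_after_eq32(a, b) ((int)((unsigned)(a) - (unsigned)(b)) >= 0)

int main(void)
{
	unsigned b = 100;              /* when credit should have timed out */
	unsigned a = b + 0x80000000u;  /* more than MAX_ULONG/2 ticks later */

	/* a is long past b, yet the wrapped signed comparison says it is
	 * not, so the credit timeout never fires: prints 0 */
	printf("expired? %d\n", time_after_eq32(a, b));
	return 0;
}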