search for: vq2rxq

Displaying 20 results from an estimated 23 matches for "vq2rxq".

2014 Jan 07
0
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...emory. */ struct delayed_work refill; @@ -358,6 +392,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, unsigned int len) { struct skb_vnet_hdr *hdr = ctx->buf; + struct virtnet_info *vi = netdev_priv(dev); + struct receive_queue_stats *rq_stats = vi->rq_stats[vq2rxq(rq->vq)]; int num_buf = hdr->mhdr.num_buffers; struct page *page = virt_to_head_page(ctx->buf); int offset = ctx->buf - page_address(page); @@ -413,7 +449,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, } } - ewma_add(&rq->mrg_avg_pkt_len, head_...
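For reference, vq2rxq (the search term) maps a virtqueue back to the index of its receive queue, which is what the per-queue stats lookup in the hunk above relies on. A minimal sketch of the index-mapping helpers, assuming the usual virtio-net layout in which rx and tx virtqueues alternate (rx0, tx0, rx1, tx1, ...), roughly as they appear in drivers/net/virtio_net.c:

static int vq2rxq(struct virtqueue *vq)
{
	/* even virtqueue indices carry receive queues */
	return vq->index / 2;
}

static int rxq2vq(int rxq)
{
	return rxq * 2;
}

static int vq2txq(struct virtqueue *vq)
{
	/* odd virtqueue indices carry send queues */
	return (vq->index - 1) / 2;
}

static int txq2vq(int txq)
{
	return txq * 2 + 1;
}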
2017 Apr 02
1
[PATCH net-next 3/3] virtio-net: clean tx descriptors from rx napi
...030,34 @@ static int virtnet_receive(struct receive_queue *rq, int budget) return received; } +static void free_old_xmit_skbs(struct send_queue *sq); + +static void virtnet_poll_cleantx(struct receive_queue *rq) +{ + struct virtnet_info *vi = rq->vq->vdev->priv; + unsigned int index = vq2rxq(rq->vq); + struct send_queue *sq = &vi->sq[index]; + struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index); + + if (!sq->napi.weight) + return; + + __netif_tx_lock(txq, smp_processor_id()); + free_old_xmit_skbs(sq); + __netif_tx_unlock(txq); + + if (sq->vq->num_free...
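The preview above is cut off at the final check. A hedged sketch of the complete helper, where the tail (waking the tx queue once enough descriptors are free) is an assumption based on the pattern the driver uses elsewhere, not the exact patch text:

static void virtnet_poll_cleantx(struct receive_queue *rq)
{
	struct virtnet_info *vi = rq->vq->vdev->priv;
	unsigned int index = vq2rxq(rq->vq);
	struct send_queue *sq = &vi->sq[index];
	struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index);

	/* only clean from rx napi when tx napi is enabled */
	if (!sq->napi.weight)
		return;

	/* reclaim completed tx descriptors under the tx queue lock */
	__netif_tx_lock(txq, smp_processor_id());
	free_old_xmit_skbs(sq);
	__netif_tx_unlock(txq);

	/* assumed tail: restart the queue once there is room for another skb */
	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
		netif_tx_wake_queue(txq);
}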
2017 Apr 03
0
[PATCH net-next 3/3] virtio-net: clean tx descriptors from rx napi
...> > +static void free_old_xmit_skbs(struct send_queue *sq); > + Could you pls re-arrange code to avoid forward declarations? > +static void virtnet_poll_cleantx(struct receive_queue *rq) > +{ > + struct virtnet_info *vi = rq->vq->vdev->priv; > + unsigned int index = vq2rxq(rq->vq); > + struct send_queue *sq = &vi->sq[index]; > + struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index); > + > + if (!sq->napi.weight) > + return; > + > + __netif_tx_lock(txq, smp_processor_id()); > + free_old_xmit_skbs(sq); > + __netif_tx...
2017 Apr 02
5
[PATCH net-next 0/3] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Based on previous patchsets by Jason Wang: [RFC V7 PATCH 0/7] enable tx interrupts for virtio-net http://lkml.iu.edu/hypermail/linux/kernel/1505.3/00245.html Changes: RFC -> v1: - dropped vhost interrupt moderation patch: not needed and likely expensive at light
2014 Jan 07
10
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
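The fallback described here can be illustrated with a rough sketch: try the largest order first with flags that forbid retries and warnings, then step down toward order-0. This is an illustration under assumptions (SKB_FRAG_PAGE_ORDER as the starting order; reuse of an already-held page and the sz fit check are omitted), not the exact patch:

/* Illustrative only: descending-order page_frag refill; existing-page reuse
 * and the sz fit check are left out, so sz is unused in this sketch.
 */
bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
{
	int order;

	for (order = SKB_FRAG_PAGE_ORDER; order >= 0; order--) {
		gfp_t flags = gfp;

		if (order)	/* opportunistic high-order alloc: no retry, no warning */
			flags |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;

		pfrag->page = alloc_pages(flags, order);
		if (pfrag->page) {
			pfrag->size = PAGE_SIZE << order;
			pfrag->offset = 0;
			return true;
		}
	}
	return false;
}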
2017 Apr 03
2
[PATCH net-next 3/3] virtio-net: clean tx descriptors from rx napi
...arrange code to avoid forward declarations? Okay. I'll do the move in a separate patch to simplify review. >> +static void virtnet_poll_cleantx(struct receive_queue *rq) >> +{ >> + struct virtnet_info *vi = rq->vq->vdev->priv; >> + unsigned int index = vq2rxq(rq->vq); >> + struct send_queue *sq = &vi->sq[index]; >> + struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index); >> + >> + if (!sq->napi.weight) >> + return; >> + >> + __netif_tx_lock(txq, smp_processor_...
2017 Apr 18
8
[PATCH net-next v2 0/5] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Changes: v1 -> v2: - disable by default - disable unless affinity_hint_set because cache misses add up to a third higher cycle cost, e.g., in TCP_RR tests. This is not limited to the patch that enables tx completion cleaning in rx napi. - use trylock to
2017 Apr 24
8
[PATCH net-next v3 0/5] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Changes: v2 -> v3: - convert __netif_tx_trylock to __netif_tx_lock on tx napi poll to ensure that the handler always cleans, to avoid deadlock - unconditionally clean in start_xmit to avoid adding an unnecessary "if (use_napi)" branch - remove
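As a hedged illustration of the first v3 bullet (not the patch itself; the exact completion sequence is an assumption), the tx napi handler takes the tx lock unconditionally so it is guaranteed to clean, rather than skipping the work when a trylock fails:

static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
	struct send_queue *sq = container_of(napi, struct send_queue, napi);
	struct virtnet_info *vi = sq->vq->vdev->priv;
	struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));

	/* lock, don't trylock: the poll handler must always make progress */
	__netif_tx_lock(txq, smp_processor_id());
	free_old_xmit_skbs(sq);
	__netif_tx_unlock(txq);

	napi_complete(napi);

	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
		netif_tx_wake_queue(txq);

	return 0;	/* tx cleanup is not budgeted in this sketch */
}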
2012 Nov 27
4
[net-next rfc v7 0/3] Multiqueue virtio-net
Hi all: This series is an updated version of the multiqueue virtio-net driver based on Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet reception and transmission. Please review and comment. A prototype implementation of qemu-kvm support can be found in git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you could specify the queues
2012 Dec 04
3
[PATCH net-next 0/3] Multiqueue support for virtio-net
Hi all: This series is an updated version of the multiqueue virtio-net driver based on Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet reception and transmission. Please review and comment. A prototype implementation of qemu-kvm support can be found in git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you could specify the queues
2012 Dec 05
3
[PATCH net-next v2 0/3] Multiqueue support in virtio-net
Hi all: This series is an updated version of the multiqueue virtio-net driver based on Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet reception and transmission. Please review and comment. A prototype implementation of qemu-kvm support can be found in git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you could specify the queues
2012 Oct 30
6
[rfc net-next v6 0/3] Multiqueue virtio-net
Hi all: This series is an updated version of the multiqueue virtio-net driver based on Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet reception and transmission. Please review and comment. Changes from v5: - Align the implementation with the RFC spec update v4 - Switch the mode between single mode and multiqueue mode without reset - Remove the 256 limitation