search for: skb_queue_len

Displaying 20 results from an estimated 53 matches for "skb_queue_len".

2016 Dec 29
1
[PATCH net-next V2 3/3] tun: rx batching
From: Jason Wang <jasowang at redhat.com> Date: Wed, 28 Dec 2016 16:09:31 +0800
> +	spin_lock(&queue->lock);
> +	qlen = skb_queue_len(queue);
> +	if (qlen > rx_batched)
> +		goto drop;
> +	__skb_queue_tail(queue, skb);
> +	if (!more || qlen + 1 > rx_batched) {
> +		__skb_queue_head_init(&process_queue);
> +		skb_queue_splice_tail_init(queue, &process_queue);
> +		rcv = true;
> +	}
> +	spin...
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
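
The mechanism the cover letter describes can be sketched in userspace C (everything here is hypothetical illustration, not code from the series: the send_burst() helper, fd, and packet array are mine; in the series itself vhost_net passes MSG_MORE down through the socket's sendmsg when more buffers are pending):

	#include <sys/socket.h>
	#include <sys/uio.h>

	/* Illustrative only: flag every packet of a burst except the last
	 * with MSG_MORE, so the receiving backend (e.g. tap) may hold the
	 * packets in a list and process the whole burst at once.
	 * Error handling omitted for brevity. */
	static void send_burst(int fd, const struct iovec *pkts, int n)
	{
		int i;

		for (i = 0; i < n; i++) {
			int flags = (i < n - 1) ? MSG_MORE : 0;

			send(fd, pkts[i].iov_base, pkts[i].iov_len, flags);
		}
	}

The point of the hint is that only the sender knows whether a burst is in progress; the backend merely honors the flag by deferring per-packet processing.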
2008 Aug 26
0
[PATCH] xen-netfront: Avoid unaligned accesses to IP header.
...ront.c b/drivers/net/xen-netfront.c
index 3c3dd40..7416f41 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -239,11 +239,14 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 */
 	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
 	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD,
+		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
+		/* Align ip header to a 16 bytes boundary */
+		s...
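
The idiom the patch applies is worth spelling out as a self-contained sketch (the helper name is hypothetical, and the skb_reserve() call that completes the truncated "+ s..." line above is an assumption based on the standard NET_IP_ALIGN pattern; the driver would pass RX_COPY_THRESHOLD as len):

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/* Over-allocate by NET_IP_ALIGN (2 bytes on most architectures),
	 * then reserve that headroom so the IP header following the
	 * 14-byte Ethernet header lands on an aligned boundary, avoiding
	 * unaligned loads on strict architectures. */
	static struct sk_buff *alloc_aligned_rx_skb(struct net_device *dev,
						    unsigned int len)
	{
		struct sk_buff *skb;

		skb = __netdev_alloc_skb(dev, len + NET_IP_ALIGN,
					 GFP_ATOMIC | __GFP_NOWARN);
		if (unlikely(!skb))
			return NULL;
		skb_reserve(skb, NET_IP_ALIGN);	/* align the IP header */
		return skb;
	}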
2008 Jul 03
0
[PATCH] xen/netfront: Avoid unaligned accesses to IP datagrams.
...ront.c b/drivers/net/xen-netfront.c
index 44aed80..2724688 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -239,11 +239,14 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 */
 	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
 	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD,
+		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
+		/* Align ip header to a 16 bytes boundary */
+		s...
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
...*tfile,
 	return skb;
 }
 
+static void tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
+			   int more)
+{
+	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+	struct sk_buff_head process_queue;
+	int qlen;
+	bool rcv = false;
+
+	spin_lock(&queue->lock);
+	qlen = skb_queue_len(queue);
+	__skb_queue_tail(queue, skb);
+	if (!more || qlen == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_de...
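
The flush loop is cut off mid-identifier; completing it (my assumption, consistent with the structure of the V4/V5 postings elsewhere in these results) gives:

+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+}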
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
...le *tfile,
 	return skb;
 }
 
+static int tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
+			  int more)
+{
+	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+	struct sk_buff_head process_queue;
+	int qlen;
+	bool rcv = false;
+
+	spin_lock(&queue->lock);
+	qlen = skb_queue_len(queue);
+	if (qlen > rx_batched)
+		goto drop;
+	__skb_queue_tail(queue, skb);
+	if (!more || qlen + 1 > rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {...
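
The snippet stops at the flush branch; a plausible tail for V2, including the drop label targeted by the goto above (an assumption on my part — the drop-path body and return values are not visible in the snippet):

+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+	return 0;
+
+drop:
+	spin_unlock(&queue->lock);
+	kfree_skb(skb);		/* assumed: free the overflowed packet */
+	return -EFAULT;		/* assumed errno; not visible in the snippet */
+}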
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...struct sk_buff_head process_queue;
+	u32 rx_batched = tun->rx_batched;
+	bool rcv = false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queue_tail(queue, skb);
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		struct sk_buff *nskb;
+
+		local_bh_disable();
+		whil...
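
The V5 snippet truncates inside the flush branch. Because in this version the flushing skb is not appended to the queue (note the else branch above), the tail plausibly drains process_queue and then delivers skb itself; this matches the shape of the code eventually merged into drivers/net/tun.c:

+		while ((nskb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(nskb);
+		/* the skb that triggered the flush was never queued,
+		 * so deliver it after the batch */
+		netif_receive_skb(skb);
+		local_bh_enable();
+	}
+}

Relative to V2, this rewrite removes the drop path entirely: a full batch is flushed rather than dropped, and the skb that arrives with the queue already at rx_batched rides along as the flush trigger.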
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...struct sk_buff_head process_queue;
+	u32 rx_batched = tun->rx_batched;
+	bool rcv = false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queue_tail(queue, skb);
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		struct sk_buff *nskb;
+		local_bh_disable();
+		while...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
...un->rx_batched;
> +	bool rcv = false;
> +
> +	if (!rx_batched || (!more && skb_queue_empty(queue))) {
> +		local_bh_disable();
> +		netif_receive_skb(skb);
> +		local_bh_enable();
> +		return;
> +	}
> +
> +	spin_lock(&queue->lock);
> +	if (!more || skb_queue_len(queue) == rx_batched) {
> +		__skb_queue_head_init(&process_queue);
> +		skb_queue_splice_tail_init(queue, &process_queue);
> +		rcv = true;
> +	} else {
> +		__skb_queue_tail(queue, skb);
> +	}
> +	spin_unlock(&queue->lock);
> +
> +	if (rcv) {
> +		stru...
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%