search for: skb_queue_splice_tail_init

Displaying 10 results from an estimated 16 matches for "skb_queue_splice_tail_init".

2016 Dec 29
1
[PATCH net-next V2 3/3] tun: rx batching
...28 Dec 2016 16:09:31 +0800
> +	spin_lock(&queue->lock);
> +	qlen = skb_queue_len(queue);
> +	if (qlen > rx_batched)
> +		goto drop;
> +	__skb_queue_tail(queue, skb);
> +	if (!more || qlen + 1 > rx_batched) {
> +		__skb_queue_head_init(&process_queue);
> +		skb_queue_splice_tail_init(queue, &process_queue);
> +		rcv = true;
> +	}
> +	spin_unlock(&queue->lock);
Since you always clear the 'queue' when you insert the skb that hits the limit, I don't see how the "goto drop" path can possibly be taken.
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
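For context, the tap-side helper these patches add is small. The following is a minimal sketch reconstructed from the V3 hunk further down; the tun_rx_batched name and signature are inferred from the snippets, rx_batched is the batch-size knob from the patch (a module parameter there), and the surrounding tun driver context is omitted, so treat this as an illustration rather than the committed code:

	/* Sketch only: queue skbs while the sender hints MSG_MORE, then
	 * splice the whole batch onto a private list under the lock and
	 * feed it to the stack with the lock released. */
	static void tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
				   int more)
	{
		struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
		struct sk_buff_head process_queue;
		int qlen;
		bool rcv = false;

		spin_lock(&queue->lock);
		qlen = skb_queue_len(queue);
		__skb_queue_tail(queue, skb);
		if (!more || qlen == rx_batched) {
			/* Flush: move every queued skb off in one step. */
			__skb_queue_head_init(&process_queue);
			skb_queue_splice_tail_init(queue, &process_queue);
			rcv = true;
		}
		spin_unlock(&queue->lock);

		if (rcv) {
			local_bh_disable();
			while ((skb = __skb_dequeue(&process_queue)))
				netif_receive_skb(skb);
			local_bh_enable();
		}
	}

skb_queue_splice_tail_init() is what makes the flush cheap: it detaches the entire pending list and reinitializes the source queue in O(1), so the queue lock is never held while netif_receive_skb() runs.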
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
...ad *queue = &tfile->sk.sk_write_queue;
+	struct sk_buff_head process_queue;
+	int qlen;
+	bool rcv = false;
+
+	spin_lock(&queue->lock);
+	qlen = skb_queue_len(queue);
+	__skb_queue_tail(queue, skb);
+	if (!more || qlen == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+}
+
 /* Get packet from user space buffer */
 static ssize_t tun_get_u...
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
...struct sk_buff_head process_queue;
+	int qlen;
+	bool rcv = false;
+
+	spin_lock(&queue->lock);
+	qlen = skb_queue_len(queue);
+	if (qlen > rx_batched)
+		goto drop;
+	__skb_queue_tail(queue, skb);
+	if (!more || qlen + 1 > rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+
+	return 0;
+drop:
+	spin_unlock(&queue->lock);
+	kfree_skb(sk...
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queue_tail(queue, skb);
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		struct sk_buff *nskb;
+
+		local_bh_disable();
+		while ((nskb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(nskb);
+		netif_receive_skb(skb...
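The snippets also show how the series evolved: V2 dropped packets once the backlog exceeded rx_batched (the "goto drop" path questioned in the review above), V3 removed the drop path, and V4/V5 added a fast path that bypasses the queue entirely and then receives the flushing skb directly instead of queueing it. Paraphrased from the V5 hunk above:

	/* V4/V5 fast path: batching disabled, or nothing pending and no
	 * MSG_MORE hint, so hand the skb straight to the stack without
	 * ever taking the queue lock. */
	if (!rx_batched || (!more && skb_queue_empty(queue))) {
		local_bh_disable();
		netif_receive_skb(skb);
		local_bh_enable();
		return;
	}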
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queue_tail(queue, skb);
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		struct sk_buff *nskb;
+		local_bh_disable();
+		while ((nskb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(nskb);
+		netif_receive_skb(skb);...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
...b_queue_empty(queue))) {
> +		local_bh_disable();
> +		netif_receive_skb(skb);
> +		local_bh_enable();
> +		return;
> +	}
> +
> +	spin_lock(&queue->lock);
> +	if (!more || skb_queue_len(queue) == rx_batched) {
> +		__skb_queue_head_init(&process_queue);
> +		skb_queue_splice_tail_init(queue, &process_queue);
> +		rcv = true;
> +	} else {
> +		__skb_queue_tail(queue, skb);
> +	}
> +	spin_unlock(&queue->lock);
> +
> +	if (rcv) {
> +		struct sk_buff *nskb;
> +		local_bh_disable();
> +		while ((nskb = __skb_dequeue(&process_queue)))
> ...
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host: Mpps -+%