search for: sk_buff_head

Displaying 20 results from an estimated 69 matches for "sk_buff_head".

2017 Jan 03
2
[PATCH net-next V2 3/3] tun: rx batching
On Wed, Dec 28, 2016 at 04:09:31PM +0800, Jason Wang wrote: > +static int tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb, > + int more) > +{ > + struct sk_buff_head *queue = &tfile->sk.sk_write_queue; > + struct sk_buff_head process_queue; > + int qlen; > + bool rcv = false; > + > + spin_lock(&queue->lock); Should this be spin_lock_bh()? Below and in tun_get_user() there are explicit local_bh_disable() calls so I guess BHs can in...
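The locking concern raised here is the usual one: if a queue can also be touched from bottom-half (softirq) context, the process-context user must take the queue lock with BHs disabled. A minimal sketch of the variant the reviewer is suggesting — only the sk_buff_head helpers are real APIs, the wrapper function is hypothetical:

        #include <linux/skbuff.h>
        #include <linux/spinlock.h>

        /* Hypothetical illustration of the question above: if sk_write_queue
         * can also be appended to from BH context, the process-context path
         * should use the _bh lock variant so a softirq cannot run on this
         * CPU while the lock is held. */
        static void example_enqueue(struct sk_buff_head *queue, struct sk_buff *skb)
        {
                spin_lock_bh(&queue->lock);     /* disable BHs, then take the lock */
                __skb_queue_tail(queue, skb);   /* lockless variant is fine: we hold queue->lock */
                spin_unlock_bh(&queue->lock);
        }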
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
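For context, MSG_MORE is an existing sendmsg() flag telling the socket that more data is expected shortly. A hedged sketch of how a vhost-style sender could pass that hint down to the backend socket — the wrapper and the more_pending parameter are illustrative, only the flags and the proto_ops call are real:

        #include <linux/net.h>
        #include <linux/socket.h>

        /* Illustrative sketch: set MSG_MORE on the backend socket (e.g. tap)
         * when further packets are already pending, so the backend may defer
         * and batch instead of processing each packet immediately. */
        static int example_sendmsg(struct socket *sock, struct msghdr *msg,
                                   size_t len, bool more_pending)
        {
                msg->msg_flags = MSG_DONTWAIT;
                if (more_pending)               /* hint: more is coming, batch it */
                        msg->msg_flags |= MSG_MORE;
                return sock->ops->sendmsg(sock, msg, len);
        }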
2017 Jan 04
0
[PATCH net-next V2 3/3] tun: rx batching
On 2017年01月03日 21:33, Stefan Hajnoczi wrote: > On Wed, Dec 28, 2016 at 04:09:31PM +0800, Jason Wang wrote: >> +static int tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb, >> + int more) >> +{ >> + struct sk_buff_head *queue = &tfile->sk.sk_write_queue; >> + struct sk_buff_head process_queue; >> + int qlen; >> + bool rcv = false; >> + >> + spin_lock(&queue->lock); > Should this be spin_lock_bh()? Below and in tun_get_user() there are > explicit local_bh_disable(...
2007 Dec 21
0
[kvm-devel] [Virtio-for-kvm] [PATCH 6/13] [Mostly resend] virtio additions
...oalescing. */ + struct hrtimer tx_timer; + /* Number of input buffers, and max we've ever had. */ unsigned int num, max; + /* Number of queued output buffers, and max we've ever had. */ + unsigned int out_num, out_max; + /* Receive & send queues. */ struct sk_buff_head recv; struct sk_buff_head send; @@ -223,6 +230,20 @@ static void free_old_xmit_skbs(struct virtnet_info *vi) } } +static enum hrtimer_restart kick_xmit(struct hrtimer *t) +{ + struct virtnet_info *vi = container_of(t,struct virtnet_info,tx_timer); + + BUG_ON(!in_softirq()); +...
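A rough reconstruction of the shape of the diff above: the device keeps receive/send sk_buff_head queues plus an hrtimer used to coalesce tx kicks. Field names follow the quoted fragment; the struct layout and the callback body are illustrative, not the actual patch:

        #include <linux/hrtimer.h>
        #include <linux/kernel.h>
        #include <linux/skbuff.h>

        struct example_virtnet_info {
                struct hrtimer tx_timer;        /* tx kick coalescing */
                struct sk_buff_head recv;       /* receive queue */
                struct sk_buff_head send;       /* send queue */
        };

        static enum hrtimer_restart example_kick_xmit(struct hrtimer *t)
        {
                struct example_virtnet_info *vi =
                        container_of(t, struct example_virtnet_info, tx_timer);

                /* one host notification covers everything queued since the
                 * timer was armed; the actual kick is elided in this sketch */
                (void)vi;
                return HRTIMER_NORESTART;
        }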
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
...(&tfile->sk.sk_write_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1140,10 +1145,36 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb; } +static void tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb, + int more) +{ + struct sk_buff_head *queue = &tfile->sk.sk_write_queue; + struct sk_buff_head process_queue; + int qlen; + bool rcv = false; + + spin_lock(&queue->lock); + qlen = skb_queue_len(queue); + __skb_queue_tail(queue, skb); + if (!more || qlen == rx_batched) { + __skb_queue_head_init(&process_queue); + sk...
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
...ge(&tfile->sk.sk_write_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1140,10 +1145,44 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb; } +static int tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb, + int more) +{ + struct sk_buff_head *queue = &tfile->sk.sk_write_queue; + struct sk_buff_head process_queue; + int qlen; + bool rcv = false; + + spin_lock(&queue->lock); + qlen = skb_queue_len(queue); + if (qlen > rx_batched) + goto drop; + __skb_queue_tail(queue, skb); + if (!more || qlen + 1 > rx_batched) { +...
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...ite_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1139,10 +1141,46 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb; } +static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile, + struct sk_buff *skb, int more) +{ + struct sk_buff_head *queue = &tfile->sk.sk_write_queue; + struct sk_buff_head process_queue; + u32 rx_batched = tun->rx_batched; + bool rcv = false; + + if (!rx_batched || (!more && skb_queue_empty(queue))) { + local_bh_disable(); + netif_receive_skb(skb); + local_bh_enable(); + return; + } + + s...
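The snippet is cut off. Based on the fragments quoted across the V2–V5 postings, the batching path most likely continues along these lines — a hedged reconstruction with generalized parameters, not necessarily the final merged driver code:

        #include <linux/netdevice.h>
        #include <linux/skbuff.h>
        #include <linux/spinlock.h>

        /* Reconstruction of the truncated tun_rx_batched() logic above;
         * the queue and rx_batched limit are passed in directly here
         * instead of being read from tun_struct/tun_file. */
        static void example_rx_batched(struct sk_buff_head *queue, u32 rx_batched,
                                       struct sk_buff *skb, int more)
        {
                struct sk_buff_head process_queue;
                bool rcv = false;

                if (!rx_batched || (!more && skb_queue_empty(queue))) {
                        /* fast path: batching disabled, or burst over with
                         * nothing pending - deliver immediately */
                        local_bh_disable();
                        netif_receive_skb(skb);
                        local_bh_enable();
                        return;
                }

                spin_lock(&queue->lock);
                if (!more || skb_queue_len(queue) == rx_batched) {
                        /* burst ended or batch full: steal the pending queue
                         * in one go and flush it below */
                        __skb_queue_head_init(&process_queue);
                        skb_queue_splice_tail_init(queue, &process_queue);
                        rcv = true;
                } else {
                        __skb_queue_tail(queue, skb);
                }
                spin_unlock(&queue->lock);

                if (rcv) {
                        struct sk_buff *nskb;

                        local_bh_disable();
                        while ((nskb = __skb_dequeue(&process_queue)))
                                netif_receive_skb(nskb);
                        netif_receive_skb(skb);
                        local_bh_enable();
                }
        }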
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...ite_queue); skb_queue_purge(&tfile->sk.sk_error_queue); } @@ -1140,10 +1142,45 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, return skb; } +static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile, + struct sk_buff *skb, int more) +{ + struct sk_buff_head *queue = &tfile->sk.sk_write_queue; + struct sk_buff_head process_queue; + u32 rx_batched = tun->rx_batched; + bool rcv = false; + + if (!rx_batched || (!more && skb_queue_empty(queue))) { + local_bh_disable(); + netif_receive_skb(skb); + local_bh_enable(); + return; + } + + s...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
..._error_queue); > } > > @@ -1140,10 +1142,45 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile, > return skb; > } > > +static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile, > + struct sk_buff *skb, int more) > +{ > + struct sk_buff_head *queue = &tfile->sk.sk_write_queue; > + struct sk_buff_head process_queue; > + u32 rx_batched = tun->rx_batched; > + bool rcv = false; > + > + if (!rx_batched || (!more && skb_queue_empty(queue))) { > + local_bh_disable(); > + netif_receive_skb(skb); > +...
2008 May 26
7
[PATCH 1/3] virtio: fix virtio_net xmit of freed skb bug
If we fail to transmit a packet, we assume the queue is full and put the skb into last_xmit_skb. However, if more space frees up before we xmit it, we loop, and the result can be transmitting the same skb twice. Fix is simple: set skb to NULL if we've used it in some way, and check before sending. Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> --- drivers/net/virtio_net.c |
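The fix described is guarding against re-sending a cached skb: once last_xmit_skb has actually been handed to the ring, clear the pointer so a later loop iteration cannot transmit it again. A hedged sketch of that pattern — the struct, helper and ring_has_room flag are illustrative, only the last_xmit_skb idea comes from the quoted description:

        #include <linux/skbuff.h>

        struct example_vi {
                struct sk_buff *last_xmit_skb;  /* deferred, not yet consumed */
        };

        static bool example_try_xmit_deferred(struct example_vi *vi, bool ring_has_room)
        {
                if (!vi->last_xmit_skb)
                        return true;            /* nothing deferred */
                if (!ring_has_room)
                        return false;           /* still full, keep it for later */

                /* hand the skb to the ring (elided), then forget it so the
                 * next pass cannot transmit the same skb twice */
                vi->last_xmit_skb = NULL;
                return true;
        }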
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%