search for: local_bh_disable

Displaying 20 results from an estimated 144 matches for "local_bh_disable".

2016 Dec 31
1
[PATCH net-next V3 3/3] tun: rx batching
...jasowang at redhat.com> Date: Fri, 30 Dec 2016 13:20:51 +0800

> @@ -1283,10 +1314,15 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
> 	skb_probe_transport_header(skb, 0);
>
> 	rxhash = skb_get_hash(skb);
> +
> #ifndef CONFIG_4KSTACKS
> -	local_bh_disable();
> -	netif_receive_skb(skb);
> -	local_bh_enable();
> +	if (!rx_batched) {
> +		local_bh_disable();
> +		netif_receive_skb(skb);
> +		local_bh_enable();
> +	} else {
> +		tun_rx_batched(tfile, skb, more);
> +	}
> #else
> 	netif_rx_ni(skb);
> #endif

If rx_ba...
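Context for the #ifndef above: netif_receive_skb() delivers a packet synchronously and is meant to run with bottom halves disabled, while netif_rx_ni() is the process-context variant that defers to softirq (used here on CONFIG_4KSTACKS builds to keep the stack shallow). A sketch of the new non-4KSTACKS path with that made explicit (illustrative, not the literal patch):

	/* tun_get_user() runs in process context, so BHs are masked around
	 * the synchronous receive; with batching enabled the skb is queued
	 * instead and flushed later in one BH-disabled section. */
	if (!rx_batched) {
		local_bh_disable();
		netif_receive_skb(skb);
		local_bh_enable();
	} else {
		tun_rx_batched(tfile, skb, more);	/* 'more' = sender's MSG_MORE hint */
	}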
2017 Jan 03
2
[PATCH net-next V2 3/3] tun: rx batching
...int more)
> +{
> +	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
> +	struct sk_buff_head process_queue;
> +	int qlen;
> +	bool rcv = false;
> +
> +	spin_lock(&queue->lock);

Should this be spin_lock_bh()? Below and in tun_get_user() there are explicit local_bh_disable() calls, so I guess BHs can interrupt us here and this would deadlock.
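The deadlock the reviewer is worried about: a bottom half running on the same CPU as the lock holder would spin on queue->lock forever, because the holder cannot run again until the BH returns. The _bh spinlock variants close that window; a minimal sketch using the standard kernel spinlock API:

	/* Taking the lock with BHs disabled prevents a bottom half on this
	 * CPU from interrupting the critical section and contending for the
	 * same lock, which would deadlock with a plain spin_lock(). */
	spin_lock_bh(&queue->lock);
	__skb_queue_tail(queue, skb);	/* lockless variant; lock already held */
	spin_unlock_bh(&queue->lock);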
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
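MSG_MORE is the standard sendmsg() flag; the series reuses it as the batching hint between vhost_net and the tap socket. A minimal userspace illustration of the flag itself (hypothetical send_packet() helper, not the vhost code):

	#include <stdbool.h>
	#include <sys/socket.h>

	/* Set MSG_MORE on every packet except the last of a burst, telling
	 * the layer below it may hold the data and process it in a batch. */
	static ssize_t send_packet(int fd, struct msghdr *msg, bool more)
	{
		return sendmsg(fd, msg, more ? MSG_MORE : 0);
	}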
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
...(&queue->lock);
+	qlen = skb_queue_len(queue);
+	__skb_queue_tail(queue, skb);
+	if (!more || qlen == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+}
+
 /* Get packet from user space buffer */
 static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
 			    void *msg_control, struct iov_iter *from,
-			    int nobl...
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
...eue);
+	if (qlen > rx_batched)
+		goto drop;
+	__skb_queue_tail(queue, skb);
+	if (!more || qlen + 1 > rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+
+	return 0;
+drop:
+	spin_unlock(&queue->lock);
+	kfree_skb(skb);
+	return -EFAULT;
+}
+
 /* Get packet from user space buffer */
 static ssize_t tun_get_user(struct tun_struct *tu...
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...n, struct tun_file *tfile,
+			   struct sk_buff *skb, int more)
+{
+	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+	struct sk_buff_head process_queue;
+	u32 rx_batched = tun->rx_batched;
+	bool rcv = false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queu...
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...n, struct tun_file *tfile,
+			   struct sk_buff *skb, int more)
+{
+	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+	struct sk_buff_head process_queue;
+	u32 rx_batched = tun->rx_batched;
+	bool rcv = false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queu...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2012 Oct 22
4
xen_evtchn_do_upcall
...i, Does anybody know the purpose of this method (xen_evtchn_do_upcall)? When I run a user-level application doing TCP receive and the SoftIRQ for eth0 on the same CPU core, everything is OK. But if I run them on 2 different cores, xen_evtchn_do_upcall() shows up (maybe when local_bh_disable() or local_bh_enable() is called) in the __inet_lookup_established() routine, which then takes longer than in the first scenario. Is it due to the sync...
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement for guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
...sk_buff *skb, int more)
> +{
> +	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
> +	struct sk_buff_head process_queue;
> +	u32 rx_batched = tun->rx_batched;
> +	bool rcv = false;
> +
> +	if (!rx_batched || (!more && skb_queue_empty(queue))) {
> +		local_bh_disable();
> +		netif_receive_skb(skb);
> +		local_bh_enable();
> +		return;
> +	}
> +
> +	spin_lock(&queue->lock);
> +	if (!more || skb_queue_len(queue) == rx_batched) {
> +		__skb_queue_head_init(&process_queue);
> +		skb_queue_splice_tail_init(queue, &process_qu...
2018 Sep 06
1
[PATCH net-next 05/11] tuntap: tweak on the path of non-xdp case in tun_build_skb()
...r simplicity
> 	 * we do XDP on skb in case the headroom is not enough.
> 	 */
> -	if (hdr->gso_type || !xdp_prog)
> +	if (hdr->gso_type || !xdp_prog) {
> 		*skb_xdp = 1;
> -	else
> -		*skb_xdp = 0;
> +		goto build;
> +	}
> +
> +	*skb_xdp = 0;
>
> 	local_bh_disable();
> 	rcu_read_lock();
> @@ -1724,6 +1726,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> 	rcu_read_unlock();
> 	local_bh_enable();
>
> +build:

But this is spaghetti code. Please just put common code into functions and call them, don't goto.

> s...
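A sketch of the restructuring the reviewer is asking for: instead of one branch jumping over the XDP section with 'goto build', both paths call a small helper holding the shared skb-construction tail (names and parameters are illustrative, not the actual follow-up patch):

	/* Hypothetical helper for the code both paths previously reached
	 * via the 'build:' label. */
	static struct sk_buff *tun_build_skb_tail(char *buf, int buflen,
						  int len, int pad)
	{
		struct sk_buff *skb = build_skb(buf, buflen);

		if (!skb)
			return ERR_PTR(-ENOMEM);

		skb_reserve(skb, pad);	/* headroom before the payload */
		skb_put(skb, len);	/* expose the received bytes */
		return skb;
	}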