search for: netif_receive_skb

Displaying results from an estimated 136 matches for "netif_receive_skb".

2016 Dec 31
1
[PATCH net-next V3 3/3] tun: rx batching
...Date: Fri, 30 Dec 2016 13:20:51 +0800
> @@ -1283,10 +1314,15 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
>  	skb_probe_transport_header(skb, 0);
>
>  	rxhash = skb_get_hash(skb);
> +
>  #ifndef CONFIG_4KSTACKS
> -	local_bh_disable();
> -	netif_receive_skb(skb);
> -	local_bh_enable();
> +	if (!rx_batched) {
> +		local_bh_disable();
> +		netif_receive_skb(skb);
> +		local_bh_enable();
> +	} else {
> +		tun_rx_batched(tfile, skb, more);
> +	}
>  #else
>  	netif_rx_ni(skb);
>  #endif
If rx_batched has been set, and we a...
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement on guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
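The MSG_MORE hint mentioned here is the standard socket flag; below is a minimal sketch of the batching pattern it enables, assuming a hypothetical datagram socket fd and packet buffers (in the series itself the hint travels through vhost_net's in-kernel sendmsg path):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send n packets, setting MSG_MORE on all but the last so the
 * receiver may defer processing until the batch is complete.
 * fd, pkts, lens and n are hypothetical. */
static void send_batch(int fd, char **pkts, size_t *lens, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		struct iovec iov = { .iov_base = pkts[i], .iov_len = lens[i] };
		struct msghdr msg;

		memset(&msg, 0, sizeof(msg));
		msg.msg_iov = &iov;
		msg.msg_iovlen = 1;
		sendmsg(fd, &msg, i < n - 1 ? MSG_MORE : 0);
	}
}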
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...le,
+			   struct sk_buff *skb, int more)
+{
+	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+	struct sk_buff_head process_queue;
+	u32 rx_batched = tun->rx_batched;
+	bool rcv = false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queue_tail(queue, skb);
+	}...
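The excerpt above truncates before the flush path. Based on the V3 posting further down, the spliced process_queue would be drained into netif_receive_skb roughly as follows (a reconstruction from the excerpts, not the verbatim patch):

	spin_unlock(&queue->lock);

	if (rcv) {
		local_bh_disable();
		while ((skb = __skb_dequeue(&process_queue)))
			netif_receive_skb(skb);
		local_bh_enable();
	}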
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...le,
+			   struct sk_buff *skb, int more)
+{
+	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+	struct sk_buff_head process_queue;
+	u32 rx_batched = tun->rx_batched;
+	bool rcv = false;
+
+	if (!rx_batched || (!more && skb_queue_empty(queue))) {
+		local_bh_disable();
+		netif_receive_skb(skb);
+		local_bh_enable();
+		return;
+	}
+
+	spin_lock(&queue->lock);
+	if (!more || skb_queue_len(queue) == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	} else {
+		__skb_queue_tail(queue, skb);
+	}...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement on guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This is done by using MSG_MORE as a hint to the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement on guest pktgen over mlx4 (noqueue) on the host: Mpps -+%
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
...e, skb);
+	if (!more || qlen == rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+}
+
 /* Get packet from user space buffer */
 static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
 			    void *msg_control, struct iov_iter *from,
-			    int noblock)
+			    int noblock, bool more)
 {
 	struct tun_pi pi = { 0, cpu_to_be16(...
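The signature change above adds a bool more parameter to tun_get_user; at a sendmsg-path call site it would plausibly be derived from the sender's MSG_MORE flag, along these lines (a sketch, not the patch text; error handling is elided and tun_get/tun_put are approximations of the driver's reference helpers):

static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
{
	struct tun_file *tfile = container_of(sock, struct tun_file, socket);
	struct tun_struct *tun = tun_get(tfile);
	int ret;

	if (!tun)
		return -EBADFD;
	/* Propagate the sender's MSG_MORE hint so tun_get_user can
	 * decide whether to queue this skb or flush the batch. */
	ret = tun_get_user(tun, tfile, m->msg_control, &m->msg_iter,
			   m->msg_flags & MSG_DONTWAIT,
			   m->msg_flags & MSG_MORE);
	tun_put(tun);
	return ret;
}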
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
...);
+	if (!more || qlen + 1 > rx_batched) {
+		__skb_queue_head_init(&process_queue);
+		skb_queue_splice_tail_init(queue, &process_queue);
+		rcv = true;
+	}
+	spin_unlock(&queue->lock);
+
+	if (rcv) {
+		local_bh_disable();
+		while ((skb = __skb_dequeue(&process_queue)))
+			netif_receive_skb(skb);
+		local_bh_enable();
+	}
+
+	return 0;
+drop:
+	spin_unlock(&queue->lock);
+	kfree_skb(skb);
+	return -EFAULT;
+}
+
 /* Get packet from user space buffer */
 static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
 			    void *msg_control, struct iov_iter *from,
-...
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
...;
> +{
> +	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
> +	struct sk_buff_head process_queue;
> +	u32 rx_batched = tun->rx_batched;
> +	bool rcv = false;
> +
> +	if (!rx_batched || (!more && skb_queue_empty(queue))) {
> +		local_bh_disable();
> +		netif_receive_skb(skb);
> +		local_bh_enable();
> +		return;
> +	}
> +
> +	spin_lock(&queue->lock);
> +	if (!more || skb_queue_len(queue) == rx_batched) {
> +		__skb_queue_head_init(&process_queue);
> +		skb_queue_splice_tail_init(queue, &process_queue);
> +		rcv = true;
> ...
2013 Sep 12
15
large packet support in netfront driver and guest network throughput
Hi All, I am sure this has been answered somewhere on the list in the past, but I can't find it. I was wondering if the Linux guest netfront driver has GRO support in it. tcpdump shows packets coming in with 1500 bytes, although eth0 in dom0 and the vif corresponding to the Linux guest in dom0 show that they receive large packets: In dom0: eth0 Link encap:Ethernet HWaddr
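Whether a driver's receive path feeds GRO shows up in whether it calls napi_gro_receive() rather than netif_receive_skb() from its NAPI poll routine. A generic sketch of the GRO-enabled pattern follows (not netfront's actual code; example_next_rx_skb is a hypothetical helper):

/* Inside a driver's NAPI poll routine: napi_gro_receive() lets the
 * core attempt GRO coalescing before the skb reaches the stack;
 * calling netif_receive_skb() directly bypasses GRO. */
static struct sk_buff *example_next_rx_skb(void); /* hypothetical */

static int example_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;
	struct sk_buff *skb;

	while (work_done < budget && (skb = example_next_rx_skb())) {
		skb->protocol = eth_type_trans(skb, napi->dev);
		napi_gro_receive(napi, skb);
		work_done++;
	}

	if (work_done < budget)
		napi_complete(napi);
	return work_done;
}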
2005 Dec 05
11
Xen 3.0 and Hyperthreading an issue?
Just gave 3.0 a spin. Had been running 2.0.7 for the past 3 months or so without problems (aside from intermittent failure during live migration). Anyway, 3.0 seems to have an issue with my machine. It starts up the 4 domains that I've got defined (was running 6 user domains with 2.0.7, but two of those were running 2.4 kernels which I can't seem to build with Xen 3.0 yet, and
2014 Mar 30
2
what is the driver of vm's virtual ethernet?
Hi all, each port of a bridge has a packet-processing function called br_handle_frame. I want to know: before this function is called, who gets the packets, and how? If it is a real physical Ethernet device, it must be the driver, but for a virtual Ethernet device, what is the driver? Thanks
2007 Jun 13
2
HTB deadlock
...ish_output+0x0/0x192
[<c029dfef>] ip_forward+0x1c8/0x2b9
[<c029ddf0>] ip_forward_finish+0x0/0x37
[<c029c962>] ip_rcv+0x2a5/0x538
[<c029c100>] ip_rcv_finish+0x0/0x2aa
[<c027f3bc>] __netdev_alloc_skb+0x12/0x2a
[<c029c6bd>] ip_rcv+0x0/0x538
[<c0282a1e>] netif_receive_skb+0x218/0x318
[<c0270008>] bitmap_get_counter+0x41/0x1e6
[<f8a6146d>] e1000_clean_rx_irq+0x12c/0x4ef [e1000]
[<f8a61341>] e1000_clean_rx_irq+0x0/0x4ef [e1000]
[<f8a60612>] e1000_clean+0xe5/0x130 [e1000]
[<c0284573>] net_rx_action+0xbc/0x1d5
[<c0123315>] __do_...