search for: this_cpu_inc

2016 Jun 08
7
[PATCH 0/6] virtio_net: use common code for virtio_net_hdr and skb GSO conversion
Hi, these patches introduce virtio_net_hdr_{from,to}_skb functions for conversion of GSO information between an skb and a virtio_net_hdr.

Mike Rapoport (6):
  virtio_net: add _UAPI prefix to virtio_net header guards
  virtio_net: introduce virtio_net_hdr_{from,to}_skb
  macvtap: use common code for virtio_net_hdr and skb GSO conversion
  tuntap: use common code for virtio_net_hdr and skb GSO
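For orientation, here is a minimal sketch of how a driver might call the two helpers this series adds. The demo_* wrappers are hypothetical; the three-argument signatures are the ones introduced by this series (later kernels extend virtio_net_hdr_from_skb with extra parameters):

#include <linux/skbuff.h>
#include <linux/virtio_net.h>

/* RX path: apply the guest-supplied virtio_net_hdr (GSO type/size,
 * csum offsets) to a freshly built skb; drop the packet if the header
 * is malformed. */
static int demo_rx_parse_hdr(struct sk_buff *skb,
			     const struct virtio_net_hdr *gso,
			     bool little_endian)
{
	if (virtio_net_hdr_to_skb(skb, gso, little_endian)) {
		kfree_skb(skb);
		return -EINVAL;
	}
	return 0;
}

/* TX path: describe an skb's GSO/csum state in a virtio_net_hdr before
 * handing the packet to the other side. */
static int demo_tx_fill_hdr(const struct sk_buff *skb,
			    struct virtio_net_hdr *gso,
			    bool little_endian)
{
	return virtio_net_hdr_from_skb(skb, gso, little_endian);
}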
2016 Jun 06
2
[PATCH v5 1/6] qspinlock: powerpc support qspinlock
On Fri, Jun 03, 2016 at 02:33:47PM +1000, Benjamin Herrenschmidt wrote:
>  - For the above, can you show (or describe) where the qspinlock
>    improves things compared to our current locks.
So currently PPC has a fairly straightforward test-and-set spinlock, IIRC. You have this because of LPAR/virt muck and lock-holder preemption issues etc. qspinlock is 1) a fair lock (like ticket locks)
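To make the contrast concrete, a minimal userspace sketch of the test-and-set scheme being described, using C11 atomics (illustrative only, not the actual powerpc lock): every waiter spins on the same word, so there is no queue and no fairness guarantee, which is exactly what qspinlock's queued waiters fix.

#include <stdatomic.h>

struct tas_lock { atomic_int locked; };

static void tas_lock_acquire(struct tas_lock *l)
{
	/* Whichever CPU's exchange lands first wins; nothing orders the
	 * waiters, so a nearby CPU can retake the lock indefinitely. */
	while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
		while (atomic_load_explicit(&l->locked, memory_order_relaxed))
			;	/* spin read-only until it looks free */
}

static void tas_lock_release(struct tas_lock *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}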
2018 Sep 06
2
[PATCH net-next 04/11] tuntap: simplify error handling in tun_build_skb()
...> -	put_page(alloc_frag->page);
> err_xdp:
> +	alloc_frag->offset -= buflen;
> +	put_page(alloc_frag->page);
> +out:

Out here isn't an error at all, is it? You should not mix return and error handling IMHO.

> 	rcu_read_unlock();
> 	local_bh_enable();
> -	this_cpu_inc(tun->pcpu_stats->rx_dropped);

Doesn't this break rx_dropped accounting?

> -	return NULL;
> +	return skb;
> }
>
> /* Get packet from user space buffer */
> --
> 2.17.1
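The rx_dropped counter at issue follows the kernel's usual per-CPU statistics pattern; a stripped-down sketch (the demo_* names are hypothetical, while alloc_percpu() and this_cpu_inc() are the real APIs):

#include <linux/errno.h>
#include <linux/percpu.h>

struct demo_pcpu_stats {
	u64 rx_dropped;
};

struct demo_dev {
	struct demo_pcpu_stats __percpu *pcpu_stats;
};

static int demo_init_stats(struct demo_dev *d)
{
	d->pcpu_stats = alloc_percpu(struct demo_pcpu_stats);
	return d->pcpu_stats ? 0 : -ENOMEM;
}

/* Lock-free: bumps only this CPU's instance; a reader sums all CPUs.
 * Removing this call from an error path silently loses the drop
 * accounting, which is the regression the review points out. */
static void demo_count_drop(struct demo_dev *d)
{
	this_cpu_inc(d->pcpu_stats->rx_dropped);
}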
2018 Sep 06
2
[PATCH net-next 06/11] tuntap: split out XDP logic
...page);
> 		skb = ERR_PTR(-ENOMEM);
> 		goto out;
> 	}
>
> -	skb_reserve(skb, pad - delta);
> +	skb_reserve(skb, pad);
> 	skb_put(skb, len);
>
> 	return skb;
>
> err_xdp:
> -	alloc_frag->offset -= buflen;
> -	put_page(alloc_frag->page);
> +	this_cpu_inc(tun->pcpu_stats->rx_dropped);

This fixes a bug in the previous patch, which dropped it. OK :)

> out:
> 	rcu_read_unlock();
> 	local_bh_enable();
> --
> 2.17.1
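The shape the thread converges on is the kernel's conventional goto-label error handling: a drop path that does its accounting and falls through to a shared exit label. A hedged sketch of that layout (the demo_* names and the XDP check are stand-ins, not the tun code itself):

#include <linux/bpf.h>
#include <linux/err.h>
#include <linux/skbuff.h>

int demo_run_xdp(struct demo_dev *d, void *buf);	/* hypothetical */

static struct sk_buff *demo_build_skb(struct demo_dev *d, void *buf,
				      int buflen, int pad, int len)
{
	struct sk_buff *skb;

	local_bh_disable();
	rcu_read_lock();

	if (demo_run_xdp(d, buf) != XDP_PASS)
		goto err_xdp;		/* XDP consumed or dropped it */

	skb = build_skb(buf, buflen);
	if (!skb) {
		skb = ERR_PTR(-ENOMEM);
		goto out;		/* failure, but not a "drop" */
	}
	skb_reserve(skb, pad);
	skb_put(skb, len);
	goto out;

err_xdp:
	this_cpu_inc(d->pcpu_stats->rx_dropped);
	skb = NULL;
out:
	rcu_read_unlock();
	local_bh_enable();
	return skb;
}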
2018 Sep 06
0
[PATCH net-next 04/11] tuntap: simplify error handling in tun_build_skb()
...b, pad - delta);
 	skb_put(skb, len);
-	get_page(alloc_frag->page);
-	alloc_frag->offset += buflen;
 	return skb;
-err_redirect:
-	put_page(alloc_frag->page);
 err_xdp:
+	alloc_frag->offset -= buflen;
+	put_page(alloc_frag->page);
+out:
 	rcu_read_unlock();
 	local_bh_enable();
-	this_cpu_inc(tun->pcpu_stats->rx_dropped);
-	return NULL;
+	return skb;
 }

 /* Get packet from user space buffer */
--
2.17.1
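The get_page()/put_page() calls and offset arithmetic this hunk moves around are the two halves of the page-frag allocation pattern; roughly (the demo_* helpers are hypothetical, skb_page_frag_refill() is the real API):

#include <linux/mm.h>
#include <linux/skbuff.h>

static void *demo_frag_get(struct page_frag *alloc_frag, size_t buflen)
{
	void *buf;

	if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL)))
		return NULL;

	buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
	get_page(alloc_frag->page);	/* the buffer now holds a reference */
	alloc_frag->offset += buflen;
	return buf;
}

/* Roll back an allocation that was never handed off to an skb. */
static void demo_frag_put(struct page_frag *alloc_frag, size_t buflen)
{
	alloc_frag->offset -= buflen;
	put_page(alloc_frag->page);
}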
2018 Sep 06
0
[PATCH net-next 06/11] tuntap: split out XDP logic
...kb = build_skb(buf, buflen);
 	if (!skb) {
+		put_page(alloc_frag->page);
 		skb = ERR_PTR(-ENOMEM);
 		goto out;
 	}

-	skb_reserve(skb, pad - delta);
+	skb_reserve(skb, pad);
 	skb_put(skb, len);

 	return skb;

 err_xdp:
-	alloc_frag->offset -= buflen;
-	put_page(alloc_frag->page);
+	this_cpu_inc(tun->pcpu_stats->rx_dropped);
 out:
 	rcu_read_unlock();
 	local_bh_enable();
--
2.17.1
2018 Sep 07
0
[PATCH net-next 04/11] tuntap: simplify error handling in tun_build_skb()
...e(alloc_frag->page);
>> +out:
> Out here isn't an error at all, is it? You should not mix return and
> error handling IMHO.

If you mean the name, I can rename the label to "drop".

>> 	rcu_read_unlock();
>> 	local_bh_enable();
>> -	this_cpu_inc(tun->pcpu_stats->rx_dropped);
> Doesn't this break rx_dropped accounting?

Let me fix this. Thanks

>> -	return NULL;
>> +	return skb;
>> }
>>
>> /* Get packet from user space buffer */
>> --
>> 2.17.1
2018 Sep 06
0
[PATCH net-next 09/11] tuntap: accept an array of XDP buffs through sendmsg()
...+	}
+
+build:
+	skb = build_skb(xdp->data_hard_start, buflen);
+	if (!skb) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	skb_put(skb, xdp->data_end - xdp->data);
+
+	if (virtio_net_hdr_to_skb(skb, gso, tun_is_little_endian(tun))) {
+		this_cpu_inc(tun->pcpu_stats->rx_frame_errors);
+		kfree_skb(skb);
+		err = -EINVAL;
+		goto out;
+	}
+
+	skb->protocol = eth_type_trans(skb, tun->dev);
+	skb_reset_network_header(skb);
+	skb_probe_transport_header(skb, 0);
+
+	if (skb_xdp) {
+		err = do_xdp_generic(xdp_prog, skb);
+		if (err != XDP...
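The skb_reserve()/skb_put() arithmetic above follows directly from struct xdp_buff's geometry; a reduced sketch of the zero-copy conversion, assuming the frame was allocated with enough tailroom for build_skb() to own it (struct xdp_buff lives in linux/filter.h on kernels of this era):

#include <linux/filter.h>
#include <linux/skbuff.h>

static struct sk_buff *demo_xdp_to_skb(struct xdp_buff *xdp, int buflen)
{
	struct sk_buff *skb = build_skb(xdp->data_hard_start, buflen);

	if (!skb)
		return NULL;
	/* Headroom: bytes the XDP program left in front of the payload. */
	skb_reserve(skb, xdp->data - xdp->data_hard_start);
	/* Length: the payload window after any XDP head/tail adjustments. */
	skb_put(skb, xdp->data_end - xdp->data);
	return skb;
}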
2018 Sep 07
0
[PATCH net-next 06/11] tuntap: split out XDP logic
...;
>> 	}
>>
>> -	skb_reserve(skb, pad - delta);
>> +	skb_reserve(skb, pad);
>> 	skb_put(skb, len);
>>
>> 	return skb;
>>
>> err_xdp:
>> -	alloc_frag->offset -= buflen;
>> -	put_page(alloc_frag->page);
>> +	this_cpu_inc(tun->pcpu_stats->rx_dropped);
>
> This fixes a bug in the previous patch, which dropped it. OK :)

Yes, but let me move this to the buggy patch. Thanks

>> out:
>> 	rcu_read_unlock();
>> 	local_bh_enable();
>> --
>> 2.17.1
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by:
1) Doing the userspace copy inside vhost_net
2) Building an XDP buff
3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg().
4) Underlying sockets can use the XDP buffs directly when XDP is enabled, or build an skb based on the XDP
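Schematically, the batching discipline the cover letter describes looks like the following (a sketch only: the demo_* names are invented, and the real series threads the buffer array through sendmsg()'s msg_control rather than a callback):

#define DEMO_BATCH 64	/* mirrors VHOST_NET_BATCH */

struct demo_batch {
	void *bufs[DEMO_BATCH];
	int n;
};

/* Queue one prepared XDP buff; once the batch is full, hand the whole
 * array to a single submit call instead of one call per packet. */
static int demo_enqueue(struct demo_batch *b, void *xdp_buf,
			int (*submit)(void **bufs, int n))
{
	b->bufs[b->n++] = xdp_buf;
	if (b->n < DEMO_BATCH)
		return 0;
	b->n = 0;
	return submit(b->bufs, DEMO_BATCH);
}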
2018 Sep 06
1
[PATCH net-next 09/11] tuntap: accept an array of XDP buffs through sendmsg()
..._start, buflen);
> +	if (!skb) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +
> +	skb_reserve(skb, xdp->data - xdp->data_hard_start);
> +	skb_put(skb, xdp->data_end - xdp->data);
> +
> +	if (virtio_net_hdr_to_skb(skb, gso, tun_is_little_endian(tun))) {
> +		this_cpu_inc(tun->pcpu_stats->rx_frame_errors);
> +		kfree_skb(skb);
> +		err = -EINVAL;
> +		goto out;
> +	}
> +
> +	skb->protocol = eth_type_trans(skb, tun->dev);
> +	skb_reset_network_header(skb);
> +	skb_probe_transport_header(skb, 0);
> +
> +	if (skb_xdp) {
> +...
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by:
1) Doing the userspace copy inside vhost_net
2) Building an XDP buff
3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg().
4) Underlying sockets can use the XDP buffs directly when XDP is enabled, or build an skb based on the XDP
2017 Dec 19
5
[RFC PATCH] virtio_net: Extend virtio to use VF datapath when available
...CESS || rc == NET_XMIT_CN)) {
+		struct virtnet_vf_pcpu_stats *pcpu_stats
+			= this_cpu_ptr(vi->vf_stats);
+
+		u64_stats_update_begin(&pcpu_stats->syncp);
+		pcpu_stats->tx_packets++;
+		pcpu_stats->tx_bytes += len;
+		u64_stats_update_end(&pcpu_stats->syncp);
+	} else {
+		this_cpu_inc(vi->vf_stats->tx_dropped);
+	}
+
+	return rc;
+}
+
 static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	int qnum = skb_get_queue_mapping(skb);
 	struct send_queue *sq = &vi->sq[qnum];
+	struct net_device *vf_netd...
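The transmit path above mixes two per-CPU idioms: u64_stats_update_begin()/u64_stats_update_end() guard the multi-field 64-bit counters against torn reads on 32-bit machines, while the lone dropped counter gets by with this_cpu_inc(). A reduced sketch (the demo_* struct is hypothetical; the u64_stats_* and this_cpu_* APIs are real):

#include <linux/percpu.h>
#include <linux/u64_stats_sync.h>

struct demo_vf_pcpu_stats {
	u64			tx_packets;
	u64			tx_bytes;
	struct u64_stats_sync	syncp;
	u64			tx_dropped;	/* updated via this_cpu_inc() */
};

static void demo_count_tx(struct demo_vf_pcpu_stats __percpu *stats,
			  unsigned int len, bool sent)
{
	if (sent) {
		struct demo_vf_pcpu_stats *s = this_cpu_ptr(stats);

		u64_stats_update_begin(&s->syncp);
		s->tx_packets++;
		s->tx_bytes += len;
		u64_stats_update_end(&s->syncp);
	} else {
		this_cpu_inc(stats->tx_dropped);
	}
}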
2018 Jan 12
0
[RFC PATCH net-next v2 2/2] virtio_net: Extend virtio to use VF datapath when available
...CESS || rc == NET_XMIT_CN)) {
+		struct virtnet_vf_pcpu_stats *pcpu_stats
+			= this_cpu_ptr(vi->vf_stats);
+
+		u64_stats_update_begin(&pcpu_stats->syncp);
+		pcpu_stats->tx_packets++;
+		pcpu_stats->tx_bytes += len;
+		u64_stats_update_end(&pcpu_stats->syncp);
+	} else {
+		this_cpu_inc(vi->vf_stats->tx_dropped);
+	}
+
+	return rc;
+}
+
 static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	int qnum = skb_get_queue_mapping(skb);
 	struct send_queue *sq = &vi->sq[qnum];
+	struct net_device *vf_netd...