search for: netif_tx_wake_queu

Displaying 18 results from an estimated 18 matches for "netif_tx_wake_queu".

Did you mean: netif_tx_wake_queue
2021 May 26
6
[PATCH v3 0/4] virtio net: spurious interrupt related fixes
With the implementation of napi-tx in the virtio driver, we clean tx descriptors from the rx napi handler, for the purpose of reducing tx complete interrupts. But this introduces a race where the tx complete interrupt has been raised, but the handler finds there is no work to do because we have done the work in the previous rx interrupt handler. A similar issue exists with polling from start_xmit, it is
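
A minimal userspace sketch of the ordering described above, with simplified stand-ins for the driver's structures and for free_old_xmit_skbs() (all names here are assumptions, not the driver code); it only illustrates why the tx complete interrupt can find nothing left to reclaim once the rx napi handler has already cleaned the queue.

    #include <stdio.h>

    #define RING_SIZE 256

    struct send_queue {
        int num_free;   /* free descriptors in the tx ring */
        int pending;    /* completed but not yet reclaimed skbs */
    };

    /* stand-in for free_old_xmit_skbs(): reclaim completed descriptors */
    static int clean_tx(struct send_queue *sq)
    {
        int cleaned = sq->pending;
        sq->num_free += cleaned;
        sq->pending = 0;
        return cleaned;
    }

    int main(void)
    {
        struct send_queue sq = { .num_free = RING_SIZE - 8, .pending = 8 };

        /* the device raises a tx-complete interrupt for the 8 packets ... */

        /* ... but the rx napi handler runs first and reclaims them */
        printf("rx napi cleaned %d skbs\n", clean_tx(&sq));

        /* the tx interrupt handler then runs and finds no work, which is
         * exactly the "spurious interrupt" case these fixes address */
        printf("tx interrupt cleaned %d skbs\n", clean_tx(&sq));
        return 0;
    }
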
2018 Sep 13
5
[PATCH net-next V2] virtio_net: ethtool tx napi configuration
...virtqueue_napi_complete(napi, sq->vq, 0); - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) + /* Check napi.weight to avoid tx stall since it could be set + * to zero by ethtool after skb_xmit_done(). + */ + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS || !sq->napi.weight) netif_tx_wake_queue(txq); return 0; @@ -2181,6 +2186,61 @@ static int virtnet_get_link_ksettings(struct net_device *dev, return 0; } +static int virtnet_set_coalesce(struct net_device *dev, + struct ethtool_coalesce *ec) +{ + struct ethtool_coalesce ec_default = { + .cmd = ETHTOOL_SCOALESCE, + .rx_max_c...
2023 Apr 16
4
[PATCH net] virtio-net: reject small vring sizes
...because it may result in attempting to transmit a packet with more fragments than there are descriptors in the ring. Furthermore, it leads to an immediate bug: The condition: (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) in virtnet_poll_cleantx and virtnet_poll_tx always evaluates to false, so netif_tx_wake_queue is not called, leading to TX timeouts. Signed-off-by: Alvaro Karsz <alvaro.karsz at solid-run.com> --- drivers/net/virtio_net.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 2396c28c012..59676252c5c 1...
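
To see why the wake condition can never hold on a small ring, here is a tiny standalone check; MAX_SKB_FRAGS is taken as 17 (its usual value on 4K-page kernels) and the vring size of 4 is just an example, both assumptions for illustration only.

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_SKB_FRAGS 17      /* typical value; an assumption here */

    int main(void)
    {
        int ring_size = 4;        /* a vring smaller than MAX_SKB_FRAGS + 2 */

        for (int num_free = 0; num_free <= ring_size; num_free++) {
            bool wake = num_free >= 2 + MAX_SKB_FRAGS;  /* the driver's check */
            printf("num_free=%d wake=%s\n", num_free, wake ? "yes" : "no");
        }
        /* num_free can never exceed ring_size (4), so the threshold of 19 is
         * unreachable and netif_tx_wake_queue() is never called. */
        return 0;
    }
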
2018 Sep 13
0
[PATCH net-next V2] virtio_net: ethtool tx napi configuration
...;vq->num_free >= 2 + MAX_SKB_FRAGS) > + /* Check napi.weight to avoid tx stall since it could be set > + * to zero by ethtool after skb_xmit_done(). > + */ > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS || !sq->napi.weight) > netif_tx_wake_queue(txq); I see. This assumes that the napi handler will always be called on conversion from napi to no-napi mode. That is safe to assume because if it isn't called (and will not call netif_tx_wake_queue) that implies that napi was not scheduled, and thus the tx interrupt was not suppressed and...
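
The decision being discussed, restated as a standalone predicate (names follow the diff above; this is only a sketch of the logic, not the driver code):

    #include <stdbool.h>

    #define MAX_SKB_FRAGS 17   /* typical value; an assumption here */

    /* Wake the tx queue either when a full worst-case packet fits again, or
     * unconditionally when napi has been turned off (weight == 0), since in
     * that mode there is no later napi poll left to do the waking. */
    static bool should_wake_txq(int num_free, int napi_weight)
    {
        return num_free >= 2 + MAX_SKB_FRAGS || napi_weight == 0;
    }
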
2023 Apr 30
1
[RFC PATCH net 2/3] virtio-net: allow usage of vrings smaller than MAX_SKB_FRAGS + 2
...828,7 +1844,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq) free_old_xmit_skbs(sq, true); } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) + if (sq->vq->num_free >= vi->single_pkt_max_descs) netif_tx_wake_queue(txq); __netif_tx_unlock(txq); @@ -1919,7 +1935,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) virtqueue_disable_cb(sq->vq); free_old_xmit_skbs(sq, true); - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) + if (sq->vq->num_free >= vi->single_pkt_...
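
A hypothetical sketch of how such a per-device threshold might be derived. The field name single_pkt_max_descs comes from the diff, but the derivation below is purely an assumption (the RFC's actual logic may differ), based on the idea that packets are linearized when the ring cannot hold MAX_SKB_FRAGS + 2 descriptors:

    #define MAX_SKB_FRAGS 17   /* typical value; an assumption here */

    /* If the vring is too small for a worst-case fragmented skb, assume the
     * driver linearizes packets, so a single packet needs far fewer
     * descriptors; otherwise keep the historical 2 + MAX_SKB_FRAGS bound. */
    static unsigned int single_pkt_max_descs(unsigned int vring_size)
    {
        if (vring_size < MAX_SKB_FRAGS + 2)
            return 2;          /* virtio-net header + one linearized buffer */
        return MAX_SKB_FRAGS + 2;
    }
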
2023 Apr 30
1
[RFC PATCH net 2/3] virtio-net: allow usage of vrings smaller than MAX_SKB_FRAGS + 2
...irtnet_poll_cleantx(struct receive_queue *rq) > free_old_xmit_skbs(sq, true); > } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > + if (sq->vq->num_free >= vi->single_pkt_max_descs) > netif_tx_wake_queue(txq); > > __netif_tx_unlock(txq); > @@ -1919,7 +1935,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) > virtqueue_disable_cb(sq->vq); > free_old_xmit_skbs(sq, true); > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > + if (sq->...
2017 Apr 20
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...dev_get_tx_queue(vi->dev, vq2txq(sq->vq)); > + > + if (__netif_tx_trylock(txq)) { > + free_old_xmit_skbs(sq); > + __netif_tx_unlock(txq); > + } > + > + virtqueue_napi_complete(napi, sq->vq, 0); > + > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > + netif_tx_wake_queue(txq); > + > + return 0; > +} > + > static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) > { > struct virtio_net_hdr_mrg_rxbuf *hdr; > @@ -1130,9 +1172,11 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) > int err; >...
2017 Apr 18
8
[PATCH net-next v2 0/5] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Changes: v1 -> v2: - disable by default - disable unless affinity_hint_set because cache misses add up to a third higher cycle cost, e.g., in TCP_RR tests. This is not limited to the patch that enables tx completion cleaning in rx napi. - use trylock to
2023 Apr 30
5
[RFC PATCH net 0/3] virtio-net: allow usage of small vrings
At the moment, if a virtio network device uses vrings with fewer than MAX_SKB_FRAGS + 2 entries, the device won't be functional. The following condition vq->num_free >= 2 + MAX_SKB_FRAGS will always evaluate to false, leading to TX timeouts. This patchset attempts to fix this bug, and to allow small rings down to 4 entries. The first patch introduces a new mechanism in virtio core -
2017 Apr 24
0
[PATCH net-next v3 2/5] virtio-net: transmit napi
...gt;priv; + struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq)); + + __netif_tx_lock(txq, raw_smp_processor_id()); + free_old_xmit_skbs(sq); + __netif_tx_unlock(txq); + + virtqueue_napi_complete(napi, sq->vq, 0); + + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) + netif_tx_wake_queue(txq); + + return 0; +} + static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) { struct virtio_net_hdr_mrg_rxbuf *hdr; @@ -1130,6 +1174,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) int err; struct netdev_queue *txq = netdev_get_tx_queue(dev, qnu...
2017 Apr 18
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
...gt;vdev->priv; + struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq)); + + if (__netif_tx_trylock(txq)) { + free_old_xmit_skbs(sq); + __netif_tx_unlock(txq); + } + + virtqueue_napi_complete(napi, sq->vq, 0); + + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) + netif_tx_wake_queue(txq); + + return 0; +} + static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) { struct virtio_net_hdr_mrg_rxbuf *hdr; @@ -1130,9 +1172,11 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) int err; struct netdev_queue *txq = netdev_get_tx_queue(dev, qn...
2017 Apr 24
8
[PATCH net-next v3 0/5] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Changes: v2 -> v3: - convert __netif_tx_trylock to __netif_tx_lock on tx napi poll to ensure that the handler always cleans, to avoid deadlock - unconditionally clean in start_xmit to avoid adding an unnecessary "if (use_napi)" branch - remove
2017 Jan 05
3
[PATCH net-next] net: make ndo_get_stats64 a void function
...l_link_stats64 *); static int fjes_change_mtu(struct net_device *, int); static int fjes_vlan_rx_add_vid(struct net_device *, __be16 proto, u16); static int fjes_vlan_rx_kill_vid(struct net_device *, __be16 proto, u16); @@ -782,14 +781,12 @@ static void fjes_tx_retry(struct net_device *netdev) netif_tx_wake_queue(queue); } -static struct rtnl_link_stats64 * +static void fjes_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats) { struct fjes_adapter *adapter = netdev_priv(netdev); memcpy(stats, &adapter->stats64, sizeof(struct rtnl_link_stats64)); - - return stats; }...
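
The shape of the API change being applied to fjes (and the other drivers) in this patch, sketched from the hunk above; the callback stops returning the stats pointer because callers already pass in the storage they want filled.

    struct rtnl_link_stats64;
    struct net_device;

    /* before: the op returned the (same) stats pointer it was given */
    struct rtnl_link_stats64 *(*old_ndo_get_stats64)(struct net_device *dev,
                                                      struct rtnl_link_stats64 *storage);

    /* after: the return value carried no information, so it becomes void */
    void (*new_ndo_get_stats64)(struct net_device *dev,
                                struct rtnl_link_stats64 *storage);
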