Displaying 20 results from an estimated 131 matches for "napi_poll_weight".
2019 Jul 18
4
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...ver think that we have a problem right now: try_fill_recv can
> take up a long time during which net stack does not run at all. Imagine
> a 1K queue - we are talking 512 packets. That's excessive. napi poll
> weight solves a similar problem, so it might make sense to cap this at
> napi_poll_weight.
>
> Which will allow tweaking it through a module parameter as a
> side effect :) Maybe just do NAPI_POLL_WEIGHT.
Or maybe NAPI_POLL_WEIGHT/2 like we do at half the queue ;). Please
experiment, measure performance and let the list know
> Need to be careful though: queues can also be...
2019 Jul 18
2
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
On 2019/7/18 9:04, Michael S. Tsirkin wrote:
> On Thu, Jul 18, 2019 at 12:55:50PM +0000, ? jiang wrote:
>> This change makes ring buffer reclaim threshold num_free configurable
>> for better performance, while it's hard coded as 1/2 * queue now.
>> According to our test with qemu + dpdk, packet dropping happens when
>> the guest is not able to provide free buffer
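For context, a minimal sketch of what "configurable" could look like as a module parameter; the parameter name and description are illustrative, not taken from the posted patch (module_param() and MODULE_PARM_DESC() come from <linux/moduleparam.h>):

/* Illustrative only: expose the refill threshold instead of the
 * hard-coded queue_size / 2.  0 keeps the old behaviour. */
static unsigned int rx_refill_threshold;
module_param(rx_refill_threshold, uint, 0644);
MODULE_PARM_DESC(rx_refill_threshold,
		 "Refill RX ring when free buffers drop below this (0 = queue_size / 2)");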
2017 Jan 09
1
[PATCH V4 net-next 3/3] tun: rx batching
...> > +}
> > > +
> > > +static int tun_set_coalesce(struct net_device *dev,
> > > + struct ethtool_coalesce *ec)
> > > +{
> > > + struct tun_struct *tun = netdev_priv(dev);
> > > +
> > > + if (ec->rx_max_coalesced_frames > NAPI_POLL_WEIGHT)
> > > + return -EINVAL;
> > So what should userspace do? Keep trying until it succeeds?
> > I think it's better to just use NAPI_POLL_WEIGHT instead and DTRT here.
> >
>
> Well, looking at how set_coalesce is implemented in other drivers, -EINVAL
> is usu...
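For reference, the "DTRT" alternative being argued for here would clamp instead of rejecting. A minimal sketch of that option, assuming a hypothetical rx_batched field on struct tun_struct (the posted patch may name things differently); NAPI_POLL_WEIGHT is from <linux/netdevice.h>, min_t() from <linux/minmax.h>:

static int tun_set_coalesce(struct net_device *dev,
			    struct ethtool_coalesce *ec)
{
	struct tun_struct *tun = netdev_priv(dev);

	/* Sketch only: cap at the NAPI poll weight instead of returning
	 * -EINVAL, so userspace requests always succeed. */
	tun->rx_batched = min_t(u32, ec->rx_max_coalesced_frames,
				NAPI_POLL_WEIGHT);
	return 0;
}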
2019 Jul 19
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...p a long time during which net stack does not run at all. Imagine
>> a 1K queue - we are talking 512 packets. That's excessive.
Yes, we will starve a fast host in this case.
>> napi poll
>> weight solves a similar problem, so it might make sense to cap this at
>> napi_poll_weight.
>>
>> Which will allow tweaking it through a module parameter as a
>> side effect :) Maybe just do NAPI_POLL_WEIGHT.
> Or maybe NAPI_POLL_WEIGHT/2 like we do at half the queue ;). Please
> experiment, measure performance and let the list know
>
>> Need to be carefu...
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
...> rx-frames = 32 1.07 +17.5%
> rx-frames = 48 1.07 +17.5%
> rx-frames = 64 1.08 +18.6%
> rx-frames = 64 (no MSG_MORE) 0.91 +0%
>
> Users were allowed to change per-device batched packets through
> ethtool -C rx-frames. NAPI_POLL_WEIGHT was used as an upper limit
> to prevent bh from being disabled for too long.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>
> ---
> drivers/net/tun.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 70 insertions(+), 6 deletions(-)
>...
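As a usage note, the per-device limit described above is driven through the standard ethtool coalescing interface; the device name below is only an example:

	ethtool -C tap0 rx-frames 64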
2019 Jul 23
2
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...s. That's excessive.
>>>>
>>>> Yes, we will starve a fast host in this case.
>>>>
>>>>
>>>>>> napi poll
>>>>>> weight solves a similar problem, so it might make sense to cap this at
>>>>>> napi_poll_weight.
>>>>>>
>>>>>> Which will allow tweaking it through a module parameter as a
>>>>>> side effect :) Maybe just do NAPI_POLL_WEIGHT.
>>>>> Or maybe NAPI_POLL_WEIGHT/2 like we do at half the queue ;). Please
>>>>> exper...
2019 Jul 19
1
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...>>> a 1K queue - we are talking 512 packets. That's excessive.
>>
>>
>> Yes, we will starve a fast host in this case.
>>
>>
>>>> napi poll
>>>> weight solves a similar problem, so it might make sense to cap this at
>>>> napi_poll_weight.
>>>>
>>>> Which will allow tweaking it through a module parameter as a
>>>> side effect :) Maybe just do NAPI_POLL_WEIGHT.
>>> Or maybe NAPI_POLL_WEIGHT/2 like we do at half the queue ;). Please
>>> experiment, measure performance and let the l...
2019 Jul 19
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...2 packets. That's excessive.
> >>
> >>
> >> Yes, we will starve a fast host in this case.
> >>
> >>
> >>>> napi poll
> >>>> weight solves a similar problem, so it might make sense to cap this at
> >>>> napi_poll_weight.
> >>>>
> >>>> Which will allow tweaking it through a module parameter as a
> >>>> side effect :) Maybe just do NAPI_POLL_WEIGHT.
> >>> Or maybe NAPI_POLL_WEIGHT/2 like we do at half the queue ;). Please
> >>> experiment, measure...
2019 Aug 13
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...>>>
> >>>> Yes, we will starve a fast host in this case.
> >>>>
> >>>>
> >>>>>> napi poll
> >>>>>> weight solves a similar problem, so it might make sense to cap this at
> >>>>>> napi_poll_weight.
> >>>>>>
> >>>>>> Which will allow tweaking it through a module parameter as a
> >>>>>> side effect :) Maybe just do NAPI_POLL_WEIGHT.
> >>>>> Or maybe NAPI_POLL_WEIGHT/2 like we do at half the queue ;). Please
> &...
2019 Jul 18
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...> Thanks
I do however think that we have a problem right now: try_fill_recv can
take up a long time during which net stack does not run at all. Imagine
a 1K queue - we are talking 512 packets. That's excessive. napi poll
weight solves a similar problem, so it might make sense to cap this at
napi_poll_weight.
Which will allow tweaking it through a module parameter as a
side effect :) Maybe just do NAPI_POLL_WEIGHT.
Need to be careful though: queues can also be small and I don't think we
want to exceed queue size / 2, or maybe queue size - napi_poll_weight.
Definitely must not exceed the full queu...
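A minimal sketch of the cap being proposed here, using a hypothetical helper rather than the actual try_fill_recv() code (NAPI_POLL_WEIGHT is from <linux/netdevice.h>, min_t() from <linux/minmax.h>):

/* Bound one refill pass the same way the NAPI budget bounds one poll:
 * never post more than half the ring, and never more than the NAPI
 * poll weight (64 by default).  Illustrative only. */
static unsigned int rx_refill_budget(unsigned int queue_size)
{
	return min_t(unsigned int, queue_size / 2, NAPI_POLL_WEIGHT);
}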
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
...1.00 +9.8%
rx-frames = 16 1.01 +10.9%
rx-frames = 32 1.07 +17.5%
rx-frames = 48 1.07 +17.5%
rx-frames = 64 1.08 +18.6%
rx-frames = 64 (no MSG_MORE) 0.91 +0%
Changes from V4:
- stick to NAPI_POLL_WEIGHT for rx-frames if the user specifies a value
greater than it.
Changes from V3:
- use ethtool instead of module parameter to control the maximum
number of batched packets
- avoid overhead when MSG_MORE were not set and no packet queued
Changes from V2:
- remove uselss queue limitation check (and we don...
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...= 16 1.01 +10.9%
rx-frames = 32 1.07 +17.5%
rx-frames = 48 1.07 +17.5%
rx-frames = 64 1.08 +18.6%
rx-frames = 64 (no MSG_MORE) 0.91 +0%
Users were allowed to change per-device batched packets through
ethtool -C rx-frames. NAPI_POLL_WEIGHT was used as an upper limit
to prevent bh from being disabled for too long.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/net/tun.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 70 insertions(+), 6 deletions(-)
diff --git a/drivers/net/tun.c b...
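A hedged sketch of the batching logic the log describes: packets are held on a queue while the sender hints MSG_MORE and flushed once the hint stops or the batch reaches the ethtool rx-frames limit (itself capped at NAPI_POLL_WEIGHT). Names and structure below are illustrative, not a copy of the posted tun.c diff:

static void rx_batched_sketch(struct sk_buff_head *queue, struct sk_buff *skb,
			      bool more, u32 rx_batched)
{
	struct sk_buff_head process_queue;
	bool flush = false;

	__skb_queue_head_init(&process_queue);

	spin_lock(&queue->lock);
	if (!more || skb_queue_len(queue) >= rx_batched) {
		/* Take the whole pending batch off the shared queue. */
		skb_queue_splice_tail_init(queue, &process_queue);
		flush = true;
	} else {
		__skb_queue_tail(queue, skb);
	}
	spin_unlock(&queue->lock);

	if (flush) {
		struct sk_buff *nskb;

		local_bh_disable();
		while ((nskb = __skb_dequeue(&process_queue)))
			netif_receive_skb(nskb);
		netif_receive_skb(skb);	/* the packet that triggered the flush */
		local_bh_enable();
	}
}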
2023 Jul 27
2
[PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
...ck *extack)
> {
> struct virtnet_info *vi = netdev_priv(dev);
> - int ret, i, napi_weight;
> + int ret, queue_number, napi_weight;
> bool update_napi = false;
>
> /* Can't change NAPI weight if the link is up */
> napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> - if (napi_weight ^ vi->sq[0].napi.weight) {
> - if (dev->flags & IFF_UP)
> - return -EBUSY;
> - else
> - update_napi = true;
> + for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> + ret = virtnet_should_update_vq_weight...
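A hedged reconstruction of the helper the quoted diff factors out (the helper name appears in the diff; the body shown here is an illustrative guess, not the posted patch):

/* Per-queue version of the old sq[0] check: refuse to flip the NAPI
 * weight while the interface is up, otherwise flag that NAPI needs
 * updating.  Illustrative reconstruction only. */
static int virtnet_should_update_vq_weight(int dev_flags, int weight,
					   int vq_weight, bool *should_update)
{
	if (weight ^ vq_weight) {
		if (dev_flags & IFF_UP)
			return -EBUSY;
		*should_update = true;
	}
	return 0;
}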
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This was
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement for guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
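A minimal sketch of the MSG_MORE hint described in the cover letter, with a hypothetical "more packets pending" predicate standing in for vhost's real check:

/* Keep MSG_MORE set while more packets are queued so the backend (tap)
 * can keep batching; clear it on the last one to flush.  Sketch only. */
static int sendmsg_with_batch_hint(struct socket *sock, struct msghdr *msg,
				   size_t len, bool more_pending)
{
	if (more_pending)
		msg->msg_flags |= MSG_MORE;
	else
		msg->msg_flags &= ~MSG_MORE;

	return sock->ops->sendmsg(sock, msg, len);
}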