Displaying 12 results from an estimated 12 matches for "exceessive".
2019 Jul 18
2
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
On 2019/7/18 9:04, Michael S. Tsirkin wrote:
> On Thu, Jul 18, 2019 at 12:55:50PM +0000, jiang wrote:
>> This change makes ring buffer reclaim threshold num_free configurable
>> for better performance, while it's hard coded as 1/2 * queue now.
>> According to our test with qemu + dpdk, packet dropping happens when
>> the guest is not able to provide free buffer
2019 Jul 18
4
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...th demonstrated numbers or
> > something smarter.
> >
> > Thanks
>
> I do however think that we have a problem right now: try_fill_recv can
> take up a long time during which net stack does not run at all. Imagine
> a 1K queue - we are talking 512 packets. That's excessive. napi poll
> weight solves a similar problem, so it might make sense to cap this at
> napi_poll_weight.
>
> Which will allow tweaking it through a module parameter as a
> side effect :) Maybe just do NAPI_POLL_WEIGHT.
Or maybe NAPI_POLL_WEIGHT/2 like we do at half the queue ;). Pl...
2006 Oct 13
1
HFSC question??
1. HFSC has 4 curves, such as sc, rc, ls, and ul, and
1.1 In a leaf class you can specify rc for guaranteed service (bandwidth and delay),
and if you want fair sharing of excess service, you must specify the ls and ul curves too
(the ls curve with parameter m2 specifies at least the sharing bandwidth that class will receive, and
the ul curve means the maximum bandwidth that class will receive)
so I'm in doubt... about if I
2019 Jul 18
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...od. You need either a good value with demonstrated numbers or
> something smarter.
>
> Thanks
I do however think that we have a problem right now: try_fill_recv can
take up a long time during which net stack does not run at all. Imagine
a 1K queue - we are talking 512 packets. That's excessive. napi poll
weight solves a similar problem, so it might make sense to cap this at
napi_poll_weight.
Which will allow tweaking it through a module parameter as a
side effect :) Maybe just do NAPI_POLL_WEIGHT.
Need to be careful though: queues can also be small and I don't think we
want to exc...
2019 Jul 19
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...d numbers or
>>> something smarter.
>>>
>>> Thanks
>> I do however think that we have a problem right now: try_fill_recv can
>> take up a long time during which net stack does not run at all. Imagine
>> a 1K queue - we are talking 512 packets. That's excessive.
Yes, we will starve a fast host in this case.
>> napi poll
>> weight solves a similar problem, so it might make sense to cap this at
>> napi_poll_weight.
>>
>> Which will allow tweaking it through a module parameter as a
>> side effect :) Maybe just do NA...
2019 Jul 23
2
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...anks
>>>>>> I do however think that we have a problem right now: try_fill_recv can
>>>>>> take up a long time during which net stack does not run at all.
>>>>>> Imagine
>>>>>> a 1K queue - we are talking 512 packets. That's excessive.
>>>>
>>>> Yes, we will starve a fast host in this case.
>>>>
>>>>
>>>>>> napi poll
>>>>>> weight solves a similar problem, so it might make sense to cap this at
>>>>>> napi_poll_weight.
>&g...
2019 Jul 19
1
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...gt;>
>>>>> Thanks
>>>> I do however think that we have a problem right now: try_fill_recv can
>>>> take up a long time during which net stack does not run at all.
>>>> Imagine
>>>> a 1K queue - we are talking 512 packets. That's excessive.
>>
>>
>> Yes, we will starve a fast host in this case.
>>
>>
>>>> napi poll
>>>> weight solves a similar problem, so it might make sense to cap this at
>>>> napi_poll_weight.
>>>>
>>>> Which will allow twe...
2019 Jul 19
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...>> Thanks
> >>>> I do however think that we have a problem right now: try_fill_recv can
> >>>> take up a long time during which net stack does not run at all.
> >>>> Imagine
> >>>> a 1K queue - we are talking 512 packets. That's excessive.
> >>
> >>
> >> Yes, we will starve a fast host in this case.
> >>
> >>
> >>>> napi poll
> >>>> weight solves a similar problem, so it might make sense to cap this at
> >>>> napi_poll_weight.
> >>&...
2019 Aug 13
0
[PATCH] virtio-net: parameterize min ring num_free for virtio receive
...t;>>> I do however think that we have a problem right now: try_fill_recv can
> >>>>>> take up a long time during which net stack does not run at all.
> >>>>>> Imagine
> >>>>>> a 1K queue - we are talking 512 packets. That's excessive.
> >>>>
> >>>> Yes, we will starve a fast host in this case.
> >>>>
> >>>>
> >>>>>> napi poll
> >>>>>> weight solves a similar problem, so it might make sense to cap this at
> >>>&g...