Displaying 13 results from an estimated 13 matches for "sock_has_rx_data".
2018 Sep 09
0
[PATCH net-next v8 3/7] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...> >
> > Factor out the generic busy polling logic so it can be
> > reused in the tx path in the next patch. With this patch,
> > qemu can set the busyloop_timeout differently for the rx queue.
> >
> > To avoid duplicated code, introduce the helper functions:
> > * sock_has_rx_data (renamed from sk_has_rx_data)
> > * vhost_net_busy_poll_try_queue
> >
> > Signed-off-by: Tonghao Zhang <xiangxia.m.yue at gmail.com>
> > ---
> > drivers/vhost/net.c | 111 +++++++++++++++++++++++++++++++++-------------------
> > 1 file changed, 71 inserti...
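For reference, a minimal sketch of the renamed helper mentioned above, assuming the rename simply widens the argument from struct sock * to struct socket *; this is a sketch of the idea, not necessarily the exact hunk in the patch:

#include <linux/net.h>
#include <linux/skbuff.h>
#include <net/sock.h>

/* Return non-zero when the sock has pending receive data. */
static int sock_has_rx_data(struct socket *sock)
{
	if (unlikely(!sock))
		return 0;

	/* tap/tun supply peek_len; prefer it when present */
	if (sock->ops->peek_len)
		return sock->ops->peek_len(sock);

	return !skb_queue_empty(&sock->sk->sk_receive_queue);
}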
2018 Sep 09
7
[PATCH net-next v9 0/6] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
This series improves the guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time; handle_rx does the same.
For more performance reports, see patches 4, 5 and 6.
Tonghao Zhang (6):
net: vhost: lock the vqs one by one
net: vhost: replace magic number of lock annotation
net: vhost: factor out
2018 Sep 25
6
[REBASE PATCH net-next v9 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
This series improves the guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time; handle_rx does the same.
For more performance reports, see patch 4.
Tonghao Zhang (4):
net: vhost: lock the vqs one by one
net: vhost: replace magic number of lock annotation
net: vhost: factor out busy
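As a rough illustration of the mechanism described in this cover letter (polling the sock receive queue from the tx side), the tx busy loop can watch the rx sock and queue rx work as soon as both data and avail buffers are present. The helper name below is hypothetical; only sock_has_rx_data(), vhost_vq_avail_empty() and vhost_poll_queue() follow the snippets in this listing:

/* Hypothetical helper: kick the rx handler from the tx busy loop. */
static void vhost_net_busy_poll_check_rx(struct vhost_net *net,
					 struct vhost_virtqueue *rvq,
					 struct socket *sock)
{
	/* data is waiting in the sock AND the guest posted rx buffers */
	if (sock_has_rx_data(sock) &&
	    !vhost_vq_avail_empty(&net->dev, rvq))
		vhost_poll_queue(&rvq->poll);	/* schedule handle_rx() */
}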
2018 Dec 11
2
[PATCH net 2/4] vhost_net: rework on the lock ordering for busy polling
...e we want to poll both the tx and rx virtqueues at the same time
> (vhost_net_busy_poll()).
>
> 	while (vhost_can_busy_poll(endtime)) {
> 		if (vhost_has_work(&net->dev)) {
> 			*busyloop_intr = true;
> 			break;
> 		}
>
> 		if ((sock_has_rx_data(sock) &&
> 		     !vhost_vq_avail_empty(&net->dev, rvq)) ||
> 		    !vhost_vq_avail_empty(&net->dev, tvq))
> 			break;
>
> 		cpu_relax();
>
> 	}
>
>
> And we disable kicks and notification for better performance....
2020 Jun 01
0
[PATCH net-next v8 7/7] net: vhost: make busyloop_intr more accurate
...st_can_busy_poll(endtime)) {
> > > -		if (vhost_has_work(&net->dev)) {
> > > -			*busyloop_intr = true;
> > > +		if (vhost_has_work(&net->dev))
> > > 			break;
> > > -		}
> > >
> > > 		if ((sock_has_rx_data(sock) &&
> > > 		     !vhost_vq_avail_empty(&net->dev, rvq)) ||
> > > @@ -513,6 +511,11 @@ static void vhost_net_busy_poll(struct vhost_net *net,
> > > 		!vhost_has_work_pending(&net->dev, VHOST_NET_VQ_RX))
> > ...
2018 Dec 11
2
[PATCH net 2/4] vhost_net: rework on the lock ordering for busy polling
On Mon, Dec 10, 2018 at 05:44:52PM +0800, Jason Wang wrote:
> When we try to do rx busy polling in the tx path in commit 441abde4cd84
> ("net: vhost: add rx busy polling in tx path"), we lock the rx vq mutex
> after the tx vq mutex is held. This may lead to a deadlock, so we try to
> lock the vqs one by one in commit 78139c94dc8c ("net: vhost: lock the vqs
> one by one"). With this
2018 Dec 11
0
[PATCH net 2/4] vhost_net: rework on the lock ordering for busy polling
...g the tx lock and vice versa?
Because we want to poll both the tx and rx virtqueues at the same time
(vhost_net_busy_poll()).
	while (vhost_can_busy_poll(endtime)) {
		if (vhost_has_work(&net->dev)) {
			*busyloop_intr = true;
			break;
		}

		if ((sock_has_rx_data(sock) &&
		     !vhost_vq_avail_empty(&net->dev, rvq)) ||
		    !vhost_vq_avail_empty(&net->dev, tvq))
			break;

		cpu_relax();
	}
And we disable kicks and notification for better performance.
>
> Or if we really wanted to force e...
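A hedged sketch of how the kick/notification disabling mentioned above could bracket the quoted loop. The helpers named here (vhost_disable_notify(), vhost_net_disable_vq(), busy_clock(), vhost_poll_queue() and friends) exist in drivers/vhost/net.c and drivers/vhost/vhost.c, but their exact placement in the merged code may differ:

/* Sketch only: busy poll with guest kicks and sock wakeups suppressed. */
static void vhost_net_busy_poll_sketch(struct vhost_net *net,
				       struct vhost_virtqueue *rvq,
				       struct vhost_virtqueue *tvq)
{
	unsigned long endtime;

	vhost_disable_notify(&net->dev, tvq);	/* suppress guest kicks on tx */
	vhost_net_disable_vq(net, rvq);		/* stop sock wakeups queuing rx work */

	preempt_disable();
	endtime = busy_clock() + tvq->busyloop_timeout;
	while (vhost_can_busy_poll(endtime)) {
		/* ... body as quoted above: stop on pending vhost work or
		 * when either virtqueue has something to do ... */
		cpu_relax();
	}
	preempt_enable();

	vhost_net_enable_vq(net, rvq);
	/* vhost_enable_notify() reports buffers that arrived while
	 * notification was off, so re-queue the tx handler in that case */
	if (unlikely(vhost_enable_notify(&net->dev, tvq)))
		vhost_poll_queue(&tvq->poll);
}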
2018 Dec 12
0
[PATCH net 2/4] vhost_net: rework on the lock ordering for busy polling
...ue at the same time
>> (vhost_net_busy_poll()).
>>
>> 	while (vhost_can_busy_poll(endtime)) {
>> 		if (vhost_has_work(&net->dev)) {
>> 			*busyloop_intr = true;
>> 			break;
>> 		}
>>
>> 		if ((sock_has_rx_data(sock) &&
>> 		     !vhost_vq_avail_empty(&net->dev, rvq)) ||
>> 		    !vhost_vq_avail_empty(&net->dev, tvq))
>> 			break;
>>
>> 		cpu_relax();
>>
>> 	}
>>
>>
>> And we disable kicks...
2019 Jul 17
17
[PATCH V3 00/15] Packed virtqueue support for vhost
Hi all:
This series implements packed virtqueues which were described
at [1]. In this version we try to address the performance regression
seen with V2. The root cause is that packed virtqueues need more
userspace memory accesses, which turn out to be very
expensive. Thanks to the help of 7f466032dc9e ("vhost: access vq
metadata through kernel virtual address"), such overhead could be
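The overhead the cover letter points at is the per-access cost of reading ring metadata from userspace memory on every poll. A rough, hypothetical contrast of the two access styles (these helper names are illustrative, not the code from 7f466032dc9e itself):

#include <linux/uaccess.h>
#include <linux/virtio_types.h>

/* Every metadata read goes through a guarded userspace access. */
static inline int avail_idx_via_uaccess(struct vhost_virtqueue *vq,
					__virtio16 *idx)
{
	return get_user(*idx, &vq->avail->idx);
}

/* With the avail index pre-translated to a kernel virtual address at
 * setup time, the same read becomes a plain load. */
static inline __virtio16 avail_idx_via_kva(const __virtio16 *avail_idx_kva)
{
	return READ_ONCE(*avail_idx_kva);
}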