2014 Feb 12
2
[PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done
...efcount, this may
cause no thread to call vhost_poll_queue, even though at least one is
needed, and this breaks networking.
We could reproduce it by running 8 netperf threads in the guest to xmit
TCP to its host.
I think that if atomic_read is used to decide whether or not to call
vhost_poll_queue, at least a spin_lock is needed.
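The failure being described is a check-then-act race on a shared counter. Below is a minimal userspace sketch of that pattern; the names refcount, poll_queue and dma_done are illustrative stand-ins, not the real vhost structures or API:

/* Build with: cc -std=c11 race.c */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int refcount = 2;        /* two in-flight zerocopy buffers */

static void poll_queue(void)           /* stands in for vhost_poll_queue() */
{
        puts("queue polled");
}

static void dma_done(void)
{
        /* Check-then-act race: with refcount == 2, two threads can both
         * load 2 here before either decrements, so neither one polls the
         * queue and transmission stalls -- the "network break" above. */
        if (atomic_load(&refcount) == 1)
                poll_queue();
        atomic_fetch_sub(&refcount, 1);
}

int main(void)
{
        dma_done();
        dma_done();                    /* sequentially this works; the bug
                                          needs two concurrent completers */
        return 0;
}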
2014 Feb 12
0
[PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done
...ast
> one is needed, and this breaks networking.
> We could reproduce it by running 8 netperf threads in the guest to xmit
> TCP to its host.
Thanks a lot for the report, will send the patch soon.
>
> I think that if atomic_read is used to decide whether or not to call
> vhost_poll_queue, at least a spin_lock is needed.
No, nothing so drastic.
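"Nothing so drastic" suggests the check and the decrement can be folded into a single atomic operation, so no lock is needed. A sketch of that idea, using the same illustrative names as above (not the actual patch that was posted):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int refcount = 2;        /* two in-flight zerocopy buffers */

static void poll_queue(void)           /* stands in for vhost_poll_queue() */
{
        puts("queue polled");
}

static void dma_done(void)
{
        /* atomic_fetch_sub() returns the value held *before* the
         * decrement, so the read and the decrement are one atomic step:
         * exactly one completer observes 1, and no lock is required. */
        if (atomic_fetch_sub(&refcount, 1) == 1)
                poll_queue();          /* runs in exactly one thread */
}

int main(void)
{
        dma_done();                    /* sees 2: not the last one */
        dma_done();                    /* sees 1: polls the queue */
        return 0;
}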
2014 Feb 12
0
[PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done
...ere isn't any thread calling vhost_poll_queue, even though at least one
> is needed, and this breaks networking.
> We could reproduce it by running 8 netperf threads in the guest to xmit
> TCP to its host.
>
> I think that if atomic_read is used to decide whether or not to call
> vhost_poll_queue, at least a spin_lock is needed.
Then you need another ref count to protect that spinlock? Care to send
patches?
Thanks
2013 Aug 30
12
[PATCH V2 0/6] vhost code cleanup and minor enhancement
Hi all:
This series tries to unify and simplify the vhost code, especially the
zerocopy path. With this series, a 5%-10% improvement in per-CPU throughput
was seen during netperf guest sending tests.
Please review.
Changes from V1:
- Fix the zerocopy enabling check by changing it from upend_idx != done_idx
to (upend_idx + 1) % UIO_MAXIOV == done_idx (sketched below).
- Switch to use put_user() in
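The changed condition reads like a standard circular-buffer "full" test over UIO_MAXIOV slots. A standalone sketch with plain variables in place of the real struct vhost_net fields; the semantics assumed here (upend_idx is where the next zerocopy buffer would be queued, done_idx is the oldest completion not yet signalled) are an interpretation, not taken from the patch:

#include <stdbool.h>
#include <stdio.h>

#define UIO_MAXIOV 1024                /* ring size, as in the kernel */

static unsigned upend_idx, done_idx;   /* both wrap modulo UIO_MAXIOV */

/* V1 check: "some buffer is in flight" -- true almost always under
 * load, so it cannot tell a nearly empty ring from a full one. */
static bool ring_busy_v1(void)
{
        return upend_idx != done_idx;
}

/* V2 check: "no free slot left" -- the next enqueue would catch up
 * with done_idx, so zerocopy must be disabled for this packet. */
static bool ring_full_v2(void)
{
        return (upend_idx + 1) % UIO_MAXIOV == done_idx;
}

int main(void)
{
        upend_idx = 1023;              /* next enqueue wraps to slot 0 */
        done_idx  = 0;
        printf("v1 busy: %d, v2 full: %d\n", ring_busy_v1(), ring_full_v2());
        return 0;
}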