2019 May 16
6
[PATCH net 0/4] Prevent vhost kthread from hogging CPU
Hi:
This series tries to prevent a guest-triggerable CPU hogging through
the vhost kthread. This is done by introducing and checking a weight
after each request. The patch has been tested with reproducers for
vsock and virtio-net. Only compile testing was done for vhost-scsi.
Please review.
This addresses CVE-2019-3900.
Jason Wang (4):
vhost: introduce vhost_exceeds_weight()
vhost_net: fix possible
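As a rough illustration of the weight mechanism this cover letter
describes, here is a minimal C sketch: after each request the handler
checks accumulated packet and byte counts against per-device limits and
requeues the work instead of looping on. The field names (weight,
byte_weight) and the vhost_poll_queue() requeue are assumptions modeled
on the description, not code quoted from the series.

static bool vhost_exceeds_weight(struct vhost_virtqueue *vq,
                                 int pkts, int total_len)
{
        struct vhost_dev *dev = vq->dev;

        /* Bound the work done per invocation so a guest cannot keep
         * the kthread busy forever. */
        if (unlikely(total_len >= dev->byte_weight) ||
            unlikely(pkts >= dev->weight)) {
                vhost_poll_queue(&vq->poll); /* resume later, yield now */
                return true;
        }
        return false;
}

Each handler (net TX/RX, vsock, scsi) would then break out of its
service loop as soon as this returns true.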
2019 Apr 25
2
[PATCH net] vhost_net: fix possible infinite loop
When the rx buffer is too small for a packet, we will discard the vq
descriptor and retry it for the next packet:
while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
&busyloop_intr))) {
...
/* On overrun, truncate and discard */
if (unlikely(headcount > UIO_MAXIOV)) {
iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
err = sock->ops->recvmsg(sock,
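A hedged sketch of the fix direction, assuming the
vhost_exceeds_weight() helper from the companion series: charge
discarded packets to the weight as well, so a stream of oversized
packets cannot keep the loop spinning. The recv_pkts/total_len
bookkeeping is illustrative.

        /* On overrun, truncate, discard, and charge the weight. */
        if (unlikely(headcount > UIO_MAXIOV)) {
                iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
                err = sock->ops->recvmsg(sock, &msg, 1,
                                         MSG_DONTWAIT | MSG_TRUNC);
                ++recv_pkts;
                total_len += sock_len;
                if (unlikely(vhost_exceeds_weight(vq, recv_pkts,
                                                  total_len)))
                        goto out; /* work was requeued; stop retrying */
                continue;
        }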
2019 Apr 26
2
[PATCH net] vhost_net: fix possible infinite loop
On 2019/4/26 1:52, Michael S. Tsirkin wrote:
> On Thu, Apr 25, 2019 at 03:33:19AM -0400, Jason Wang wrote:
>> When the rx buffer is too small for a packet, we will discard the vq
>> descriptor and retry it for the next packet:
>>
>> while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
>> &busyloop_intr))) {
>> ...
>> /* On
2019 May 12
2
[PATCH net] vhost_net: fix possible infinite loop
On Sun, May 05, 2019 at 12:20:24PM +0800, Jason Wang wrote:
>
> On 2019/4/26 3:35, Jason Wang wrote:
> >
> > On 2019/4/26 1:52, Michael S. Tsirkin wrote:
> > > On Thu, Apr 25, 2019 at 03:33:19AM -0400, Jason Wang wrote:
> > > > When the rx buffer is too small for a packet, we will discard the vq
> > > > descriptor and retry it for the next
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi:
This series implements batched updating of the used ring for TX. This
helps to reduce the cache contention on the used ring. The idea is to
first split the datacopy path from zerocopy, and do batching only for
datacopy. This is because zerocopy has already supported its own
batching.
TX PPS was increased by 25.8% and Netperf TCP does not show obvious
differences.
The split of the datapath will also be helpful for
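A minimal sketch of the batching idea for the datacopy TX path,
assuming a per-queue done_idx counter and a VHOST_NET_BATCH threshold;
vhost_add_used_and_signal_n() is vhost's existing bulk-update helper,
the rest is illustrative.

static void vhost_tx_batch_used(struct vhost_dev *dev,
                                struct vhost_net_virtqueue *nvq,
                                unsigned int head)
{
        struct vhost_virtqueue *vq = &nvq->vq;

        /* Record the completed descriptor locally... */
        vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
        vq->heads[nvq->done_idx].len = 0;

        /* ...and touch the used ring only once per batch, so its
         * cache line bounces once instead of once per packet. */
        if (++nvq->done_idx >= VHOST_NET_BATCH) {
                vhost_add_used_and_signal_n(dev, vq, vq->heads,
                                            nvq->done_idx);
                nvq->done_idx = 0;
        }
}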
2020 Jun 07
17
[PATCH RFC v5 00/13] vhost: ring format independence
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and process that, converting it
to an iov later.
The used ring is similar: we fetch into an independent struct first,
and convert that to an iov later.
The point is that we have a tight loop that fetches
descriptors, which is good for cache utilization.
This will
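To make the two-pass idea concrete, here is an illustrative struct for
the intermediate, format-independent representation; the layout and
names are assumptions, not the RFC's actual API.

/* Pass 1: a tight loop fetches descriptors into an array of these
 * (good cache behavior). Pass 2 translates each entry into iov
 * entries when the request is processed. Split and packed rings
 * would differ only in pass 1. */
struct vhost_buf {
        u64 addr;   /* guest address taken from the descriptor */
        u32 len;    /* buffer length */
        u16 id;     /* id to hand back via the used ring */
        u16 flags;  /* direction/next bits, ring-format specific */
};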
2020 Jun 02
21
[PATCH RFC 00/13] vhost: format independence
We let the specifics of the ring format seep through to vhost API
callers - mostly because there was only one format, so it was
hard to imagine what an independent API would look like.
Now that there's an alternative in the form of the packed ring,
it's easier to see the issues, and fixing them is perhaps
the cleanest way to add support for more formats.
This patchset does this by introducing
2020 Jun 03
1
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
On 2020/6/2 9:06, Michael S. Tsirkin wrote:
> Convert vhost net to use the new format-agnostic API.
> In particular, don't poke at vq internals such as the
> heads array.
>
> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
> ---
> drivers/vhost/net.c | 153 +++++++++++++++++++++++---------------------
> 1 file changed, 81 insertions(+), 72 deletions(-)
2020 Jun 08
14
[PATCH RFC v6 00/11] vhost: ring format independence
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and process that, converting it
to an iov later.
The used ring is similar: we fetch into an independent struct first,
and convert that to an iov later.
The point is that we have a tight loop that fetches
descriptors, which is good for cache utilization.
This will
2018 May 21
20
[RFC PATCH net-next 00/12] XDP batching for TUN/vhost_net
Hi all:
We do not support XDP batching for TUN since it can only receive one
packet at a time from vhost_net. This series tries to remove this
limitation by:
- introduce a TUN specific msg_control that can hold a pointer to an
array of XDP buffs
- try copy and build XDP buff in vhost_net
- store XDP buffs in an array and submit them once for every N packets
from vhost_net
- since TUN can only
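The control structure could look roughly like the sketch below: a type
tag plus a pointer that, for batched TX, refers to an array of XDP
buffers built in vhost_net. Names are modeled on the series description
and should be treated as illustrative.

struct tun_msg_ctl {
        unsigned short type;  /* e.g. a TUN_MSG_PTR tag: ptr is an array */
        unsigned short num;   /* number of entries behind ptr */
        void *ptr;            /* struct xdp_buff *[num] built by vhost_net */
};

TUN can then run its XDP program across the whole array in one
sendmsg() call instead of once per packet.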
2020 Jun 10
18
[PATCH RFC v7 00/14] vhost: ring format independence
This intentionally leaves "fixup" changes separate - hopefully
that is enough to fix vhost-net crashes reported here,
but it helps me keep track of what changed.
I will naturally squash them later when we are done.
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and process that
2019 Jul 17
17
[PATCH V3 00/15] Packed virtqueue support for vhost
Hi all:
This series implements packed virtqueues, which were described
at [1]. In this version we try to address the performance regression
seen with V2. The root cause is that packed virtqueues need more
userspace memory accesses, which turn out to be very
expensive. Thanks to the help of 7f466032dc9e ("vhost: access vq
metadata through kernel virtual address"), such overhead could be
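For context on why the packed format costs more userspace accesses:
every availability check reads the descriptor's flags word, which lives
in guest memory. A sketch of the virtio 1.1 test follows (bit positions
per the spec; the helper itself is hypothetical).

#define VRING_PACKED_DESC_F_AVAIL (1 << 7)
#define VRING_PACKED_DESC_F_USED  (1 << 15)

static bool desc_is_avail(u16 flags, bool wrap_counter)
{
        bool avail = flags & VRING_PACKED_DESC_F_AVAIL;
        bool used  = flags & VRING_PACKED_DESC_F_USED;

        /* A descriptor is driver-made-available when AVAIL matches the
         * wrap counter and USED does not; both flip each wraparound. */
        return avail == wrap_counter && used != wrap_counter;
}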
2018 Sep 06
2
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
On Thu, Sep 06, 2018 at 12:05:26PM +0800, Jason Wang wrote:
> This patch implements XDP batching for vhost_net. The idea is first to
> try to do userspace copy and build XDP buff directly in vhost. Instead
> of submitting the packet immediately, vhost_net will batch them in an
> array and submit every 64 (VHOST_NET_BATCH) packets to the underlying
> sockets through msg_control of
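A hedged sketch of the flush step the quoted text describes, reusing
the tun_msg_ctl shape from the TUN/vhost_net entry above; the
TUN_MSG_PTR tag, field names, and helper are assumptions.

static int vhost_tx_flush_xdp(struct vhost_net_virtqueue *nvq,
                              struct socket *sock, struct msghdr *msg)
{
        struct tun_msg_ctl ctl = {
                .type = TUN_MSG_PTR,      /* ptr is an xdp_buff array */
                .num  = nvq->batched_xdp, /* packets queued so far */
                .ptr  = nvq->xdp,         /* the batched buffers */
        };

        msg->msg_control = &ctl;
        msg->msg_controllen = sizeof(ctl);
        nvq->batched_xdp = 0;

        /* One sendmsg() pushes the whole batch down to TUN. */
        return sock->ops->sendmsg(sock, msg, 0);
}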