Displaying 20 results from an estimated 32 matches for "no_more_repli".
2020 Jun 08
2
[PATCH RFC v5 12/13] vhost/vsock: switch to the buf API
...tio_vsock_pkt *pkt;
> - int head, pkts = 0, total_len = 0;
> + int ret, pkts = 0, total_len = 0;
> + struct vhost_buf buf;
> unsigned int out, in;
> bool added = false;
>
> @@ -461,12 +465,13 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
> goto no_more_replies;
> }
>
> - head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
> - &out, &in, NULL, NULL);
> - if (head < 0)
> + ret = vhost_get_avail_buf(vq, &buf,
> + vq->iov, ARRAY_SIZE(vq->iov),
> + &out, &in, NULL, NU...
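The visible change in this hunk is the calling convention: the handler stops tracking a bare descriptor index ("head") and instead fills a struct vhost_buf through vhost_get_avail_buf(), checking a negative return for errors. The standalone mock below only illustrates that shift in shape; fake_ring, buf, get_desc_index() and get_avail_buf() are invented for illustration and are not the real vhost API or its return-value semantics.

/* Hypothetical user-space mock of the calling-convention change only;
 * none of these types or functions are the real vhost API. */
#include <stdio.h>

struct fake_ring { int next; int num; };   /* stand-in for a vring          */
struct buf { int id; int len; };           /* stand-in for struct vhost_buf */

/* Old shape: return a descriptor index; the caller carries "head" around. */
static int get_desc_index(struct fake_ring *r)
{
	if (r->next >= r->num)
		return r->num;             /* "ring empty", like head == vq->num */
	return r->next++;
}

/* New shape: fill a buf struct; return <0 on error, 0 when empty, 1 on success. */
static int get_avail_buf(struct fake_ring *r, struct buf *b)
{
	if (r->next >= r->num)
		return 0;
	b->id = r->next++;
	b->len = 64;
	return 1;
}

int main(void)
{
	struct fake_ring r1 = { .next = 0, .num = 2 };
	struct fake_ring r2 = { .next = 0, .num = 2 };
	struct buf b;
	int head, ret;

	/* Old-style caller: works with a bare index. */
	while ((head = get_desc_index(&r1)) != r1.num)
		printf("old API: head=%d\n", head);

	/* New-style caller: metadata travels in the buf struct, errors are <0. */
	while ((ret = get_avail_buf(&r2, &b)) > 0)
		printf("new API: id=%d len=%d\n", b.id, b.len);

	return ret < 0 ? 1 : 0;
}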
2019 May 16
0
[PATCH net 3/4] vhost: vsock: add weight support
...ruct vhost_work *work)
else
virtio_transport_free_pkt(pkt);
- vhost_add_used(vq, head, sizeof(pkt->hdr) + len);
+ len += sizeof(pkt->hdr);
+ vhost_add_used(vq, head, len);
+ total_len += len;
added = true;
- }
+ } while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
no_more_replies:
if (added)
--
1.8.3.1
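The do/while above is the change that addresses CVE-2019-3900: instead of draining the queue unconditionally, the kick handler accumulates packet and byte counts and bails out once vhost_exceeds_weight() says the budget is spent, so a guest cannot keep the vhost kthread busy forever. The standalone mock below sketches that bounded-work pattern with made-up limits and a fake queue; exceeds_weight(), MAX_PKTS and MAX_BYTES are illustrative only, not the kernel's values.

/* Hypothetical user-space sketch of the "weight" pattern used by vhost:
 * process a bounded amount of work per kick and defer the rest. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PKTS  8      /* illustrative limits, not the kernel's */
#define MAX_BYTES 4096

static bool exceeds_weight(int pkts, int total_len)
{
	return pkts >= MAX_PKTS || total_len >= MAX_BYTES;
}

/* Pretend each call dequeues one packet and returns its length, 0 when empty. */
static int pop_packet(int *remaining)
{
	if (*remaining == 0)
		return 0;
	(*remaining)--;
	return 1024;
}

int main(void)
{
	int remaining = 100;   /* a guest that keeps the queue full */
	int kicks = 0;

	while (remaining > 0) {
		int pkts = 0, total_len = 0, len;

		kicks++;
		/* Mirror of the do/while in the patch: stop once the weight is hit. */
		do {
			len = pop_packet(&remaining);
			if (!len)
				break;
			total_len += len;
		} while (!exceeds_weight(++pkts, total_len));
		/* Real code would requeue the work item here instead of looping. */
	}
	printf("drained queue in %d bounded kicks\n", kicks);
	return 0;
}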
2020 Jun 02
0
[PATCH RFC 12/13] vhost/vsock: switch to the buf API
...vhost_vsock,
dev);
struct virtio_vsock_pkt *pkt;
- int head, pkts = 0, total_len = 0;
+ int ret, pkts = 0, total_len = 0;
+ struct vhost_buf buf;
unsigned int out, in;
bool added = false;
@@ -461,12 +465,13 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
goto no_more_replies;
}
- head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
- &out, &in, NULL, NULL);
- if (head < 0)
+ ret = vhost_get_avail_buf(vq, &buf,
+ vq->iov, ARRAY_SIZE(vq->iov),
+ &out, &in, NULL, NULL);
+ if (ret < 0)
break;
-...
2020 Jun 07
0
[PATCH RFC v5 12/13] vhost/vsock: switch to the buf API
...vhost_vsock,
dev);
struct virtio_vsock_pkt *pkt;
- int head, pkts = 0, total_len = 0;
+ int ret, pkts = 0, total_len = 0;
+ struct vhost_buf buf;
unsigned int out, in;
bool added = false;
@@ -461,12 +465,13 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
goto no_more_replies;
}
- head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
- &out, &in, NULL, NULL);
- if (head < 0)
+ ret = vhost_get_avail_buf(vq, &buf,
+ vq->iov, ARRAY_SIZE(vq->iov),
+ &out, &in, NULL, NULL);
+ if (ret < 0)
break;
-...
2020 Jun 08
0
[PATCH RFC v6 10/11] vhost/vsock: switch to the buf API
...vhost_vsock,
dev);
struct virtio_vsock_pkt *pkt;
- int head, pkts = 0, total_len = 0;
+ int ret, pkts = 0, total_len = 0;
+ struct vhost_buf buf;
unsigned int out, in;
bool added = false;
@@ -461,12 +465,13 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
goto no_more_replies;
}
- head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
- &out, &in, NULL, NULL);
- if (head < 0)
+ ret = vhost_get_avail_buf(vq, &buf,
+ vq->iov, ARRAY_SIZE(vq->iov),
+ &out, &in, NULL, NULL);
+ if (ret < 0)
break;
-...
2020 Jun 08
0
[PATCH RFC v5 12/13] vhost/vsock: switch to the buf API
..., pkts = 0, total_len = 0;
> > + int ret, pkts = 0, total_len = 0;
> > + struct vhost_buf buf;
> > unsigned int out, in;
> > bool added = false;
> >
> > @@ -461,12 +465,13 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
> > goto no_more_replies;
> > }
> >
> > - head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
> > - &out, &in, NULL, NULL);
> > - if (head < 0)
> > + ret = vhost_get_avail_buf(vq, &buf,
> > + vq->iov, ARRAY_SIZE(vq->iov),
>...
2019 May 16
6
[PATCH net 0/4] Prevent vhost kthread from hogging CPU
Hi:
This series tries to prevent guest-triggerable CPU hogging through the
vhost kthread. This is done by introducing and checking a weight
after each request. The patches have been tested with reproducers for
vsock and virtio-net. Only a compile test was done for vhost-scsi.
Please review.
This addresses CVE-2019-3900.
Jason Wang (4):
vhost: introduce vhost_exceeds_weight()
vhost_net: fix possible
2019 May 17
9
[PATCH V2 0/4] Prevent vhost kthread from hogging CPU
Hi:
This series tries to prevent guest-triggerable CPU hogging through the
vhost kthread. This is done by introducing and checking a weight
after each request. The patches have been tested with reproducers for
vsock and virtio-net. Only a compile test was done for vhost-scsi.
Please review.
This addresses CVE-2019-3900.
Changes from V1:
- fix use-after-free in vsock patch
Jason Wang (4):
vhost:
2020 Jun 07
17
[PATCH RFC v5 00/13] vhost: ring format independence
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and only convert that to an
iov later, when processing.
Used ring is similar: we fetch into an independent struct first,
convert that to IOV later.
The point is that we have a tight loop that fetches
descriptors, which is good for cache utilization.
This will
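The cover letter describes a two-stage flow: a tight loop first pulls descriptors into a ring-format-independent representation (struct vhost_buf in the patches above), and a second pass turns that representation into an iovec. The user-space mock below only illustrates that split; the desc, indep_buf and to_iovec() names are hypothetical, and the real structure layout is not shown in these excerpts.

/* Hypothetical sketch of "fetch into an independent format first,
 * convert to iov later"; not the real vhost data structures. */
#include <stddef.h>
#include <stdio.h>
#include <sys/uio.h>

struct desc { void *addr; size_t len; };        /* pretend ring descriptor */
struct indep_buf { void *addr; size_t len; };   /* ring-format-independent */

/* Stage 1: tight fetch loop, touching only the ring (good cache behaviour). */
static int fetch_bufs(const struct desc *ring, int n, struct indep_buf *out)
{
	for (int i = 0; i < n; i++) {
		out[i].addr = ring[i].addr;
		out[i].len  = ring[i].len;
	}
	return n;
}

/* Stage 2: convert the independent representation to an iovec for I/O. */
static int to_iovec(const struct indep_buf *bufs, int n, struct iovec *iov)
{
	for (int i = 0; i < n; i++) {
		iov[i].iov_base = bufs[i].addr;
		iov[i].iov_len  = bufs[i].len;
	}
	return n;
}

int main(void)
{
	char a[16], b[32];
	struct desc ring[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
	struct indep_buf bufs[2];
	struct iovec iov[2];

	int n = fetch_bufs(ring, 2, bufs);
	n = to_iovec(bufs, n, iov);
	printf("built %d iovec entries, first len=%zu\n", n, iov[0].iov_len);
	return 0;
}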
2016 Jul 28
6
[RFC v6 0/6] Add virtio transport for AF_VSOCK
This series is based on v4.7.
This RFC is the implementation for the new VIRTIO Socket device. It is
developed in parallel with the VIRTIO device specification and proves the
design. Once the specification has been accepted I will send a non-RFC version
of this patch series.
v6:
* Add VHOST_VSOCK_SET_RUNNING ioctl to start/stop vhost cleanly
* Add graceful shutdown to avoid port reuse while
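One concrete item in the v6 changelog is the VHOST_VSOCK_SET_RUNNING ioctl for starting and stopping the device cleanly. Below is a minimal user-space sketch of driving that ioctl; it assumes the uapi definitions in <linux/vhost.h> and omits all of the other setup (VHOST_SET_OWNER, memory table, vring configuration) that a real backend such as QEMU performs, so it only shows the start/stop call shape.

/* Minimal sketch: toggle a vhost-vsock device's running state.
 * Assumes <linux/vhost.h> provides VHOST_VSOCK_SET_GUEST_CID and
 * VHOST_VSOCK_SET_RUNNING; full device setup is intentionally omitted. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

int main(void)
{
	int fd = open("/dev/vhost-vsock", O_RDWR);
	if (fd < 0) {
		perror("open /dev/vhost-vsock");
		return 1;
	}

	uint64_t guest_cid = 3;                 /* example CID, >= 3 for guests */
	if (ioctl(fd, VHOST_VSOCK_SET_GUEST_CID, &guest_cid) < 0)
		perror("VHOST_VSOCK_SET_GUEST_CID");

	int running = 1;                        /* start ... */
	if (ioctl(fd, VHOST_VSOCK_SET_RUNNING, &running) < 0)
		perror("VHOST_VSOCK_SET_RUNNING (start)");

	running = 0;                            /* ... and stop cleanly */
	if (ioctl(fd, VHOST_VSOCK_SET_RUNNING, &running) < 0)
		perror("VHOST_VSOCK_SET_RUNNING (stop)");

	close(fd);
	return 0;
}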
2020 Jun 08
14
[PATCH RFC v6 00/11] vhost: ring format independence
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and only convert that to an
iov later, when processing.
Used ring is similar: we fetch into an independent struct first,
convert that to IOV later.
The point is that we have a tight loop that fetches
descriptors, which is good for cache utilization.
This will
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with the pmd
implemented by Jens at
http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change
was needed in the pmd code to kick the virtqueue, since it assumes a
busy-polling backend.
Tests were done between localhost and guest. Testpmd (rxonly) in the guest
reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps.
Notes: The event
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and
tweaks were needed on top of Tiwei's code to make it run with event
index.
Pktgen reports about 20% improvement on PPS (event index is off). More
testing is ongoing.
Notes for tester:
- Starting from this version, vhost needs qemu co-operation to work
2018 Apr 23
11
[RFC V3 PATCH 0/8] Packed ring for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and
tweaks were needed on top of Tiwei's code to make it run. TCP stream
and pktgen do not show an obvious difference compared with the split ring.
Changes from V2:
- do not use & in checking desc_event_flags
- off should be most significant bit
-