Displaying 20 results from an estimated 27 matches for "vhost_scsi_do_evt_work".
2020 Jun 02
0
[PATCH RFC 11/13] vhost/scsi: switch to buf APIs
...and most likely a bug */
+static void vhost_scsi_signal_noinput(struct vhost_dev *vdev,
+				      struct vhost_virtqueue *vq,
+				      struct vhost_buf *bufp)
+{
+	struct vhost_buf buf = *bufp;
+
+	buf.in_len = 0;
+	vhost_put_used_buf(vq, &buf);
+	vhost_signal(vdev, vq);
+}
+
+
static void
vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
{
@@ -450,7 +464,8 @@ vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
struct virtio_scsi_event *event = &evt->event;
struct virtio_scsi_event __user *eventp;
unsigned out, in;
- int head, ret;
+ struct vhost_...
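For orientation, below is a minimal caller-side sketch of the pattern the helper above implies: fetch one buffer into the ring-format-independent struct vhost_buf, then either fill it and report the written length via in_len, or complete it with in_len = 0 on failure. It is written as if it lived inside drivers/vhost/scsi.c. Only vhost_put_used_buf(), vhost_signal(), struct vhost_buf and its in_len field appear in the excerpt; vhost_get_avail_buf() and its argument list are assumptions about the rest of this RFC series, and vhost_scsi_send_evt_sketch() is a hypothetical name, not code from the patch.

/* Sketch only; the vhost_get_avail_buf() signature is assumed, not quoted. */
static void vhost_scsi_send_evt_sketch(struct vhost_scsi *vs,
				       struct vhost_virtqueue *vq,
				       struct virtio_scsi_event *event)
{
	struct virtio_scsi_event __user *eventp;
	struct vhost_buf buf;
	unsigned int out, in;
	int ret;

	/* Fetch one available buffer into the format-independent struct. */
	ret = vhost_get_avail_buf(vq, &buf, vq->iov, ARRAY_SIZE(vq->iov),
				  &out, &in, NULL, NULL);
	if (ret <= 0)
		return;		/* no buffer available, or an error */

	eventp = vq->iov[out].iov_base;
	if (__copy_to_user(eventp, event, sizeof(*event))) {
		/* Could not fill the buffer: complete it with no input data. */
		vhost_scsi_signal_noinput(&vs->dev, vq, &buf);
		return;
	}

	/* Report how many bytes were written into the guest buffer. */
	buf.in_len = sizeof(*event);
	vhost_put_used_buf(vq, &buf);
	vhost_signal(&vs->dev, vq);
}

This matches the visible change in the hunk above: the caller stops tracking an "int head" and carries a vhost_buf instead.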
2020 Jun 07
0
[PATCH RFC v5 11/13] vhost/scsi: switch to buf APIs
...and most likely a bug */
+static void vhost_scsi_signal_noinput(struct vhost_dev *vdev,
+				      struct vhost_virtqueue *vq,
+				      struct vhost_buf *bufp)
+{
+	struct vhost_buf buf = *bufp;
+
+	buf.in_len = 0;
+	vhost_put_used_buf(vq, &buf);
+	vhost_signal(vdev, vq);
+}
+
+
static void
vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
{
@@ -450,7 +464,8 @@ vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
struct virtio_scsi_event *event = &evt->event;
struct virtio_scsi_event __user *eventp;
unsigned out, in;
- int head, ret;
+ struct vhost_...
2020 Nov 02
1
[PATCH 07/17] vhost scsi: support delayed IO vq creation
On 2020/10/30 4:47, Michael S. Tsirkin wrote:
> On Tue, Oct 27, 2020 at 12:47:34AM -0500, Mike Christie wrote:
>> On 10/25/20 10:51 PM, Jason Wang wrote:
>>> On 2020/10/22 8:34, Mike Christie wrote:
>>>> Each vhost-scsi device will need an evt and ctl queue, but the number
>>>> of IO queues depends on whatever the user has configured in userspace.
2020 Jun 07
17
[PATCH RFC v5 00/13] vhost: ring format independence
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and do the conversion to
iov later.
The used ring is similar: we fetch into an independent struct first,
and convert that to iov later.
The point is that we have a tight loop that fetches
descriptors, which is good for cache utilization.
This will
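As a rough illustration of that two-phase flow, and nothing more, the sketch below fetches descriptors into a format-independent record in a tight loop and converts them to an iov in a separate pass. Apart from struct vhost_virtqueue and struct iovec, every name here (fetched_desc, ring_has_next, ring_pop_desc, guest_addr_to_va) is hypothetical and not taken from the series; the series' own independent format is the struct vhost_buf visible in the scsi excerpt above.

#include <linux/types.h>
#include <linux/uio.h>
#include "vhost.h"		/* struct vhost_virtqueue */

/* Hypothetical, ring-format-independent descriptor record. */
struct fetched_desc {
	u64 addr;	/* guest address of the buffer */
	u32 len;	/* buffer length */
	u16 id;		/* descriptor id to report back when used */
};

/* Phase 1: tight, ring-format-specific fetch loop (cache friendly). */
static int fetch_descs(struct vhost_virtqueue *vq,
		       struct fetched_desc *descs, int max)
{
	int n = 0;

	while (n < max && ring_has_next(vq))	/* hypothetical helpers that */
		descs[n++] = ring_pop_desc(vq);	/* know split vs. packed layout */
	return n;
}

/* Phase 2: convert the independent records into an iov, done later. */
static int descs_to_iov(struct vhost_virtqueue *vq,
			const struct fetched_desc *descs, int n,
			struct iovec *iov)
{
	int i;

	for (i = 0; i < n; i++) {
		/* hypothetical guest-address-to-kernel-VA translation */
		void *va = guest_addr_to_va(vq, descs[i].addr, descs[i].len);

		if (!va)
			return -EFAULT;
		iov[i].iov_base = va;
		iov[i].iov_len = descs[i].len;
	}
	return n;
}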
2017 May 22
1
[PATCH] vhost: Coalesce vq_err formats, pr_fmt misuse, add missing newlines
...ite");
+ vq_err(vq, "Failed num_buffers write\n");
vhost_discard_vq_desc(vq, headcount);
goto out;
}
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index fd6c8b66f06f..c0d3746d5ff3 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -473,7 +473,7 @@ vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) {
vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n",
- vq->iov[out].iov_len);
+ vq->iov[out].iov_len);
vs->vs_events_missed = true;...
2019 Jul 17
17
[PATCH V3 00/15] Packed virtqueue support for vhost
Hi all:
This series implements packed virtqueues which were described
at [1]. In this version we try to address the performance regression
seen with V2. The root cause is that packed virtqueues need more
userspace memory accesses, which turn out to be very
expensive. Thanks to the help of 7f466032dc9e ("vhost: access vq
metadata through kernel virtual address"), such overhead could be
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with the pmd
implemented by Jens at
http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change
was needed in the pmd code to kick the virtqueue, since it assumes a
busy-polling backend.
Tests were done between localhost and guest. Testpmd (rxonly) in the guest
reports 2.4 Mpps. Testpmd (txonly) reports about 2.1 Mpps.
Notes: The event
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and
tweaks were needed on top of Tiwei's code to make it run with event
index.
Pktgen reports about a 20% improvement in PPS (event index is off). More
testing is ongoing.
Notes for testers:
- Starting from this version, vhost needs qemu co-operation to work
2018 Apr 23
11
[RFC V3 PATCH 0/8] Packed ring for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and
tweaks were needed on top of Tiwei's code to make it run. TCP stream
and pktgen do not show an obvious difference compared with the split ring.
Changes from V2 (see the sketch after this list):
- do not use & in checking desc_event_flags
- off should be most significant bit
-
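As background for those two notes: in the packed layout the event suppression structure's flags field carries enumerated values (enable/disable/desc), so it must be compared for equality rather than tested with &, and the wrap counter travels in the most significant bit of off_wrap. The sketch below uses the uapi names from include/uapi/linux/virtio_ring.h; it is an illustration only, not code from the RFC.

#include <linux/types.h>
#include <linux/virtio_ring.h>

/* Decode a packed-ring event suppression area (struct vring_packed_desc_event). */
static void parse_event_suppress(const struct vring_packed_desc_event *e,
				 u16 *off, bool *wrap, bool *notifications_off)
{
	u16 flags = le16_to_cpu(e->flags);
	u16 off_wrap = le16_to_cpu(e->off_wrap);

	/* flags holds a value (enable/disable/desc), so compare, don't mask. */
	*notifications_off = (flags == VRING_PACKED_EVENT_FLAG_DISABLE);

	/* The wrap counter is the most significant bit of off_wrap. */
	*wrap = !!(off_wrap & (1 << VRING_PACKED_EVENT_F_WRAP_CTR));
	*off = off_wrap & ((1 << VRING_PACKED_EVENT_F_WRAP_CTR) - 1);
}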
2020 Jun 10
18
[PATCH RFC v7 00/14] vhost: ring format independence
This intentionally leaves "fixup" changes separate - hopefully
that is enough to fix vhost-net crashes reported here,
but it helps me keep track of what changed.
I will naturally squash them later when we are done.
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and process that
2018 May 29
9
[RFC V5 PATCH 0/8] Packed ring layout for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V5 at https://lkml.org/lkml/2018/5/22/138. Some fixups and
tweaks were needed on top of Tiwei's code to make it run with event
index.
Pktgen reports about a 20% improvement in TX PPS when doing pktgen from
guest to host. No obvious improvement in RX PPS. We can do lots of
optimizations on top, but for simple
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all:
This series implements packed virtqueues. The code was tested with
Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/
Pktgen tests for both RX and TX do not show an obvious difference from
split virtqueues. The main bottleneck is the guest Linux driver, since
it cannot stress vhost to 100% CPU utilization. A full TCP
benchmark is ongoing. Will test
2020 Jun 08
14
[PATCH RFC v6 00/11] vhost: ring format independence
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and do the conversion to
iov later.
The used ring is similar: we fetch into an independent struct first,
and convert that to iov later.
The point is that we have a tight loop that fetches
descriptors, which is good for cache utilization.
This will