search for: out_iter

Displaying 20 results from an estimated 20 matches for "out_iter".

2017 Jan 12
1
[patch] vhost/scsi: silence uninitialized variable warning
...n Carpenter <dan.carpenter at oracle.com>

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 253310c..b98dac1 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -843,7 +843,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	struct iov_iter out_iter, in_iter, prot_iter, data_iter;
 	u64 tag;
 	u32 exp_data_len, data_direction;
-	unsigned out, in;
+	unsigned int out = 0, in = 0;
 	int head, ret, prot_bytes;
 	size_t req_size, rsp_size = sizeof(struct virtio_scsi_cmd_resp);
 	size_t out_size, in_size;
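
The warning the patch silences comes from the compiler not being able to prove that out and in are written on every path before they are read. A minimal userspace sketch of that shape, and of the fix the patch applies (initialize at declaration), follows; get_counts() and its values are invented for illustration and are not vhost code.

#include <stdio.h>

/* Helper with an early-exit path that leaves *out and *in untouched,
 * which is what makes the compiler unsure they were ever initialized. */
static int get_counts(int ok, unsigned int *out, unsigned int *in)
{
	if (!ok)
		return -1;	/* error: nothing written to *out / *in */
	*out = 2;
	*in = 1;
	return 0;
}

int main(void)
{
	/* Initializing here, as the patch does, keeps the later read
	 * well-defined even when get_counts() takes the error path. */
	unsigned int out = 0, in = 0;

	if (get_counts(0, &out, &in))
		printf("error path, out=%u in=%u\n", out, in);
	return 0;
}
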
2016 Jun 17
2
[RFC PATCH] vhost, mm: make sure that oom_reaper doesn't reap memory read by vhost
...rtions(+), 10 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 0e6fd556c982..2c8dc0b9a21f 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -932,7 +932,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	 */
 	iov_iter_init(&out_iter, WRITE, vq->iov, out, out_size);
 
-	ret = copy_from_iter(req, req_size, &out_iter);
+	ret = copy_from_iter_mm(vq->dev->mm, req, req_size, &out_iter);
 	if (unlikely(ret != req_size)) {
 		vq_err(vq, "Faulted on copy_from_iter\n");
 		vhost_scsi_send_bad_target(vs, v...
2016 Jun 18
0
[RFC PATCH] vhost, mm: make sure that oom_reaper doesn't reap memory read by vhost
...ff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 0e6fd556c982..2c8dc0b9a21f 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -932,7 +932,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
>  	 */
>  	iov_iter_init(&out_iter, WRITE, vq->iov, out, out_size);
> 
> -	ret = copy_from_iter(req, req_size, &out_iter);
> +	ret = copy_from_iter_mm(vq->dev->mm, req, req_size, &out_iter);
>  	if (unlikely(ret != req_size)) {
>  		vq_err(vq, "Faulted on copy_from_iter\n");
>  		vh...
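
For context, the copy_from_iter() call in the hunk above gathers a contiguous request header out of the scattered iovec segments supplied by the guest, and the caller treats anything short of req_size as a fault. A rough userspace analogue of that gather step is sketched below; copy_from_iovec() and the sample buffers are invented names, not kernel API.

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

/* Copy up to len bytes out of an iovec array into a flat buffer and
 * report how much was actually copied, like copy_from_iter() does. */
static size_t copy_from_iovec(void *dst, size_t len,
			      const struct iovec *iov, int iovcnt)
{
	size_t copied = 0;

	for (int i = 0; i < iovcnt && copied < len; i++) {
		size_t n = iov[i].iov_len;

		if (n > len - copied)
			n = len - copied;
		memcpy((char *)dst + copied, iov[i].iov_base, n);
		copied += n;
	}
	return copied;		/* caller checks copied == len, like ret != req_size */
}

int main(void)
{
	char a[] = "virtio", b[] = "-scsi";
	struct iovec iov[2] = { { a, 6 }, { b, 5 } };
	char req[11];

	size_t got = copy_from_iovec(req, sizeof(req), iov, 2);
	printf("copied %zu bytes: %.11s\n", got, req);
	return 0;
}
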
2018 Dec 13
0
[PATCH] vhost: correct the related warning message
...t/scsi.c
> index 50dffe8..b459b69 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -889,7 +889,7 @@ static void vhost_scsi_submission_work(struct work_struct *work)
> 
>  	if (unlikely(!copy_from_iter_full(vc->req, vc->req_size,
>  					  &vc->out_iter))) {
> -		vq_err(vq, "Faulted on copy_from_iter\n");
> +		vq_err(vq, "Faulted on copy_from_iter_full\n");
>  	} else if (unlikely(*vc->lunp != 1)) {
>  		/* virtio-scsi spec requires byte 0 of the lun to be 1 */
>  		vq_err(vq, "Illegal virtio-scsi lun: %u...
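
The message fix matters because the two helpers report results differently: copy_from_iter() returns the number of bytes it managed to copy, while copy_from_iter_full() returns a bool and succeeds only if every byte was copied, so the error string should name the helper that actually ran. A toy userspace sketch of that contrast in return conventions (toy_copy() and toy_copy_full() are invented stand-ins, not the kernel implementations, and they ignore the iterator-revert behaviour of the real helpers):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Returns the number of bytes copied, possibly short of "want". */
static size_t toy_copy(void *dst, size_t want, const void *src, size_t avail)
{
	size_t n = want < avail ? want : avail;

	memcpy(dst, src, n);
	return n;
}

/* All-or-nothing variant: true only if every requested byte was copied. */
static bool toy_copy_full(void *dst, size_t want, const void *src, size_t avail)
{
	return toy_copy(dst, want, src, avail) == want;
}

int main(void)
{
	char src[8] = "request", dst[16];

	printf("copy:      %zu bytes\n", toy_copy(dst, sizeof(dst), src, sizeof(src)));
	printf("copy_full: %s\n",
	       toy_copy_full(dst, sizeof(dst), src, sizeof(src)) ? "ok" : "short");
	return 0;
}
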
2023 Feb 23
1
[PATCH 3/5] vhost-scsi: Remove vhost_scsi_mutex from port link/unlink
...rivers/vhost/scsi.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 502d64b53d9c..9e154e568438 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -232,7 +232,7 @@ struct vhost_scsi_ctx {
 	struct iov_iter out_iter;
 };
 
-/* Global spinlock to protect vhost_scsi TPG list for vhost IOCTL access */
+/* Global mutex to protect vhost_scsi TPG list for vhost IOCTL access */
 static DEFINE_MUTEX(vhost_scsi_mutex);
 static LIST_HEAD(vhost_scsi_list);
 
@@ -2038,17 +2038,12 @@ static int vhost_scsi_port_link(struct...
2023 Feb 23
5
[PATCH 0/5] vhost-scsi: Fix management operation hangs
The following patches were made over Linus tree and also apply over mst tree's vhost branch. The patches fix an issue where management operations like LUN mapping/unmapping and device addition hang for 30 seconds or up to N minutes depending on the device. The problem is that we use a global mutex to protect the list of tpgs but we hold that mutex during those management operations. So if you
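
The hang pattern described in this cover letter is a single coarse lock held across slow work: every other path that needs the TPG list waits behind whichever management operation currently holds it. A minimal pthread-based sketch of that effect (purely illustrative, build with cc -pthread; tpg_list_lock, slow_management_op() and tpg_count are invented names, not the vhost-scsi code):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t tpg_list_lock = PTHREAD_MUTEX_INITIALIZER;
static int tpg_count;			/* stand-in for the protected TPG list */

static void *slow_management_op(void *arg)
{
	pthread_mutex_lock(&tpg_list_lock);
	sleep(3);			/* e.g. LUN mapping or a device scan */
	tpg_count++;
	pthread_mutex_unlock(&tpg_list_lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, slow_management_op, NULL);
	sleep(1);			/* let the management op grab the lock */

	/* This otherwise fast lookup now stalls until the slow op is done. */
	pthread_mutex_lock(&tpg_list_lock);
	printf("tpg_count=%d\n", tpg_count);
	pthread_mutex_unlock(&tpg_list_lock);

	pthread_join(t, NULL);
	return 0;
}

Per the 3/5 patch above, the series avoids this by no longer taking the global mutex in the port link/unlink paths.
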
2023 Mar 21
8
[PATCH v2 0/7] vhost-scsi: Fix crashes and management op hangs
The following patches were made over Linus' tree. The patches fix 3 issues: 1. If a user performs LIO LUN unmapping before the endpoint has been cleared, then we can end up trying to free a bogus tmf struct if the TMF is still executing when we do the unmap. 2. If vhost_scsi_setup_vq_cmds fails, we can leave the tpg->vhost_scsi pointer set and can end up trying to access a freed struct. 3.
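
Issue 2 above is the familiar stale back-pointer problem: a failed setup path leaves tpg->vhost_scsi pointing at an object that is later freed, and a subsequent user dereferences freed memory. A generic illustration of the pattern and of clearing the pointer on the error path follows; struct tpg, struct endpoint and attach_endpoint() are invented for the sketch and are not the vhost-scsi fix itself.

#include <stdio.h>
#include <stdlib.h>

struct endpoint { int id; };
struct tpg { struct endpoint *ep; };	/* back-pointer, like tpg->vhost_scsi */

/* On failure this must not leave tpg->ep set, otherwise a later
 * caller that checks tpg->ep will dereference freed memory. */
static int attach_endpoint(struct tpg *tpg, int setup_fails)
{
	struct endpoint *ep = malloc(sizeof(*ep));

	if (!ep)
		return -1;
	ep->id = 42;
	tpg->ep = ep;

	if (setup_fails) {		/* e.g. vq command setup failed */
		tpg->ep = NULL;		/* clear the back-pointer ... */
		free(ep);		/* ... before freeing the object */
		return -1;
	}
	return 0;
}

int main(void)
{
	struct tpg tpg = { NULL };

	if (attach_endpoint(&tpg, 1))
		printf("attach failed, ep=%p (safe to test for NULL)\n",
		       (void *)tpg.ep);
	return 0;
}
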
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with the pmd implementation by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost and a guest. Testpmd (rxonly) in the guest reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. Notes: The event
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in PPS (event index is off). More testing is ongoing. Notes for testers: - Starting from this version, vhost needs qemu co-operation to work
2018 Apr 23
11
[RFC V3 PATCH 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and tweaks were needed on top of Tiwei's code to make it run. TCP stream and pktgen tests do not show an obvious difference compared with the split ring. Changes from V2: - do not use & in checking desc_event_flags - off should be the most significant bit -
2018 May 29
9
[RFC V5 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V5 at https://lkml.org/lkml/2018/5/22/138. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in TX PPS when doing pktgen from guest to host. No obvious improvement in RX PPS. We can do lots of optimizations on top, but for simple
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/ Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test virtio-net pmd as well when
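
All of the packed-ring series above implement the virtio 1.1 "packed" virtqueue format, in which a single descriptor ring is shared by driver and device and availability is signalled by flag bits compared against a wrap counter rather than by separate avail/used rings. The sketch below follows the published spec rather than code from these patches; the struct and macro names are illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Packed-ring descriptor as described by the virtio 1.1 spec:
 * bit 7 and bit 15 of flags carry the avail/used wrap marks. */
struct packed_desc {
	uint64_t addr;		/* guest-physical buffer address */
	uint32_t len;		/* buffer length */
	uint16_t id;		/* buffer id, echoed back when used */
	uint16_t flags;
};

#define DESC_F_AVAIL	(1u << 7)
#define DESC_F_USED	(1u << 15)

/* Device-side check: a descriptor is available when its AVAIL bit
 * matches the wrap counter the device expects and its USED bit does not. */
static bool desc_is_avail(const struct packed_desc *d, bool wrap_counter)
{
	bool avail = d->flags & DESC_F_AVAIL;
	bool used = d->flags & DESC_F_USED;

	return avail == wrap_counter && used != wrap_counter;
}

int main(void)
{
	struct packed_desc d = {
		.addr = 0x1000, .len = 512, .id = 0, .flags = DESC_F_AVAIL,
	};

	printf("available: %d\n", desc_is_avail(&d, true));
	return 0;
}
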