search for: vhost_scsi_release_cmd

Displaying 14 results from an estimated 21 matches for "vhost_scsi_release_cmd".

2019 Jul 24
0
[PATCH 07/12] vhost-scsi: convert put_page() to put_user_page*()
...at redhat.com>
---
 drivers/vhost/scsi.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index a9caf1bc3c3e..282565ab5e3f 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -329,11 +329,11 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)

 	if (tv_cmd->tvc_sgl_count) {
 		for (i = 0; i < tv_cmd->tvc_sgl_count; i++)
-			put_page(sg_page(&tv_cmd->tvc_sgl[i]));
+			put_user_page(sg_page(&tv_cmd->tvc_sgl[i]));
 	}
 	if (tv_cmd->tvc_prot_sgl_count) {
 		for (i = 0; i < tv_cmd->tvc_...
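The rule behind this conversion: pages pinned with get_user_pages*() must be released with put_user_page*() rather than plain put_page(), so the mm layer can tell gup pins apart from ordinary page references. A minimal sketch of the pairing under that assumption; this is not code from the patch, and the demo_* helper is hypothetical:

#include <linux/mm.h>

/* Hypothetical helper: pin user pages, do I/O, release the pins. */
static int demo_pin_do_io_release(unsigned long uaddr, int nr_pages,
				  struct page **pages)
{
	int got, i;

	/* FOLL_WRITE assumed here because the device writes the pages */
	got = get_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
	if (got <= 0)
		return got;

	/* ... build a scatterlist from pages[] and perform the I/O ... */

	/* old style was put_page(pages[i]); the gup-aware release is: */
	for (i = 0; i < got; i++)
		put_user_page(pages[i]);

	return got;
}

put_user_pages(pages, got) is the batched form of the same release, which is what the put_user_page*() wildcard in the subject covers.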
2019 Jul 24
0
[PATCH 07/12] vhost-scsi: convert put_page() to put_user_page*()
...st/scsi.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index a9caf1bc3c3e..282565ab5e3f 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -329,11 +329,11 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>
>  	if (tv_cmd->tvc_sgl_count) {
>  		for (i = 0; i < tv_cmd->tvc_sgl_count; i++)
> -			put_page(sg_page(&tv_cmd->tvc_sgl[i]));
> +			put_user_page(sg_page(&tv_cmd->tvc_sgl[i]));
>  	}
>  	if (tv_cmd->tvc_prot_sgl_count) {
>  ...
2020 Sep 24
0
[PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
...uct vhost_scsi_virtqueue {
>  	 * Writers must also take dev mutex and flush under it.
>  	 */
>  	int inflight_idx;
> +	struct vhost_scsi_cmd *scsi_cmds;
> +	struct sbitmap scsi_tags;
> +	int max_cmds;
>  };
>
>  struct vhost_scsi {
> @@ -324,7 +326,9 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>  {
>  	struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
>  				struct vhost_scsi_cmd, tvc_se_cmd);
> -	struct se_session *se_sess = tv_cmd->tvc_nexus->tvn_se_sess;
> +	struct vhost_scsi_virtqueue *svq = container_of(tv_cmd->tvc_vq,
> +				stru...
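As a rough sketch of how such a per-virtqueue pool pairs allocation with release (all names except the sbitmap_* calls are hypothetical, and the exact sbitmap_get() arguments vary across kernel versions):

#include <linux/sbitmap.h>

struct demo_cmd {
	int tag;
	/* ... per-command state ... */
};

struct demo_vq {
	struct demo_cmd *scsi_cmds;	/* preallocated array, one slot per tag */
	struct sbitmap scsi_tags;	/* tracks which slots are in flight */
	int max_cmds;
};

static struct demo_cmd *demo_get_cmd(struct demo_vq *svq)
{
	/* sbitmap_get() of this era took (sb, alloc_hint, round_robin) */
	int tag = sbitmap_get(&svq->scsi_tags, 0, false);

	if (tag < 0)
		return NULL;		/* all max_cmds tags are in use */
	svq->scsi_cmds[tag].tag = tag;
	return &svq->scsi_cmds[tag];
}

static void demo_release_cmd(struct demo_vq *svq, struct demo_cmd *cmd)
{
	/* returning the tag frees the preallocated slot for reuse */
	sbitmap_clear_bit(&svq->scsi_tags, cmd->tag);
}

This also explains the container_of(tv_cmd->tvc_vq, ...) in the hunk: once commands and tags live per virtqueue rather than per session, the release path has to locate its virtqueue before it can return the tag.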
2019 Jul 24
20
[PATCH 00/12] block/bio, fs: convert put_page() to put_user_page*()
From: John Hubbard <jhubbard at nvidia.com> Hi, This is mostly Jerome's work, converting the block/bio and related areas to call put_user_page*() instead of put_page(). Because I've changed Jerome's patches, in some cases significantly, I'd like to get his feedback before we actually leave him listed as the author (he might want to disown some or all of these). I added a
2018 May 15
0
[PATCH 1/2] Convert target drivers to use sbitmap
...1fadaa39f322 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -46,7 +46,6 @@
 #include <linux/virtio_scsi.h>
 #include <linux/llist.h>
 #include <linux/bitmap.h>
-#include <linux/percpu_ida.h>

 #include "vhost.h"

@@ -324,7 +323,8 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
 	}
 	vhost_scsi_put_inflight(tv_cmd->inflight);

-	percpu_ida_free(&se_sess->sess_tag_pool, se_cmd->map_tag);
+	sbitmap_queue_clear(&se_sess->sess_tag_pool, se_cmd->map_tag,
+			    se_cmd->map_cpu);
 }

 static u32 vhost_scsi_sess_get_index(struct se...
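The shape of the replacement API, sketched assuming a session-wide sbitmap_queue pool as in the hunk above; the demo_* wrappers are hypothetical:

#include <linux/sbitmap.h>

/* replaces percpu_ida_alloc(); returns a tag, or -1 if none is free */
static int demo_alloc_tag(struct sbitmap_queue *pool, unsigned int *cpu)
{
	/* *cpu is filled with the CPU the tag was taken on, kept as a
	 * hint so the free side can return it to the same cache */
	return sbitmap_queue_get(pool, cpu);
}

/* replaces percpu_ida_free(); cpu must come from the matching get */
static void demo_free_tag(struct sbitmap_queue *pool, int tag,
			  unsigned int cpu)
{
	sbitmap_queue_clear(pool, tag, cpu);
}

This is why the conversion carries se_cmd->map_cpu alongside se_cmd->map_tag: the CPU hint from allocation has to travel with the command so the free side can hand it back.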
2023 Mar 28
12
[PATCH v6 00/11] vhost: multiple worker support
The following patches were built over linux-next, which contains various vhost patches in mst's tree and the vhost_task patchset in Christian Brauner's tree: git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git kernel.user_worker branch: https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=kernel.user_worker The latter patchset handles the review comment
2018 Jun 12
8
[PATCH 0/3] Use sbitmap instead of percpu_ida
Removing the percpu_ida code nets over 400 lines of removal. It's not as spectacular as deleting an entire architecture, but it's still a worthy reduction in lines of code. Untested due to lack of hardware and not understanding how to set up a target platform. Changes from v1:
- Fixed bugs pointed out by Jens in iscsit_wait_for_tag()
- Abstracted out tag freeing as requested by Bart
2018 May 15
6
[PATCH 0/2] Use sbitmap instead of percpu_ida
From: Matthew Wilcox <mawilcox at microsoft.com> This is a pretty rough-and-ready conversion of the target drivers from using percpu_ida to sbitmap. It compiles; I don't have a target setup, so it's completely untested. I haven't tried to do anything particularly clever here, so it's possible that, for example, the wait queue in iscsi_target_util could be more clever, like
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with the pmd implemented by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost and a guest. Testpmd (rxonly) in the guest reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. Notes: The event
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in PPS (event index off). More testing is ongoing. Notes for testers: - Starting from this version, vhost needs qemu co-operation to work
2018 Apr 23
11
[RFC V3 PATCH 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and tweaks were needed on top of Tiwei's code to make it run. TCP stream and pktgen do not show an obvious difference compared with the split ring. Changes from V2:
- do not use & in checking desc_event_flags
- off should be the most significant bit
-
2018 May 29
9
[RFC V5 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V5 at https://lkml.org/lkml/2018/5/22/138. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in TX PPS when doing pktgen from guest to host. No obvious improvement in RX PPS. We can do lots of optimizations on top, but for simple
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/ Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test virtio-net pmd as well when