search for: vhost_vq_get_backend

Displaying 20 results from an estimated 20 matches for "vhost_vq_get_backend".

2020 Apr 30
2
[PATCH] vhost: vsock: don't send pkt when vq is not started
...> @@ -252,6 +253,13 @@ vhost_transport_send_pkt(struct virtio_vsock_pkt
> > *pkt)
> > >  		return -ENODEV;
> > >  	}
> > >
> > > +	vq = &vsock->vqs[VSOCK_VQ_RX];
> > > +	if (!vq->private_data) {
> >
> > I think it is better to use vhost_vq_get_backend():
> >
> > 	if (!vhost_vq_get_backend(&vsock->vqs[VSOCK_VQ_RX])) {
> > 		...
> >
> > This function should be called with 'vq->mutex' acquired as explained in
> > the comment, but here we can avoid that, because we are not using the vq,
> > so...
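For context, a minimal sketch of the check being suggested above, assuming the struct vhost_vsock / VSOCK_VQ_RX context visible in the quoted diff; the helper name below is hypothetical and only illustrates the idea:

    /* Hypothetical helper, for illustration only. */
    static bool vhost_vsock_rx_started(struct vhost_vsock *vsock)
    {
            /*
             * vhost_vq_get_backend() normally expects vq->mutex to be held,
             * but the vq itself is not used here; per the thread, a racy
             * NULL check is acceptable because vhost_transport_do_send_pkt()
             * re-checks the backend under the mutex.
             */
            return vhost_vq_get_backend(&vsock->vqs[VSOCK_VQ_RX]) != NULL;
    }

vhost_transport_send_pkt() would presumably bail out early when this check fails; the quoted excerpt is truncated before the body of the new if block, so the exact error path is not shown here.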
2020 Apr 30
0
[PATCH] vhost: vsock: don't send pkt when vq is not started
...rt_send_pkt(struct virtio_vsock_pkt
> > > *pkt)
> > > >  		return -ENODEV;
> > > >  	}
> > > >
> > > > +	vq = &vsock->vqs[VSOCK_VQ_RX];
> > > > +	if (!vq->private_data) {
> > >
> > > I think it is better to use vhost_vq_get_backend():
> > >
> > > 	if (!vhost_vq_get_backend(&vsock->vqs[VSOCK_VQ_RX])) {
> > > 		...
> > >
> > > This function should be called with 'vq->mutex' acquired as explained in
> > > the comment, but here we can avoid that, because we are...
2020 Apr 30
0
[PATCH] vhost: vsock: don't send pkt when vq is not started
...> +	struct vhost_virtqueue *vq;
>
>  	rcu_read_lock();
>
> @@ -252,6 +253,13 @@ vhost_transport_send_pkt(struct virtio_vsock_pkt *pkt)
>  		return -ENODEV;
>  	}
>
> +	vq = &vsock->vqs[VSOCK_VQ_RX];
> +	if (!vq->private_data) {

I think it is better to use vhost_vq_get_backend():

	if (!vhost_vq_get_backend(&vsock->vqs[VSOCK_VQ_RX])) {
		...

This function should be called with 'vq->mutex' acquired as explained in
the comment, but here we can avoid that, because we are not using the vq,
so it is safe, because in vhost_transport_do_send_pkt() we check it...
2023 Apr 10
1
[PATCH v6 11/11] vhost: allow userspace to create workers
...host_scsi_tmf_resp_work, because it can only be called after the
> backend is set.
>
> 2. If the backend has been set and vhost_scsi_tmf_resp_work has
> run or is running, then we would not have called __vhost_vq_attach_worker
> from vhost_vq_attach_worker because it would see vhost_vq_get_backend
> returning a non-NULL value.
>
> If vhost_scsi later sets the backend to NULL, then vhost_scsi_clear_endpoint
> will have made sure the flush has completed when the clear function returns.
> It does that with the device mutex so when we run __vhost_vq_attach_worker
> it will only...
2020 Jun 02
0
[PATCH RFC 11/13] vhost/scsi: switch to buf APIs
...t vhost_scsi_evt *evt)
 {
@@ -450,7 +464,8 @@ vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
 	struct virtio_scsi_event *event = &evt->event;
 	struct virtio_scsi_event __user *eventp;
 	unsigned out, in;
-	int head, ret;
+	struct vhost_buf buf;
+	int ret;

 	if (!vhost_vq_get_backend(vq)) {
 		vs->vs_events_missed = true;
@@ -459,14 +474,14 @@ vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
 again:
 	vhost_disable_notify(&vs->dev, vq);

-	head = vhost_get_vq_desc(vq, vq->iov,
-				 ARRAY_SIZE(vq->iov), &out, &in,
-				 NULL, NULL);...
2020 Jun 07
0
[PATCH RFC v5 11/13] vhost/scsi: switch to buf APIs
...t vhost_scsi_evt *evt)
 {
@@ -450,7 +464,8 @@ vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
 	struct virtio_scsi_event *event = &evt->event;
 	struct virtio_scsi_event __user *eventp;
 	unsigned out, in;
-	int head, ret;
+	struct vhost_buf buf;
+	int ret;

 	if (!vhost_vq_get_backend(vq)) {
 		vs->vs_events_missed = true;
@@ -459,14 +474,14 @@ vhost_scsi_do_evt_work(struct vhost_scsi *vs, struct vhost_scsi_evt *evt)
 again:
 	vhost_disable_notify(&vs->dev, vq);

-	head = vhost_get_vq_desc(vq, vq->iov,
-				 ARRAY_SIZE(vq->iov), &out, &in,
-				 NULL, NULL);...
2023 Mar 28
1
[PATCH v6 11/11] vhost: allow userspace to create workers
...g_worker *info)
+{
+	unsigned long index = info->worker_id;
+	struct vhost_dev *dev = vq->dev;
+	struct vhost_worker *worker;
+
+	if (!dev->use_worker)
+		return -EINVAL;
+
+	/*
+	 * We don't support setting a worker on an active vq to make flushing
+	 * and removal simple.
+	 */
+	if (vhost_vq_get_backend(vq))
+		return -EBUSY;
+
+	worker = xa_find(&dev->worker_xa, &index, UINT_MAX, XA_PRESENT);
+	if (!worker || worker->id != info->worker_id)
+		return -ENODEV;
+
+	__vhost_vq_attach_worker(vq, worker);
+	return 0;
+}
+
+/* Caller must have device mutex */
+static int vhost_new_worke...
2020 Sep 24
0
[PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
...(&vq->mutex);
> @@ -1476,7 +1565,22 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>  	vhost_scsi_flush(vs);
>  	kfree(vs->vs_tpg);
>  	vs->vs_tpg = vs_tpg;
> +	goto out;
>
> +destroy_vq_cmds:
> +	for (i--; i >= VHOST_SCSI_VQ_IO; i--) {
> +		if (!vhost_vq_get_backend(&vs->vqs[i].vq))
> +			vhost_scsi_destroy_vq_cmds(&vs->vqs[i].vq);
> +	}
> +undepend:
> +	for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
> +		tpg = vs_tpg[i];
> +		if (tpg) {
> +			tpg->tv_tpg_vhost_count--;
> +			target_undepend_item(&tpg->se_tpg.t...
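One plausible reading of the unwind loop quoted above, with the rationale spelled out in comments; the loop and helper names come from the quoted hunk, but the interpretation is an assumption, not text from the thread:

    destroy_vq_cmds:
            for (i--; i >= VHOST_SCSI_VQ_IO; i--) {
                    /*
                     * Assumption: vqs that already have a backend attached were
                     * set up by an earlier, successful set_endpoint call, so
                     * their command pools stay; only vqs without a backend had
                     * their cmds allocated by this failing call and are torn
                     * down here.
                     */
                    if (!vhost_vq_get_backend(&vs->vqs[i].vq))
                            vhost_scsi_destroy_vq_cmds(&vs->vqs[i].vq);
            }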
2020 Jun 03
1
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
...l_buf(tvq, buf, tvq->iov, ARRAY_SIZE(tvq->iov),
> +				    out_num, in_num, NULL, NULL);
>
> -	if (r == tvq->num && tvq->busyloop_timeout) {
> +	if (!r && tvq->busyloop_timeout) {
>  		/* Flush batched packets first */
>  		if (!vhost_sock_zcopy(vhost_vq_get_backend(tvq)))
>  			vhost_tx_batch(net, tnvq,
> @@ -577,8 +590,8 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
>
>  	vhost_net_busy_poll(net, rvq, tvq, busyloop_intr, false);
>
> -	r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov),
> -...
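The calling-convention change implied by the hunk above, spelled out side by side; vhost_get_avail_buf() and struct vhost_buf come from the RFC series (not merged upstream), and the return-value semantics below are inferred from the diff rather than stated in it:

    /* Old API: vhost_get_vq_desc() returns a descriptor index, and the
     * special value tvq->num means "no buffer available". */
    r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov),
                          out_num, in_num, NULL, NULL);
    if (r == tvq->num && tvq->busyloop_timeout) {
            /* nothing queued: flush batched packets, then busy poll */
    }

    /* RFC buf API: the descriptor is returned through *buf (a struct
     * vhost_buf pointer in the RFC code), and judging from this hunk a
     * return value of 0 means "no buffer available". */
    r = vhost_get_avail_buf(tvq, buf, tvq->iov, ARRAY_SIZE(tvq->iov),
                            out_num, in_num, NULL, NULL);
    if (!r && tvq->busyloop_timeout) {
            /* same "nothing queued" path as before */
    }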
2020 Jun 07
17
[PATCH RFC v5 00/13] vhost: ring format independence
This adds infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that converting to iov later. Used ring is similar: we fetch into an independent struct first, convert that to IOV later. The point is that we have a tight loop that fetches descriptors, which is good for cache utilization. This will
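A rough sketch of the two-phase flow described in this cover letter, based on the vhost/scsi and vhost/net hunks elsewhere in these results; the struct fields and the fetch_descs()/buf_to_iov() helpers below are hypothetical stand-ins for the RFC's struct vhost_buf and its accessors, not real vhost functions:

    /* Illustrative only: a ring-format-independent descriptor record. */
    struct example_buf {
            u32 id;       /* descriptor id, needed later for the used ring */
            u32 in_len;   /* device-writable bytes */
            u32 out_len;  /* driver-readable bytes */
    };

    /* Hypothetical two-phase handler. */
    static void handle_vq_sketch(struct vhost_virtqueue *vq)
    {
            struct example_buf bufs[64];
            int n, i;

            /* Phase 1: tight, cache-friendly loop that only walks the ring
             * (split or packed) and fills the independent records. */
            n = fetch_descs(vq, bufs, ARRAY_SIZE(bufs));

            /* Phase 2: convert to iovecs only when each buffer is handled. */
            for (i = 0; i < n; i++)
                    buf_to_iov(vq, &bufs[i], vq->iov, ARRAY_SIZE(vq->iov));
    }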
2020 Jun 02
0
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
..., NULL);
+	int r = vhost_get_avail_buf(tvq, buf, tvq->iov, ARRAY_SIZE(tvq->iov),
+				    out_num, in_num, NULL, NULL);

-	if (r == tvq->num && tvq->busyloop_timeout) {
+	if (!r && tvq->busyloop_timeout) {
 		/* Flush batched packets first */
 		if (!vhost_sock_zcopy(vhost_vq_get_backend(tvq)))
 			vhost_tx_batch(net, tnvq,
@@ -577,8 +590,8 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,

 	vhost_net_busy_poll(net, rvq, tvq, busyloop_intr, false);

-	r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov),
-				 out_num, in_num, NULL, NULL);
+	r = vh...
2020 Jun 10
18
[PATCH RFC v7 00/14] vhost: ring format independence
This intentionally leaves "fixup" changes separate - hopefully that is enough to fix vhost-net crashes reported here, but it helps me keep track of what changed. I will naturally squash them later when we are done. This adds infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that
2020 Jun 02
21
[PATCH RFC 00/13] vhost: format independence
We let the specifics of the ring format seep through to vhost API callers - mostly because there was only one format so it was hard to imagine what an independent API would look like. Now that there's an alternative in the form of the packed ring, it's easier to see the issues, and fixing them is perhaps the cleanest way to add support for more formats. This patchset does this by introducing
2020 Jun 08
14
[PATCH RFC v6 00/11] vhost: ring format independence
This adds infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that converting to iov later. Used ring is similar: we fetch into an independent struct first, convert that to IOV later. The point is that we have a tight loop that fetches descriptors, which is good for cache utilization. This will
2023 Mar 21
8
[PATCH v2 0/7] vhost-scsi: Fix crashes and management op hangs
The following patches were made over Linus tree. The patches fix 3 issues: 1. If a user performs LIO LUN unmapping before the endpoint has been cleared then we can end up trying to free a bogus tmf struct if the TMF is still executing when we do the unmap. 2. If vhost_scsi_setup_vq_cmds fails we can leave the tpg->vhost_scsi pointer set and we can end up trying to access a freed struct. 3.
2020 Jun 11
27
[PATCH RFC v8 00/11] vhost: ring format independence
This still causes corruption issues for people so don't try to use in production please. Posting to expedite debugging. This adds infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that converting to iov later. Used ring is similar: we fetch into an independent struct first, convert that to
2023 Mar 28
12
[PATCH v6 00/11] vhost: multiple worker support
The following patches were built over linux-next which contains various vhost patches in mst's tree and the vhost_task patchset in Christian Brauner's tree: git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git kernel.user_worker branch: https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=kernel.user_worker The latter patchset handles the review comment