search for: viommu_send_req_sync

Displaying 20 results from an estimated 68 matches for "viommu_send_req_sync".

2017 Oct 09
0
[virtio-dev] [RFC] virtio-iommu version 0.4
...+
+	*req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.address_space	= cpu_to_le32(vdomain->id),
+	};
+
 	for (i = 0; i < fwspec->num_ids; i++) {
-		req.device = cpu_to_le32(fwspec->ids[i]);
+		req->device = cpu_to_le32(fwspec->ids[i]);
-		ret = viommu_send_req_sync(vdomain->viommu, &req);
+		ret = viommu_send_req_sync(vdomain->viommu, req);
 		if (ret)
 			break;
 	}
+	kfree(req);
+
 	vdomain->attached++;
 	vdev->vdomain = vdomain;
@@ -550,13 +558,7 @@ static int viommu_map(struct iommu_domain *domain, unsigned long iova,
 {
 	int ret;
 	str...
2023 May 15
3
[PATCH v2 0/2] iommu/virtio: Fixes
One fix reported by Akihiko, and another found while going over the driver.

Jean-Philippe Brucker (2):
  iommu/virtio: Detach domain on endpoint release
  iommu/virtio: Return size mapped for a detached domain

 drivers/iommu/virtio-iommu.c | 57 ++++++++++++++++++++++++++----------
 1 file changed, 41 insertions(+), 16 deletions(-)

--
2.40.0
2023 Apr 14
2
[PATCH] iommu/virtio: Detach domain on endpoint release
...u_fwspec_get(vdev->dev);
+
+	if (!vdomain)
+		return;
+
+	req = (struct virtio_iommu_req_detach) {
+		.head.type	= VIRTIO_IOMMU_T_DETACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req.endpoint = cpu_to_le32(fwspec->ids[i]);
+		WARN_ON(viommu_send_req_sync(vdev->viommu, &req, sizeof(req)));
+	}
+	vdev->vdomain = NULL;
+}
+
 static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			    int prot, gfp_t gfp, size_t *mapped)
@@ -990,6 +1012,7 @@ static void viommu_...
2023 May 10
1
[PATCH] iommu/virtio: Detach domain on endpoint release
...+	return;
> +
> +	req = (struct virtio_iommu_req_detach) {
> +		.head.type	= VIRTIO_IOMMU_T_DETACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
> +		WARN_ON(viommu_send_req_sync(vdev->viommu, &req, sizeof(req)));
> +	}

just a late question: don't you need to decrement vdomain's nr_endpoints?

Thanks

Eric

> +	vdev->vdomain = NULL;
> +}
> +
>  static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
>  			    phys_addr...
2018 Jan 15
1
[RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
...> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @top: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static int viommu_se...
2018 Nov 27
2
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...> > In fact I don't really understand how it's supposed to
> > work at all: you only sync when ring is full.
> > So host may not have seen your map request if ring
> > is not full.
> > Why is it safe to use the address with a device then?
>
> viommu_map() calls viommu_send_req_sync(), which does the sync
> immediately after adding the MAP request.
>
> Thanks,
> Jean

I see. So it happens on every request. Maybe you should clear event index then. This way if exits are disabled you know that host is processing the ring. Event index is good for when you don't ca...
2018 Mar 23
1
[PATCH 1/4] iommu: Add virtio-iommu driver
...> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @top: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static int viommu_se...
2018 Feb 14
0
[PATCH 1/4] iommu: Add virtio-iommu driver
...do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viom...
2017 Nov 17
0
[RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
...do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viom...
2017 Apr 07
0
[RFC PATCH linux] iommu: Add virtio-iommu driver
...t = 0;
+	do {
+		spin_lock_irqsave(&viommu->vq_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->vq_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/**
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @head_ptr: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev...
2018 Dec 10
1
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...'s supposed to
> >>> work at all: you only sync when ring is full.
> >>> So host may not have seen your map request if ring
> >>> is not full.
> >>> Why is it safe to use the address with a device then?
> >>
> >> viommu_map() calls viommu_send_req_sync(), which does the sync
> >> immediately after adding the MAP request.
> >>
> >> Thanks,
> >> Jean
> >
> > I see. So it happens on every request. Maybe you should clear
> > event index then. This way if exits are disabled you know that
> >...
2018 Feb 14
12
[PATCH 0/4] Add virtio-iommu driver
Implement the virtio-iommu driver following version 0.6 of the specification [1]. The previous version, RFCv2, was sent in November [2]. This version addresses Eric's comments and changes the device number. (Since last week I also tested and fixed the probe/release functions; they now use devm properly.) I did not include ACPI support because the next IORT specification isn't ready yet (even
2018 Oct 12
3
[PATCH v3 5/7] iommu: Add virtio-iommu driver
...dev, "could not add request: %d\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * Send a request and wait for it to complete. Return the request status (as an
> + * errno)
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
> +			       size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +
> +	ret = __viommu_add_req(viommu, buf, len, true);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "...
2018 Nov 27
2
[PATCH v5 5/7] iommu: Add virtio-iommu driver
On Tue, Nov 27, 2018 at 05:50:50PM +0000, Jean-Philippe Brucker wrote:
> On 23/11/2018 22:02, Michael S. Tsirkin wrote:
> >> +/*
> >> + * __viommu_sync_req - Complete all in-flight requests
> >> + *
> >> + * Wait for all added requests to complete. When this function returns, all
> >> + * requests that were in-flight at the time of the call have
2017 Jun 16
1
[virtio-dev] [RFC PATCH linux] iommu: Add virtio-iommu driver
...);
> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->vq_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/**
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @head_ptr: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static in...