search for: viommu_unmap

Displaying 20 results from an estimated 60 matches for "viommu_unmap".

2017 Oct 25
0
[RFC] virtio-iommu version 0.5
...gle descriptor, so the interface stays the same. > > So for the non-SVM case, > the guest virtio-iommu driver will program the context descriptor in such a way that > the ASID is not in the shared set (ASET = 1b), and hence physical IOMMU TLB invalidates would get triggered > from software for every viommu_unmap (in the guest kernel) through QEMU (using VFIO ioctls)? That's right. viommu_unmap will send an INVALIDATE request on the virtio request queue, forwarded to the driver via a VFIO ioctl. > And for the SVM case, the ASID would be in the shared set and explicit TLB invalidates > are not required from sof...
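
For reference, a rough sketch of the host-side forwarding mentioned above. It only shows the plain map/unmap path that the existing VFIO type1 API provides; the invalidation-specific VFIO interface needed for the nested, non-SVM case under discussion was still being worked out at the time, so this is not that exact interface. forward_unmap() and the container_fd handling are hypothetical, not QEMU code.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Hypothetical helper: remove a guest IOVA range from the physical IOMMU
 * through a VFIO type1 container, roughly what a VMM does after popping
 * the guest's unmap request off the virtio-iommu request queue. */
static int forward_unmap(int container_fd, uint64_t iova, uint64_t size)
{
        struct vfio_iommu_type1_dma_unmap unmap = {
                .argsz = sizeof(unmap),
                .iova  = iova,
                .size  = size,
        };

        return ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap);
}
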
2017 Oct 25
1
[RFC] virtio-iommu version 0.5
...table but with a > single descriptor, so the interface stays the same. So for the non-SVM case, the guest virtio-iommu driver will program the context descriptor in such a way that the ASID is not in the shared set (ASET = 1b), and hence physical IOMMU TLB invalidates would get triggered from software for every viommu_unmap (in the guest kernel) through QEMU (using VFIO ioctls)? And for the SVM case, the ASID would be in the shared set and explicit TLB invalidates are not required from software? But with the second > solution, nested with SMMUv2 isn't supported since it doesn't have context > tables. The second solu...
2017 Oct 09
0
[virtio-dev] [RFC] virtio-iommu version 0.4
...t;flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_READ); + + if (prot & IOMMU_WRITE) + req->flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_WRITE); + + ret = viommu_send_req_sync(vdomain->viommu, req); + kfree(req); if (ret) viommu_tlb_unmap(vdomain, iova, size); @@ -587,11 +602,7 @@ static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, int ret; size_t unmapped; struct viommu_domain *vdomain = to_viommu_domain(domain); - struct virtio_iommu_req_unmap req = { - .head.type = VIRTIO_IOMMU_T_UNMAP, - .address_space = cpu_to_le32(vdomain->id), - .virt_addr = cpu_to_le64(iova)...
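
The hunk above also shows viommu_map() turning the IOMMU prot bits into virtio-iommu MAP flags before sending the request. A condensed sketch of that conversion: viommu_prot_to_flags() is a hypothetical helper (the v0.4 patch open-codes this), only the two flags visible in the excerpt are handled, and the caller would still apply cpu_to_le32() when filling req->flags.

#include <linux/iommu.h>
#include <uapi/linux/virtio_iommu.h>

/* Hypothetical helper condensing the prot handling quoted above. */
static u32 viommu_prot_to_flags(int prot)
{
        u32 flags = 0;

        if (prot & IOMMU_READ)
                flags |= VIRTIO_IOMMU_MAP_F_READ;
        if (prot & IOMMU_WRITE)
                flags |= VIRTIO_IOMMU_MAP_F_WRITE;

        return flags;
}
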
2018 Mar 23
1
[PATCH 1/4] iommu: Add virtio-iommu driver
...he buffer for the unmap request. When > + * the returned size is greater than zero, if a mapping is returned, the > + * caller must free it. This "free multiple mappings except maybe hand one of them off to the caller" interface is really unintuitive. AFAICS it's only used by viommu_unmap() to grab mapping->req, but that doesn't seem to care about mapping itself, so I wonder whether it wouldn't make more sense to just have a global kmem_cache of struct virtio_iommu_req_unmap for that and avoid a lot of complexity... > + * > + * On success, returns the number of...
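
Purely as an illustration of the suggestion above, not code from the series: a global slab cache of unmap requests would let viommu_del_mappings() free every mapping unconditionally instead of handing one back to the caller. viommu_unmap_cache, viommu_cache_init() and viommu_send_unmap() are hypothetical names; the send call mirrors the form visible in the Feb 2018 excerpt further down, and everything except the request type is left to the quoted patch.

/* Hypothetical sketch of the reviewer's kmem_cache suggestion. */
static struct kmem_cache *viommu_unmap_cache;

static int __init viommu_cache_init(void)
{
        viommu_unmap_cache = KMEM_CACHE(virtio_iommu_req_unmap, 0);
        return viommu_unmap_cache ? 0 : -ENOMEM;
}

static int viommu_send_unmap(struct viommu_domain *vdomain,
                             unsigned long iova, size_t size)
{
        struct virtio_iommu_req_unmap *req;
        int ret;

        req = kmem_cache_zalloc(viommu_unmap_cache, GFP_ATOMIC);
        if (!req)
                return -ENOMEM;

        req->head.type = VIRTIO_IOMMU_T_UNMAP;
        /* ... fill the domain ID and IOVA range as in the quoted patch ... */

        ret = viommu_send_req_sync(vdomain->viommu, req);
        kmem_cache_free(viommu_unmap_cache, req);
        return ret;
}
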
2020 Apr 14
0
[PATCH v2 20/33] iommu/virtio: Convert to probe/release_device() call-backs
...return; vdev = dev_iommu_priv_get(dev); - iommu_group_remove_device(dev); - iommu_device_unlink(&vdev->viommu->iommu, dev); generic_iommu_put_resv_regions(dev, &vdev->resv_regions); kfree(vdev); } @@ -960,8 +939,8 @@ static struct iommu_ops viommu_ops = { .unmap = viommu_unmap, .iova_to_phys = viommu_iova_to_phys, .iotlb_sync = viommu_iotlb_sync, - .add_device = viommu_add_device, - .remove_device = viommu_remove_device, + .probe_device = viommu_probe_device, + .release_device = viommu_release_device, .device_group = viommu_device_group, .get_resv_regions...
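
For context, after this conversion the release path reduces to freeing per-device state, since the IOMMU core now handles group membership and the sysfs link around release_device(). A sketch based on the lines this hunk keeps; the fwspec guard at the top is assumed from the surrounding driver and is not visible in the excerpt.

static void viommu_release_device(struct device *dev)
{
        struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
        struct viommu_endpoint *vdev;

        /* Assumed guard; not shown in the hunk above. */
        if (!fwspec || fwspec->ops != &viommu_ops)
                return;

        vdev = dev_iommu_priv_get(dev);

        /* iommu_group_remove_device() and iommu_device_unlink() are gone:
         * the core now does both around release_device(). */
        generic_iommu_put_resv_regions(dev, &vdev->resv_regions);
        kfree(vdev);
}
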
2017 Apr 07
0
[RFC PATCH linux] iommu: Add virtio-iommu driver
...IOMMU_WRITE) + req.flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_WRITE); + + ret = viommu_tlb_map(vdomain, iova, paddr, size); + if (ret) + return ret; + + ret = viommu_send_req_sync(vdomain->viommu, &req); + if (ret) + viommu_tlb_unmap(vdomain, iova, size); + + return ret; +} + +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + int ret; + size_t unmapped; + struct viommu_domain *vdomain = to_viommu_domain(domain); + struct virtio_iommu_req_unmap req = { + .head.type = VIRTIO_IOMMU_T_UNMAP, + .address_space = cpu_to_le32(vdomain->id), + .virt_a...
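
The preview cuts off inside the request initializer. A minimal sketch of how the rest of that RFC function plausibly reads, given the fields visible above; the .size field, the viommu_tlb_unmap() return value and the final error handling are assumptions, since the excerpt is truncated before them.

static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
                           size_t size)
{
        int ret;
        size_t unmapped;
        struct viommu_domain *vdomain = to_viommu_domain(domain);
        struct virtio_iommu_req_unmap req = {
                .head.type      = VIRTIO_IOMMU_T_UNMAP,
                .address_space  = cpu_to_le32(vdomain->id),
                .virt_addr      = cpu_to_le64(iova),
                .size           = cpu_to_le64(size),   /* assumed field */
        };

        /* Drop the range from the driver's internal mapping tree first... */
        unmapped = viommu_tlb_unmap(vdomain, iova, size);
        if (unmapped < size)
                return 0;

        /* ...then ask the device to tear down the IOVA range. */
        ret = viommu_send_req_sync(vdomain->viommu, &req);

        return ret ? 0 : unmapped;
}
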
2017 Jun 16
1
[virtio-dev] [RFC PATCH linux] iommu: Add virtio-iommu driver
...E); > + > + ret = viommu_tlb_map(vdomain, iova, paddr, size); > + if (ret) > + return ret; > + > + ret = viommu_send_req_sync(vdomain->viommu, &req); > + if (ret) > + viommu_tlb_unmap(vdomain, iova, size); > + > + return ret; > +} > + > +static size_t viommu_unmap(struct iommu_domain *domain, unsigned > long iova, > + size_t size) > +{ > + int ret; > + size_t unmapped; > + struct viommu_domain *vdomain = to_viommu_domain(domain); > + struct virtio_iommu_req_unmap req = { > + .head.type = VIRTIO_IOMMU_T_UNMAP, > + .address_sp...
2018 Oct 12
3
[PATCH v3 5/7] iommu: Add virtio-iommu driver
...o_le32(flags), > + }; > + > + if (!vdomain->nr_endpoints) > + return 0; > + > + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); > + if (ret) > + viommu_del_mappings(vdomain, iova, size); > + > + return ret; > +} > + > +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, > + size_t size) > +{ > + int ret = 0; > + size_t unmapped; > + struct virtio_iommu_req_unmap unmap; > + struct viommu_domain *vdomain = to_viommu_domain(domain); > + > + unmapped = viommu_del_mappings(vdomain, iova, size...
2018 Nov 22
0
[PATCH v5 5/7] iommu: Add virtio-iommu driver
..., + .virt_end = cpu_to_le64(iova + size - 1), + .flags = cpu_to_le32(flags), + }; + + if (!vdomain->nr_endpoints) + return 0; + + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); + if (ret) + viommu_del_mappings(vdomain, iova, size); + + return ret; +} + +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + int ret = 0; + size_t unmapped; + struct virtio_iommu_req_unmap unmap; + struct viommu_domain *vdomain = to_viommu_domain(domain); + + unmapped = viommu_del_mappings(vdomain, iova, size); + if (unmapped < size) + return 0...
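
The preview stops right after the short-unmap check. For orientation, here is how the remainder of this function looks in the driver that was eventually merged; the field names (domain, virt_start, virt_end) and viommu_add_req() are taken from the upstream virtio-iommu driver rather than from this excerpt, so read it as where the series ended up, not as the literal v5 hunk.

static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
                           size_t size)
{
        int ret = 0;
        size_t unmapped;
        struct virtio_iommu_req_unmap unmap;
        struct viommu_domain *vdomain = to_viommu_domain(domain);

        unmapped = viommu_del_mappings(vdomain, iova, size);
        if (unmapped < size)
                return 0;

        /* Device already removed all mappings after detach. */
        if (!vdomain->nr_endpoints)
                return unmapped;

        unmap = (struct virtio_iommu_req_unmap) {
                .head.type      = VIRTIO_IOMMU_T_UNMAP,
                .domain         = cpu_to_le32(vdomain->id),
                .virt_start     = cpu_to_le64(iova),
                .virt_end       = cpu_to_le64(iova + unmapped - 1),
        };

        /* The request is only queued here; it goes out when the core calls
         * iotlb_sync(), batching unmaps on the request queue. */
        ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
        return ret ? 0 : unmapped;
}
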
2018 Nov 15
0
[PATCH v4 5/7] iommu: Add virtio-iommu driver
..., + .virt_end = cpu_to_le64(iova + size - 1), + .flags = cpu_to_le32(flags), + }; + + if (!vdomain->nr_endpoints) + return 0; + + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); + if (ret) + viommu_del_mappings(vdomain, iova, size); + + return ret; +} + +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + int ret = 0; + size_t unmapped; + struct virtio_iommu_req_unmap unmap; + struct viommu_domain *vdomain = to_viommu_domain(domain); + + unmapped = viommu_del_mappings(vdomain, iova, size); + if (unmapped < size) + return 0...
2018 Jun 21
0
[PATCH v2 2/5] iommu: Add virtio-iommu driver
..., + .virt_end = cpu_to_le64(iova + size - 1), + .flags = cpu_to_le32(flags), + }; + + if (!vdomain->nr_endpoints) + return 0; + + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); + if (ret) + viommu_del_mappings(vdomain, iova, size); + + return ret; +} + +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + int ret = 0; + size_t unmapped; + struct virtio_iommu_req_unmap unmap; + struct viommu_domain *vdomain = to_viommu_domain(domain); + + unmapped = viommu_del_mappings(vdomain, iova, size); + if (unmapped < size) + return 0...
2018 Oct 12
0
[PATCH v3 5/7] iommu: Add virtio-iommu driver
..., + .virt_end = cpu_to_le64(iova + size - 1), + .flags = cpu_to_le32(flags), + }; + + if (!vdomain->nr_endpoints) + return 0; + + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); + if (ret) + viommu_del_mappings(vdomain, iova, size); + + return ret; +} + +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + int ret = 0; + size_t unmapped; + struct virtio_iommu_req_unmap unmap; + struct viommu_domain *vdomain = to_viommu_domain(domain); + + unmapped = viommu_del_mappings(vdomain, iova, size); + if (unmapped < size) + return 0...
2018 Feb 14
0
[PATCH 1/4] iommu: Add virtio-iommu driver
...+ .virt_end = cpu_to_le64(iova + size - 1), + .flags = cpu_to_le32(flags), + }; + + if (!vdomain->endpoints) + return 0; + + ret = viommu_send_req_sync(vdomain->viommu, &mapping->req); + if (ret) + viommu_del_mappings(vdomain, iova, size, NULL); + + return ret; +} + +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + int ret = 0; + size_t unmapped; + struct viommu_mapping *mapping = NULL; + struct viommu_domain *vdomain = to_viommu_domain(domain); + + unmapped = viommu_del_mappings(vdomain, iova, size, &mapping); + if (unmapped < s...
2019 May 30
0
[PATCH v8 5/7] iommu: Add virtio-iommu driver
..., + .virt_end = cpu_to_le64(iova + size - 1), + .flags = cpu_to_le32(flags), + }; + + if (!vdomain->nr_endpoints) + return 0; + + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); + if (ret) + viommu_del_mappings(vdomain, iova, size); + + return ret; +} + +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + int ret = 0; + size_t unmapped; + struct virtio_iommu_req_unmap unmap; + struct viommu_domain *vdomain = to_viommu_domain(domain); + + unmapped = viommu_del_mappings(vdomain, iova, size); + if (unmapped < size) + return 0...
2018 Nov 08
0
[PATCH v3 5/7] iommu: Add virtio-iommu driver
...f (!vdomain->nr_endpoints) >> + return 0; >> + >> + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); >> + if (ret) >> + viommu_del_mappings(vdomain, iova, size); >> + >> + return ret; >> +} >> + >> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, >> + size_t size) >> +{ >> + int ret = 0; >> + size_t unmapped; >> + struct virtio_iommu_req_unmap unmap; >> + struct viommu_domain *vdomain = to_viommu_domain(domain); >> + >> + unmapped = viommu_...
2018 Nov 23
2
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...o_le32(flags), > + }; > + > + if (!vdomain->nr_endpoints) > + return 0; > + > + ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map)); > + if (ret) > + viommu_del_mappings(vdomain, iova, size); > + > + return ret; > +} > + > +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, > + size_t size) > +{ > + int ret = 0; > + size_t unmapped; > + struct virtio_iommu_req_unmap unmap; > + struct viommu_domain *vdomain = to_viommu_domain(domain); > + > + unmapped = viommu_del_mappings(vdomain, iova, size...