Displaying 20 results from an estimated 64 matches for "iommu_unmap".
2017 Aug 17
0
[PATCH 08/13] drm/nouveau/imem/gk20a: Use synchronized interface of the IOMMU-API
...a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
@@ -322,8 +322,9 @@ gk20a_instobj_dtor_iommu(struct nvkm_memory *memory)
/* Unmap pages from GPU address space and free them */
for (i = 0; i < node->base.mem.size; i++) {
- iommu_unmap(imem->domain,
- (r->offset + i) << imem->iommu_pgshift, PAGE_SIZE);
+ iommu_unmap_sync(imem->domain,
+ (r->offset + i) << imem->iommu_pgshift,
+ PAGE_SIZE);
dma_unmap_page(dev, node->dma_addrs[i], PAGE_SIZE,
DMA_BIDIRECTIONAL);
__free...
2018 Nov 27
2
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...y tests run fine when QEMU is bound to a single CPU, even
> though vcpu and viommu run in different threads
>
> > What to do then? Queue in software and wake up task.
>
> Unfortunately I can't do anything here, because IOMMU drivers can't
> sleep in the iommu_map() or iommu_unmap() path.
>
> The problem is the same
> for all IOMMU drivers. That's because the DMA API allows drivers to call
> some functions with interrupts disabled. For example
> Documentation/DMA-API-HOWTO.txt allows dma_alloc_coherent() and
> dma_unmap_single() to be called in interrup...
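The constraint in this exchange is that iommu_map() and iommu_unmap() can be reached with interrupts disabled, so a driver that has to forward the request (to hardware, or to a host in the virtio-iommu case) cannot block on that path. Below is a minimal sketch of an unmap path written under that constraint, using GFP_ATOMIC and a spinlock only; the types and the pending-request queue are hypothetical, not taken from the virtio-iommu driver.

#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical driver-private types, only to illustrate the constraint. */
struct my_unmap_req {
	struct list_head list;
	unsigned long iova;
	size_t size;
};

struct my_iommu_domain {
	struct iommu_domain domain;
	spinlock_t lock;
	struct list_head pending;	/* requests queued for the device */
};

static size_t my_iommu_unmap(struct iommu_domain *domain,
			     unsigned long iova, size_t size)
{
	struct my_iommu_domain *mdom =
		container_of(domain, struct my_iommu_domain, domain);
	struct my_unmap_req *req;
	unsigned long flags;

	/* GFP_ATOMIC: the DMA API allows this path to run with
	 * interrupts disabled, so sleeping allocations are out. */
	req = kzalloc(sizeof(*req), GFP_ATOMIC);
	if (!req)
		return 0;

	req->iova = iova;
	req->size = size;

	/* Spinlock rather than a mutex, for the same reason; the request
	 * is queued for the device and completion is not waited on here. */
	spin_lock_irqsave(&mdom->lock, flags);
	list_add_tail(&req->list, &mdom->pending);
	spin_unlock_irqrestore(&mdom->lock, flags);

	return size;
}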
2023 Mar 10
0
[PATCH] vhost-vdpa: cleanup memory maps when closing vdpa fds
...ost_vdpa_map
> > > iommu_map
> > >
> > > 3. kill QEMU
> > >
> > > 4. vhost_vdpa_release
> > > vhost_vdpa_free_domain
> > >
> > > In this case, we have no opportunity to invoke unpin_user_pages or
> > > iommu_unmap to free the memory.
> >
> > We do:
> >
> > vhost_vdpa_cleanup()
> > vhost_vdpa_remove_as()
> > vhost_vdpa_iotlb_unmap()
> > vhost_vdpa_pa_unmap()
> > unpin_user_pages()
> > vhost_vdp...
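For context, the teardown the thread is arguing about follows the usual pattern for a pinned, IOMMU-mapped range: remove the translation first, then unpin the pages, marking them dirty since the device may have written to them. A rough sketch of that pattern, with an assumed range structure rather than the real vhost_iotlb_map layout:

#include <linux/iommu.h>
#include <linux/mm.h>

/* Assumed description of one pinned, mapped range (not the real
 * vhost_iotlb_map layout). */
struct pinned_range {
	unsigned long iova;	/* device-visible address */
	size_t size;		/* length in bytes */
	struct page **pages;	/* pages pinned at map time */
	unsigned long npages;
};

static void teardown_range(struct iommu_domain *domain,
			   struct pinned_range *r)
{
	/* Remove the translation so the device can no longer DMA here. */
	iommu_unmap(domain, r->iova, r->size);

	/* Drop the pin taken at map time; mark the pages dirty because
	 * the device may have written to them while mapped. */
	unpin_user_pages_dirty_lock(r->pages, r->npages, true);
}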
2018 Nov 27
2
[PATCH v5 5/7] iommu: Add virtio-iommu driver
..., even
> >> though vcpu and viommu run in different threads
> >>
> >>> What to do then? Queue in software and wake up task.
> >>
> >> Unfortunately I can't do anything here, because IOMMU drivers can't
> >> sleep in the iommu_map() or iommu_unmap() path.
> >>
> >> The problem is the same
> >> for all IOMMU drivers. That's because the DMA API allows drivers to call
> >> some functions with interrupts disabled. For example
> >> Documentation/DMA-API-HOWTO.txt allows dma_alloc_coherent() and
>...
2020 Aug 18
3
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add a flush_iotlb_range to allow flushing of an iova range instead of a
full flush in the dma-iommu path.
Allow the iommu_unmap_fast to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.
This patch is useful for iommu drivers (in this case the intel iommu
driver) which need to wait for the IOTLB to be flushed before newly
free/unmapped page table pages can be freed. This way...
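The ordering the patch description relies on is: unmap without flushing, keep the page-table pages that became free on a list alongside the IOVA range, and only release them once the IOTLB flush covering that range has completed. A simplified sketch of that ordering follows; the deferred_free structure and the freelist handling are illustrative, not the exact dma-iommu or intel-iommu code from the series.

#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/mm.h>

/* Illustrative container for a range whose flush is still pending. */
struct deferred_free {
	struct list_head freelist;	/* page-table pages now unused */
	unsigned long iova;
	size_t size;
};

static void unmap_deferred(struct iommu_domain *domain,
			   struct deferred_free *df,
			   unsigned long iova, size_t size)
{
	struct iommu_iotlb_gather gather;

	iommu_iotlb_gather_init(&gather);
	INIT_LIST_HEAD(&df->freelist);

	/* Tear down the mappings but do not flush the IOTLB yet. */
	iommu_unmap_fast(domain, iova, size, &gather);

	/* In the series above, the unmap hands back the page-table pages
	 * it freed; they would be spliced onto df->freelist here, and df
	 * queued (queue_iova in the dma-iommu path) until the flush runs. */
	df->iova = iova;
	df->size = size;
}

static void flush_then_free(struct iommu_domain *domain,
			    struct deferred_free *df)
{
	/* Only once the hardware IOTLB no longer references them may the
	 * old page-table pages be reused for anything else. */
	iommu_flush_iotlb_all(domain);	/* the patch adds a ranged variant */
	put_pages_list(&df->freelist);
}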
2018 Dec 10
1
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...mmu run in different threads
> >>>>
> >>>>> What to do then? Queue in software and wake up task.
> >>>>
> >>>> Unfortunately I can't do anything here, because IOMMU drivers can't
> >>>> sleep in the iommu_map() or iommu_unmap() path.
> >>>>
> >>>> The problem is the same
> >>>> for all IOMMU drivers. That's because the DMA API allows drivers to call
> >>>> some functions with interrupts disabled. For example
> >>>> Documentation/DMA-API-HOWTO...
2019 Dec 21
0
[PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
Allow the iommu_unmap_fast to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.
This is useful for iommu drivers (in this case the intel iommu driver)
which need to wait for the IOTLB to be flushed before newly
free/unmapped page table pages can be freed. This way we can...
2020 Aug 17
1
[PATCH 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add a flush_iotlb_range to allow flushing of an iova range instead of a
full flush in the dma-iommu path.
Allow the iommu_unmap_fast to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.
This patch is useful for iommu drivers (in this case the intel iommu
driver) which need to wait for the IOTLB to be flushed before newly
free/unmapped page table pages can be freed. This way...
2020 Jun 28
2
[PATCH RFC 4/5] vhost-vdpa: support IOTLB batching hints
...> if (ops->dma_map)
> ops->dma_unmap(vdpa, iova, size);
> - else if (ops->set_map)
> - ops->set_map(vdpa, dev->iotlb);
> - else
> + else if (ops->set_map) {
> + if (!v->in_batch)
> + ops->set_map(vdpa, dev->iotlb);
> + } else
> iommu_unmap(v->domain, iova, size);
> }
>
> @@ -655,6 +661,8 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
> struct vhost_iotlb_msg *msg)
> {
> struct vhost_vdpa *v = container_of(dev, struct vhost_vdpa, vdev);
> + struct vdpa_device *vdpa = v->vdpa;
&...
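The hunk above decides how an unmap reaches the vDPA parent: per-request via dma_unmap, deferred to one set_map() call when the parent wants the whole table and a batch is open, or through the platform IOMMU domain otherwise. A sketch of that decision with explanatory comments; the field names (v->in_batch, dev->iotlb) and the vhost-private types follow the RFC's context and are shown here as assumptions, not the exact vhost-vdpa function.

#include <linux/iommu.h>
#include <linux/vdpa.h>

/* struct vhost_vdpa and struct vhost_dev are the driver-private types
 * from drivers/vhost, as seen in the quoted diff. */
static void vdpa_unmap_sketch(struct vhost_vdpa *v, struct vhost_dev *dev,
			      u64 iova, u64 size)
{
	struct vdpa_device *vdpa = v->vdpa;
	const struct vdpa_config_ops *ops = vdpa->config;

	if (ops->dma_map) {
		/* Parent translates per request: tell it directly. */
		ops->dma_unmap(vdpa, iova, size);
	} else if (ops->set_map) {
		/* Parent wants the whole table at once.  Inside a batch
		 * (between the BATCH_BEGIN and BATCH_END hints) skip the
		 * update; a single set_map() is issued at batch end. */
		if (!v->in_batch)
			ops->set_map(vdpa, dev->iotlb);
	} else {
		/* No parent callback: the mapping lives in the platform
		 * IOMMU domain, so undo it there. */
		iommu_unmap(v->domain, iova, size);
	}
}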
2020 Aug 18
0
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
On 2020-08-18 07:04, Tom Murphy wrote:
> Add a flush_iotlb_range to allow flushing of an iova range instead of a
> full flush in the dma-iommu path.
>
> Allow the iommu_unmap_fast to return newly freed page table pages and
> pass the freelist to queue_iova in the dma-iommu ops path.
>
> This patch is useful for iommu drivers (in this case the intel iommu
> driver) which need to wait for the IOTLB to be flushed before newly
> free/unmapped page table page...
2015 Feb 11
0
[PATCH v2 6/6] instmem/gk20a: add IOMMU support
...list_first_entry(&_node->mem->regions, struct nvkm_mm_node,
+ rl_entry);
+
+ /* clear bit 34 to unmap pages */
+ r->offset &= ~BIT(34 - priv->iommu_pgshift);
+
+ /* Unmap pages from GPU address space and free them */
+ for (i = 0; i < _node->mem->size; i++) {
+ iommu_unmap(priv->domain,
+ (r->offset + i) << priv->iommu_pgshift, PAGE_SIZE);
+ __free_page(node->pages[i]);
+ }
+
+ /* Release area from GPU address space */
+ mutex_lock(priv->mm_mutex);
+ nvkm_mm_free(priv->mm, &r);
+ mutex_unlock(priv->mm_mutex);
+}
+
+static void
+g...
2018 Nov 27
0
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...QEMU is bound to a single CPU, even
>> though vcpu and viommu run in different threads
>>
>>> What to do then? Queue in software and wake up task.
>>
>> Unfortunately I can't do anything here, because IOMMU drivers can't
>> sleep in the iommu_map() or iommu_unmap() path.
>>
>> The problem is the same
>> for all IOMMU drivers. That's because the DMA API allows drivers to call
>> some functions with interrupts disabled. For example
>> Documentation/DMA-API-HOWTO.txt allows dma_alloc_coherent() and
>> dma_unmap_single()...
2018 Nov 27
0
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...running another
task right? My tests run fine when QEMU is bound to a single CPU, even
though vcpu and viommu run in different threads
> What to do then? Queue in software and wake up task.
Unfortunately I can't do anything here, because IOMMU drivers can't
sleep in the iommu_map() or iommu_unmap() path. The problem is the same
for all IOMMU drivers. That's because the DMA API allows drivers to call
some functions with interrupts disabled. For example
Documentation/DMA-API-HOWTO.txt allows dma_alloc_coherent() and
dma_unmap_single() to be called in interrupt context.
> As kick is vm...
2020 Jun 29
1
[PATCH RFC 4/5] vhost-vdpa: support IOTLB batching hints
...> > - else if (ops->set_map)
> > > - ops->set_map(vdpa, dev->iotlb);
> > > - else
> > > + else if (ops->set_map) {
> > > + if (!v->in_batch)
> > > + ops->set_map(vdpa, dev->iotlb);
> > > + } else
> > > iommu_unmap(v->domain, iova, size);
> > > }
> > > @@ -655,6 +661,8 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
> > > struct vhost_iotlb_msg *msg)
> > > {
> > > struct vhost_vdpa *v = container_of(dev, struct vhost_vdpa, vdev);...
2018 Dec 10
0
[PATCH v5 5/7] iommu: Add virtio-iommu driver
...;> though vcpu and viommu run in different threads
>>>>
>>>>> What to do then? Queue in software and wake up task.
>>>>
>>>> Unfortunately I can't do anything here, because IOMMU drivers can't
>>>> sleep in the iommu_map() or iommu_unmap() path.
>>>>
>>>> The problem is the same
>>>> for all IOMMU drivers. That's because the DMA API allows drivers to call
>>>> some functions with interrupts disabled. For example
>>>> Documentation/DMA-API-HOWTO.txt allows dma_alloc_coh...
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...t ath11k_base *ab)
ret = iommu_map(iommu_dom, ab_ahb->fw.ce_paddr,
ab_ahb->fw.ce_paddr, ab_ahb->fw.ce_size,
- IOMMU_READ | IOMMU_WRITE);
+ IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
if (ret) {
ath11k_err(ab, "failed to map firmware CE region: %d\n", ret);
goto err_iommu_unmap;
diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index 1cd4815a6dd197..80072b6b628358 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -643,7 +643,8 @@ static int rproc_handle_devmem(struct rproc *rproc, void *p...
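After this change, callers of iommu_map() state the allocation context for page-table memory themselves instead of the core picking one. A small sketch of the resulting calling convention, mirroring the ath11k hunk above; the 1:1 firmware-region mapping, addresses and size are placeholders.

#include <linux/iommu.h>

static int map_fw_region(struct iommu_domain *domain,
			 phys_addr_t paddr, size_t size)
{
	int ret;

	/* The new gfp argument states the allocation context for any
	 * page-table memory iommu_map() needs; GFP_KERNEL because this
	 * caller can sleep. */
	ret = iommu_map(domain, paddr, paddr, size,
			IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
	if (ret)
		return ret;

	/* ... device uses the 1:1 mapped region ... */

	/* iommu_unmap() returns the number of bytes actually unmapped. */
	iommu_unmap(domain, paddr, size);
	return 0;
}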