Displaying 20 results from an estimated 103 matches for "unmap_pages".
2020 Feb 04
2
[PATCH 5/5] vdpasim: vDPA device simulator
On 2020/2/4 4:21 PM, Zhu Lingshan wrote:
>> +static const struct dma_map_ops vdpasim_dma_ops = {
>> +	.map_page = vdpasim_map_page,
>> +	.unmap_page = vdpasim_unmap_page,
>> +	.alloc = vdpasim_alloc_coherent,
>> +	.free = vdpasim_free_coherent,
>> +};
>> +
>
> Hey Jason,
>
> IMHO, it would be nice if dma_ops of the parent device
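For context, the ops above plug the simulator into the generic DMA API. A minimal sketch of what such a .map_page can look like, assuming a vhost_iotlb-backed translation (vdpasim_dev_to_sim and the iommu field are assumptions, not the submitted code):

static dma_addr_t vdpasim_map_page(struct device *dev, struct page *page,
				   unsigned long offset, size_t size,
				   enum dma_data_direction dir,
				   unsigned long attrs)
{
	/* Assumed helper: recover the simulator from its device. */
	struct vdpasim *vdpasim = vdpasim_dev_to_sim(dev);
	phys_addr_t pa = page_to_phys(page) + offset;

	/* Identity mapping: use the physical address as the DMA address
	 * and record it so the device side can look it up later. */
	if (vhost_iotlb_add_range(vdpasim->iommu, pa, pa + size - 1,
				  pa, VHOST_MAP_RW))
		return DMA_MAPPING_ERROR;

	return (dma_addr_t)pa;
}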
2020 Sep 15
0
[PATCH 15/18] dma-mapping: add a new dma_alloc_pages API
...es = dma_common_free_pages,
};
/**
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
index 4a37d8f4de9d9d..9291023e9469c2 100644
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -668,6 +668,8 @@ const struct dma_map_ops s390_pci_dma_ops = {
.unmap_page = s390_dma_unmap_pages,
.mmap = dma_common_mmap,
.get_sgtable = dma_common_get_sgtable,
+ .alloc_pages = dma_common_alloc_pages,
+ .free_pages = dma_common_free_pages,
/* dma_supported is unconditionally true without a callback */
};
EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
diff --git a/arch/x86/kernel/amd_gart_64.c...
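For reference, a driver would use the new API roughly like this (dev, size, and the error path are illustrative):

	struct page *page;
	dma_addr_t dma;

	/* Allocate device-addressable pages; unlike dma_alloc_coherent()
	 * the memory may be non-coherent, so the usual dma_sync_*()
	 * calls are still needed around device access. */
	page = dma_alloc_pages(dev, size, &dma, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* ... use page_address(page) on the CPU, dma on the device ... */

	dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);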
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
Hi all,
this patch series enables xen-swiotlb on arm and arm64.
It has been heavily reworked compared to the previous versions in order
to achieve better performance and to address review comments.
We are not using dma_mark_clean to ensure coherency anymore. We call the
platform implementation of map_page and unmap_page.
We assume that dom0 has been mapped 1:1 (physical address ==
machine
2023 Jan 18
4
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
Change the sg_alloc_table_from_pages() allocation that was hardwired to
GFP_KERNEL to use the gfp parameter like the other allocations in this
function.
Auditing says this is never called from an atomic context, so it is safe
as is, but reads wrong.
Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
drivers/iommu/dma-iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
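The change itself is a one-liner; it plausibly amounts to the following inside __iommu_dma_alloc_noncontiguous() (reconstructed, not copied from the patch):

-	if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
+	if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp))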
2018 Jul 30
1
[RFC 1/4] virtio: Define virtio_direct_dma_ops structure
> +/*
> + * Virtio direct mapping DMA API operations structure
> + *
> + * This defines the DMA API operations structure for all virtio devices
> + * that either do not bring in their own DMA ops from the architecture
> + * or do not want to use the architecture-specific IOMMU-based DMA ops,
> + * because QEMU expects a GPA instead of an IOVA in the absence of
> + * VIRTIO_F_IOMMU_PLATFORM.
> + */
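A minimal sketch of the kind of direct-mapping op the comment describes (illustrative, not the RFC's code): with no IOMMU in the picture, the DMA address handed to the device is simply the guest physical address.

static dma_addr_t virtio_direct_map_page(struct device *dev,
					 struct page *page,
					 unsigned long offset, size_t size,
					 enum dma_data_direction dir,
					 unsigned long attrs)
{
	/* No translation: DMA address == guest physical address (GPA),
	 * which is what QEMU expects without VIRTIO_F_IOMMU_PLATFORM. */
	return page_to_phys(page) + offset;
}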
2023 Sep 04
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...; switch (cap) {
> case IOMMU_CAP_CACHE_COHERENCY:
> return true;
> + case IOMMU_CAP_DEFERRED_FLUSH:
> + return true;
> default:
> return false;
> }
> @@ -1069,6 +1080,7 @@ static struct iommu_ops viommu_ops = {
> .map_pages = viommu_map_pages,
> .unmap_pages = viommu_unmap_pages,
> .iova_to_phys = viommu_iova_to_phys,
> + .flush_iotlb_all = viommu_flush_iotlb_all,
> .iotlb_sync = viommu_iotlb_sync,
> .iotlb_sync_map = viommu_iotlb_sync_map,
> .free = viommu_domain_free,
>
> --
> 2.39.2
>
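Judging from the ops table above, the new callback plausibly just flushes the request queue, along these lines (the nr_endpoints guard is an assumption):

static void viommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct viommu_domain *vdomain = to_viommu_domain(domain);

	/* Nothing attached yet means nothing to flush. */
	if (!vdomain->nr_endpoints)
		return;
	viommu_sync_req(vdomain->viommu);
}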
2013 Nov 11
0
[GIT PULL] (xen) stable/for-linus-3.13-rc0-tag
Hey Linus,
Please git pull the following tag:
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-linus-3.13-rc0-tag
which has tons of fixes and two major features, both concentrated around
the Xen SWIOTLB library.
The short <blurb> is that the tracing facility (just one function) has been
added to SWIOTLB to make it easier to track I/O progress. Additionally under
2016 Jun 02
0
[RFC v3 02/45] dma-mapping: Use unsigned long for dma_attrs
The dma-mapping core and the implementations do not change the
DMA attributes passed by pointer. Thus the pointer can point to const
data. However the attributes do not have to be a bitfield. Instead
unsigned long will do fine:
1. This is just simpler. Both in terms of reading the code and setting
attributes. Instead of initializing local attributes on the stack
and passing pointer to
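In practice the commit's point is the difference between the old pointer-based usage and the new plain bitmask (illustrative before/after):

	/* Before: attrs live on the stack and are passed by pointer. */
	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
	dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE, &attrs);

	/* After: attrs are just bits in an unsigned long. */
	dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE,
			     DMA_ATTR_SKIP_CPU_SYNC);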
2020 Aug 19
0
[PATCH 19/28] dma-mapping: replace DMA_ATTR_NON_CONSISTENT with dma_{alloc, free}_pages
...es = dma_common_free_pages,
};
/**
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
index 64b1399a73f04d..44004f790bdc44 100644
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -670,6 +670,8 @@ const struct dma_map_ops s390_pci_dma_ops = {
.unmap_page = s390_dma_unmap_pages,
.mmap = dma_common_mmap,
.get_sgtable = dma_common_get_sgtable,
+ .alloc_pages = dma_common_alloc_pages,
+ .free_pages = dma_common_free_pages,
/* dma_supported is unconditionally true without a callback */
};
EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
diff --git a/arch/x86/kernel/amd_gart_64.c...
2023 Sep 06
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...FLUSH:
> > > > + return true;
> > > > default:
> > > > return false;
> > > > }
> > > > @@ -1069,6 +1080,7 @@ static struct iommu_ops viommu_ops = {
> > > > .map_pages = viommu_map_pages,
> > > > .unmap_pages = viommu_unmap_pages,
> > > > .iova_to_phys = viommu_iova_to_phys,
> > > > + .flush_iotlb_all = viommu_flush_iotlb_all,
> > > > .iotlb_sync = viommu_iotlb_sync,
> > > > .iotlb_sync_map = viommu_iotlb_sync_map,
> > > >...
2020 Feb 04
0
[PATCH 5/5] vdpasim: vDPA device simulator
On Tue, Feb 04, 2020 at 04:28:27PM +0800, Jason Wang wrote:
>
> On 2020/2/4 4:21 PM, Zhu Lingshan wrote:
> > > +static const struct dma_map_ops vdpasim_dma_ops = {
> > > +	.map_page = vdpasim_map_page,
> > > +	.unmap_page = vdpasim_unmap_page,
> > > +	.alloc = vdpasim_alloc_coherent,
> > > +	.free = vdpasim_free_coherent,
> >
2023 Jan 20
0
[PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
...n for any level of
the API internals to pick up as appropriate, rather than propagate
per-call gfp flags everywhere. As it stands we're still missing
potential pagetable and other domain-related allocations by drivers in
.attach_dev and even (in probably-shouldn't-really-happen cases)
.unmap_pages...
Thanks,
Robin.
> Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
> ---
> drivers/iommu/dma-iommu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 8c2788633c1766..e4bf1bb159...
2023 Sep 04
0
[PATCH 1/2] iommu/virtio: Make use of ops->iotlb_sync_map
...> + return viommu_sync_req(vdomain->viommu);
> +}
> +
> static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> {
> struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> @@ -1058,6 +1070,7 @@ static struct iommu_ops viommu_ops = {
> .unmap_pages = viommu_unmap_pages,
> .iova_to_phys = viommu_iova_to_phys,
> .iotlb_sync = viommu_iotlb_sync,
> + .iotlb_sync_map = viommu_iotlb_sync_map,
> .free = viommu_domain_free,
> }
> };
>
> --
> 2.39.2
>
2013 Dec 09
1
[PATCH] xen/arm64: do not call the swiotlb functions twice
On arm64 the dma_map_ops implementation is based on the swiotlb.
swiotlb-xen, used by default in dom0 on Xen, is also based on the
swiotlb.
Avoid calling into the default arm64 dma_map_ops functions from
xen_dma_map_page, xen_dma_unmap_page, xen_dma_sync_single_for_cpu, and
xen_dma_sync_single_for_device; otherwise we end up calling into the
swiotlb twice.
When arm64 gets a non-swiotlb based
2018 Jul 20
0
[RFC 1/4] virtio: Define virtio_direct_dma_ops structure
The current implementation of the DMA API inside the virtio core calls the
device's DMA ops callbacks when the VIRTIO_F_IOMMU_PLATFORM flag is set. In
the absence of the flag, the virtio core falls back to treating the incoming
SG addresses directly as GPAs. Going forward, virtio should only perform DMA
API based transformations generating either a GPA or an IOVA, depending on
QEMU expectations
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
Support transparent huge page migration to ZONE_DEVICE private memory.
A new selection flag (MIGRATE_VMA_SELECT_COMPOUND) is added to request
THP migration. Otherwise, THPs are split when filling in the source PFN
array. A new flag (MIGRATE_PFN_COMPOUND) is added to the source PFN array
to indicate a huge page can be migrated. If the device driver can allocate
a huge page, it sets the
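A sketch of how a driver might request THP migration with the proposed flags (MIGRATE_VMA_SELECT_COMPOUND and MIGRATE_PFN_COMPOUND come from this series, not mainline at the time; vma and the pfn arrays are illustrative):

	struct migrate_vma args = {
		.vma	= vma,
		.src	= src_pfns,
		.dst	= dst_pfns,
		.start	= start,
		.end	= end,
		/* Ask for whole THPs instead of splitting them. */
		.flags	= MIGRATE_VMA_SELECT_SYSTEM |
			  MIGRATE_VMA_SELECT_COMPOUND,
	};

	if (migrate_vma_setup(&args))
		return -EINVAL;

	/* Source PFNs tagged MIGRATE_PFN_COMPOUND can then be backed by
	 * a single device huge page in the destination array. */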
2023 May 15
3
[PATCH v2 0/2] iommu/virtio: Fixes
One fix reported by Akihiko, and another found while going over the
driver.
Jean-Philippe Brucker (2):
iommu/virtio: Detach domain on endpoint release
iommu/virtio: Return size mapped for a detached domain
drivers/iommu/virtio-iommu.c | 57 ++++++++++++++++++++++++++----------
1 file changed, 41 insertions(+), 16 deletions(-)
--
2.40.0