search for: iommu_dma_alloc

Displaying 16 results from an estimated 16 matches for "iommu_dma_alloc".

2020 Sep 15
0
[PATCH 17/18] dma-iommu: implement ->alloc_noncoherent
...++++++++++++++++++++++++++---- 1 file changed, 37 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 00a5b49248e334..c12c1dc43d312e 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -572,6 +572,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev, * @size: Size of buffer in bytes * @dma_handle: Out argument for allocated DMA handle * @gfp: Allocation flags + * @prot: pgprot_t to use for the remapped mapping * @attrs: DMA attributes for this allocation * * If @size is less than PAGE_SIZE, then a full CPU...
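For readers skimming the result above: the patch backs the dma_alloc_noncoherent()/dma_free_noncoherent() wrappers for IOMMU-mapped devices. As a rough, hypothetical sketch (not taken from the patch) of how a driver consumes that API once it is available:

#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Hypothetical helper: allocate a non-coherent buffer the CPU fills first. */
static void *demo_alloc_buf(struct device *dev, size_t size, dma_addr_t *dma)
{
        void *vaddr;

        vaddr = dma_alloc_noncoherent(dev, size, dma, DMA_TO_DEVICE,
                                      GFP_KERNEL);
        if (!vaddr)
                return NULL;

        /* Non-coherent memory: hand ownership to the device explicitly. */
        memset(vaddr, 0, size);
        dma_sync_single_for_device(dev, *dma, size, DMA_TO_DEVICE);
        return vaddr;
}

static void demo_free_buf(struct device *dev, size_t size, void *vaddr,
                          dma_addr_t dma)
{
        dma_free_noncoherent(dev, size, vaddr, dma, DMA_TO_DEVICE);
}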
2019 Dec 21
0
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
...ng area. */ + void *padding_start = phys_to_virt(phys); + size_t padding_size = aligned_size; + + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && + (dir == DMA_TO_DEVICE || + dir == DMA_BIDIRECTIONAL)) { + padding_start += org_size; + padding_size -= org_size; + } - iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev); + memset(padding_start, 0, padding_size); + } +#endif + + iova = iommu_dma_alloc_iova(domain, aligned_size, dma_mask, dev); if (!iova) return DMA_MAPPING_ERROR; - if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) { - iommu_dma_free_iova(coo...
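To make the quoted hunk easier to follow: when a mapping is bounced through swiotlb, the slot is aligned up, and any part of it the caller's data will not overwrite must be zeroed so stale memory never reaches the device. A simplified stand-alone rendering of that logic (it takes a virtual address directly, whereas the real code starts from a physical address via phys_to_virt(); names otherwise mirror the hunk):

#include <linux/dma-mapping.h>
#include <linux/string.h>

static void clear_bounce_padding(void *slot, size_t aligned_size,
                                 size_t org_size,
                                 enum dma_data_direction dir,
                                 unsigned long attrs)
{
        void *padding_start = slot;
        size_t padding_size = aligned_size;

        /* For TO_DEVICE/BIDIRECTIONAL the payload is copied in anyway,
         * so only the tail beyond the original size needs clearing.
         */
        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
            (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
                padding_start += org_size;
                padding_size -= org_size;
        }
        memset(padding_start, 0, padding_size);
}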
2019 Sep 08
0
[PATCH V6 4/5] iommu/dma-iommu: Use the dev->coherent_dma_mask
...prot, dma_addr_t dma_mask) { struct iommu_domain *domain = iommu_get_dma_domain(dev); struct iommu_dma_cookie *cookie = domain->iova_cookie; @@ -484,7 +484,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, size = iova_align(iovad, size + iova_off); - iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev); + iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev); if (!iova) return DMA_MAPPING_ERROR; @@ -735,7 +735,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, int prot = dma_info_to_prot(dir, coherent, attrs);...
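The point of the change above is that the coherent allocation path must respect dev->coherent_dma_mask rather than the streaming mask returned by dma_get_mask(). The two masks are set independently by drivers, as in this hedged, hypothetical probe snippet:

#include <linux/dma-mapping.h>

static int demo_setup_dma_masks(struct device *dev)
{
        int ret;

        /* Streaming mappings (dma_map_page() and friends) use the DMA mask. */
        ret = dma_set_mask(dev, DMA_BIT_MASK(48));
        if (ret)
                return ret;

        /* dma_alloc_coherent() is bound by the coherent mask, which may be
         * narrower on some hardware.
         */
        return dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
}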
2020 Sep 25
2
[PATCH 17/18] dma-iommu: implement ->alloc_noncoherent
...e> > --- > drivers/iommu/dma-iommu.c | 41 +++++++++++++++++++++++++++++++++++---- > 1 file changed, 37 insertions(+), 4 deletions(-) > Sorry for being late to the party and thanks a lot for the patch. Please see my comments inline. [snip] > @@ -1052,6 +1055,34 @@ static void *iommu_dma_alloc(struct device *dev, size_t size, > return cpu_addr; > } > > +#ifdef CONFIG_DMA_REMAP > +static void *iommu_dma_alloc_noncoherent(struct device *dev, size_t size, > + dma_addr_t *handle, enum dma_data_direction dir, gfp_t gfp) > +{ > + if (!gfpflags_allow_blocking(gfp))...
2016 Jun 02
0
[RFC v3 44/45] dma-mapping: Remove dma_get_attr
...->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) return NULL; return rk_obj->kvaddr; diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 6c1bda504fb1..08a1e2f3690f 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -306,7 +306,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp, } else { size = ALIGN(size, min_size); } - if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs)) + if (attrs & DMA_ATTR_ALLOC_SINGLE_PAGES) alloc_sizes = min_size; count = PAGE_ALIGN(size) >> PAGE_SHIFT; diff --git a/drivers/media/...
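The transformation the hunk applies, shown in isolation (attrs becomes a plain unsigned long bitmask instead of a struct dma_attrs pointer):

/* Before: attribute queried through a helper on struct dma_attrs *. */
if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
        alloc_sizes = min_size;

/* After: attribute tested with a bitwise AND on an unsigned long. */
if (attrs & DMA_ATTR_ALLOC_SINGLE_PAGES)
        alloc_sizes = min_size;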
2020 Sep 15
0
[PATCH 15/18] dma-mapping: add a new dma_alloc_pages API
...iommu.c b/drivers/iommu/dma-iommu.c index 5141d49a046baa..00a5b49248e334 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1120,6 +1120,8 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev) static const struct dma_map_ops iommu_dma_ops = { .alloc = iommu_dma_alloc, .free = iommu_dma_free, + .alloc_pages = dma_common_alloc_pages, + .free_pages = dma_common_free_pages, .mmap = iommu_dma_mmap, .get_sgtable = iommu_dma_get_sgtable, .map_page = iommu_dma_map_page, diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 7983c13...
2020 Sep 15
32
a saner API for allocating DMA addressable pages v3
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag passed to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that the other drivers that have asked for better support for non-coherent DMA to pages, without incurring bounce buffering, can finally be properly supported. As a follow up I
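A rough before/after of what this cover letter describes, seen from a driver (the functions are hypothetical sketches; the new calls use the dma_alloc_pages()/dma_free_pages() signatures introduced by the series):

#include <linux/dma-mapping.h>

/* Before: non-coherent memory requested through an attrs flag. */
static void *old_way(struct device *dev, size_t size, dma_addr_t *dma)
{
        return dma_alloc_attrs(dev, size, dma, GFP_KERNEL,
                               DMA_ATTR_NON_CONSISTENT);
}

/* After: an explicit page-level API that works on every platform. */
static struct page *new_way(struct device *dev, size_t size, dma_addr_t *dma)
{
        return dma_alloc_pages(dev, size, dma, DMA_BIDIRECTIONAL, GFP_KERNEL);
}

static void new_way_free(struct device *dev, size_t size, struct page *page,
                         dma_addr_t dma)
{
        dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);
}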
2020 Aug 19
0
[PATCH 19/28] dma-mapping: replace DMA_ATTR_NON_CONSISTENT with dma_{alloc, free}_pages
...iommu.c b/drivers/iommu/dma-iommu.c index 4959f5df21bd07..3da06df0f327c2 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1120,6 +1120,8 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev) static const struct dma_map_ops iommu_dma_ops = { .alloc = iommu_dma_alloc, .free = iommu_dma_free, + .alloc_pages = dma_common_alloc_pages, + .free_pages = dma_common_free_pages, .mmap = iommu_dma_mmap, .get_sgtable = iommu_dma_get_sgtable, .map_page = iommu_dma_map_page, diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 99aa804...
2020 Sep 14
20
a saner API for allocating DMA addressable pages v2
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag passed to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that the other drivers that have asked for better support for non-coherent DMA to pages, without incurring bounce buffering, can finally be properly supported. I'm still a
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api. While converting the driver I exposed a bug in the Intel i915 driver which causes a huge number of artifacts on the screen of my laptop. You can see a picture of it here: https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg This issue is most likely in the i915 driver and is most likely caused by the
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api. While converting the driver I exposed a bug in the Intel i915 driver which causes a huge number of artifacts on the screen of my laptop. You can see a picture of it here: https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg This issue is most likely in the i915 driver and is most likely caused by the
2019 Sep 08
7
[PATCH v6 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Change-log: V6: -add more details to the description of patch 001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch -rename handle_deferred_device to iommu_dma_deferred_attach -fix double tabs in 0003-iommu-dma-iommu-Handle-deferred-devices.patch V5: -Rebase on
2019 Jun 13
8
[PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Change-log: V4: -Rebase on top of linux-next -Split the removal of the unnecessary locking in the amd iommu driver into a separate patch -refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments v3: -rename dma_limit to dma_mask -exit
2019 Jun 13
8
[PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Change-log: V4: -Rebase on top of linux-next -Split the removal of the unnecessary locking in the amd iommu driver into a separate patch -refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments v3: -rename dma_limit to dma_mask -exit
2016 Jun 02
52
[RFC v3 00/45] dma-mapping: Use unsigned long for dma_attrs
Hi, This is the third approach (complete this time) to replacing struct dma_attrs with unsigned long. The main patch (2/45) doing the change is split into many subpatches for easier review (3-43). They should be squashed together when applying. *Important:* The patchset is *only* build tested on allyesconfigs: ARM, ARM64, i386, x86_64 and powerpc. Please provide reviews and tests for other
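A condensed illustration of the conversion at a call site (the surrounding driver code is hypothetical; the pre-conversion helpers are the ones this series removes):

/* Before: attributes carried in a struct dma_attrs on the stack. */
DEFINE_DMA_ATTRS(attrs);
dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);

/* After: attributes are just bits in an unsigned long. */
unsigned long attrs = DMA_ATTR_WRITE_COMBINE;
buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, attrs);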
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag passed to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that the other drivers that have asked for better support for non-coherent DMA to pages, without incurring bounce buffering, can finally be properly supported. I'm still a