search for: dma_map_sg

Displaying 16 results from an estimated 71 matches for "dma_map_sg".

2018 Jul 30
1
[RFC 1/4] virtio: Define virtio_direct_dma_ops structure
...onst struct dma_map_ops virtio_direct_dma_ops = { > + .alloc = virtio_direct_alloc, > + .free = virtio_direct_free, > + .map_page = virtio_direct_map_page, > + .unmap_page = virtio_direct_unmap_page, > + .mapping_error = virtio_direct_mapping_error, > +}; This is missing a dma_map_sg implementation. In general this is mandatory for dma_ops. So either you implement it or explain in a comment why you think you can skip it. > +EXPORT_SYMBOL(virtio_direct_dma_ops); EXPORT_SYMBOL_GPL like all virtio symbols, please.
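
The reviewer's point, sketched: a .map_sg callback can be built by looping the direct map_page path over the scatterlist. This is only a minimal illustration of what such an implementation could look like (virtio_direct_map_page/virtio_direct_mapping_error are the RFC's own callbacks; the loop and error convention are assumptions here, and real code would also unwind partial mappings):

    /* Hypothetical sketch: map each sg entry via the direct map_page path. */
    static int virtio_direct_map_sg(struct device *dev, struct scatterlist *sgl,
                                    int nents, enum dma_data_direction dir,
                                    unsigned long attrs)
    {
            struct scatterlist *sg;
            int i;

            for_each_sg(sgl, sg, nents, i) {
                    sg->dma_address = virtio_direct_map_page(dev, sg_page(sg),
                                                             sg->offset,
                                                             sg->length,
                                                             dir, attrs);
                    if (virtio_direct_mapping_error(dev, sg->dma_address))
                            return 0;       /* map_sg reports failure as 0 */
                    sg_dma_len(sg) = sg->length;
            }
            return nents;
    }
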
2019 Dec 23
1
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
...>> Could someone from the intel team look at this? > > Let me get this straight. There is current API that on success always > returns the same number of elements as the input scatter gather > list. You propose to change the API so that this is no longer the case? No, the API for dma_map_sg() has always been that it may return fewer DMA segments than nents - see Documentation/DMA-API.txt (and otherwise, the return value would surely be a simple success/fail condition). Relying on a particular implementation behaviour has never been strictly correct, even if it does happen to be a...
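
The contract Robin describes, as a caller-side sketch (program_hw_desc() is a hypothetical stand-in for whatever consumes the DMA segments):

    struct scatterlist *sg;
    int i, mapped;

    /* dma_map_sg() may coalesce entries; it returns the DMA segment count */
    mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
    if (mapped == 0)
            return -ENOMEM;

    /* walk only the 'mapped' DMA segments when programming the device */
    for_each_sg(sgl, sg, mapped, i)
            program_hw_desc(sg_dma_address(sg), sg_dma_len(sg));

    /* but unmap with the original nents, per Documentation/DMA-API.txt */
    dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
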
2020 May 15
0
[PATCH v5 25/38] drm: virtio: fix common struct sg_table related issues
On Wed, May 13, 2020 at 03:32:32PM +0200, Marek Szyprowski wrote: > The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function > returns the number of the created entries in the DMA address space. > However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and > dma_unmap_sg must be called with the original number of the entries > passed to the dma_map_sg(). > > struct sg_table is a c...
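
struct sg_table already carries both counts, which is what the series exploits; a sketch of the corrected pattern (field names per include/linux/scatterlist.h, the surrounding virtio-gpu code is assumed):

    struct sg_table *sgt = shmem->pages;

    /* nents = number of DMA segments, orig_nents = original CPU entries */
    sgt->nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
    if (sgt->nents == 0)
            return -ENOMEM;

    /* sync and unmap take the original entry count, not the mapped one */
    dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
    dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
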
2020 Sep 08
2
[PATCH] drm/virtio: drop quirks handling
...s_dma_quirk(vgdev->vdev); struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo); struct scatterlist *sg; int si, ret; @@ -162,15 +161,11 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, return -EINVAL; } - if (use_dma_api) { - shmem->mapped = dma_map_sg(vgdev->vdev->dev.parent, - shmem->pages->sgl, - shmem->pages->nents, - DMA_TO_DEVICE); - *nents = shmem->mapped; - } else { - *nents = shmem->pages->nents; - } + shmem->mapped = dma_map_sg(vgdev->vdev->dev.parent, + shmem->pages->...
2020 Feb 05
2
[PATCH 4/4] drm/virtio: move virtio_gpu_mem_entry initialization to new function
...+ ret = drm_gem_shmem_pin(&bo->base.base); + if (ret < 0) + return -EINVAL; + + bo->pages = drm_gem_shmem_get_sg_table(&bo->base.base); + if (bo->pages == NULL) { + drm_gem_shmem_unpin(&bo->base.base); + return -EINVAL; + } + + if (use_dma_api) { + bo->mapped = dma_map_sg(vgdev->vdev->dev.parent, + bo->pages->sgl, bo->pages->nents, + DMA_TO_DEVICE); + bo->nents = bo->mapped; + } else { + bo->nents = bo->pages->nents; + } + + bo->ents = kmalloc_array(bo->nents, sizeof(struct virtio_gpu_mem_entry), + GFP_KERNEL);...
2020 Feb 07
1
[PATCH v2 4/4] drm/virtio: move virtio_gpu_mem_entry initialization to new function
...ret; + + ret = drm_gem_shmem_pin(&bo->base.base); + if (ret < 0) + return -EINVAL; + + bo->pages = drm_gem_shmem_get_sg_table(&bo->base.base); + if (!bo->pages) { + drm_gem_shmem_unpin(&bo->base.base); + return -EINVAL; + } + + if (use_dma_api) { + bo->mapped = dma_map_sg(vgdev->vdev->dev.parent, + bo->pages->sgl, bo->pages->nents, + DMA_TO_DEVICE); + *nents = bo->mapped; + } else { + *nents = bo->pages->nents; + } + + *ents = kmalloc_array(*nents, sizeof(struct virtio_gpu_mem_entry), + GFP_KERNEL); + if (!(*ents)) { + D...
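
After the truncated hunk above, the entries array would be filled from the (possibly coalesced) sg list; a sketch of that step, assuming the virtio_gpu_mem_entry layout from the virtio-gpu spec (addr/length/padding) — the exact v2 code may differ:

    struct scatterlist *sg;
    int si;

    for_each_sg(bo->pages->sgl, sg, *nents, si) {
            /* DMA address if mapped through the DMA API, physical otherwise */
            (*ents)[si].addr = cpu_to_le64(use_dma_api ?
                                           sg_dma_address(sg) : sg_phys(sg));
            (*ents)[si].length = cpu_to_le32(use_dma_api ?
                                             sg_dma_len(sg) : sg->length);
            (*ents)[si].padding = 0;
    }
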
2020 Sep 08
0
[PATCH] drm/virtio: drop quirks handling
...rtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo); > struct scatterlist *sg; > int si, ret; > @@ -162,15 +161,11 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, > return -EINVAL; > } > > - if (use_dma_api) { > - shmem->mapped = dma_map_sg(vgdev->vdev->dev.parent, > - shmem->pages->sgl, > - shmem->pages->nents, > - DMA_TO_DEVICE); > - *nents = shmem->mapped; > - } else { > - *nents = shmem->pages->nents; > - } > + shmem->mapped = dma_map_sg(vgdev->vdev->...
2016 Dec 08
1
[PATCH 1/2] virtio_ring: Do not call dma_map_page if sg is already mapped.
...me we even reach this code for rpmsg? Does vring_use_dma_api return true for rpmsg? > + if (sg_dma_address(sg)) { > + sg->length = sg_dma_len(sg); > + return sg_dma_address(sg); > + } > + Is there a rule that says 0 is not a valid address? > /* > * We can't use dma_map_sg, because we don't use scatterlists in > * the way it expects (we don't guarantee that the scatterlist > -- > 1.9.1
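
The objection in code terms: a DMA address of 0 can be perfectly valid on some platforms, which is why later kernels reserve a dedicated error cookie (DMA_MAPPING_ERROR, all ones) rather than 0. A less ambiguous marker would be an explicit flag instead of the address test; sg_is_premapped() below is purely hypothetical:

    /* fragile: 0 may be a legal bus address on some platforms */
    if (sg_dma_address(sg)) { ... }

    /* safer sketch: an explicit "already mapped" predicate (hypothetical) */
    if (sg_is_premapped(sg)) {
            sg->length = sg_dma_len(sg);
            return sg_dma_address(sg);
    }
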
2020 Feb 05
0
[PATCH 4/4] drm/virtio: move virtio_gpu_mem_entry initialization to new function
...bo->pages = drm_gem_shmem_get_sg_table(&bo->base.base); > + if (bo->pages == NULL) { > + drm_gem_shmem_unpin(&bo->base.base); > + return -EINVAL; > + } > + > + if (use_dma_api) { > + bo->mapped = dma_map_sg(vgdev->vdev->dev.parent, > + bo->pages->sgl, bo->pages->nents, > + DMA_TO_DEVICE); > + bo->nents = bo->mapped; > + } else { > + bo->nents = bo-&g...
2020 Aug 20
2
[PATCH 05/28] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT
...rge_boundary(dev); if (!merge_boundary || merge_boundary > chunk_size - 1) { /* can't coalesce */ return -EINVAL; } nents = DIV_ROUND_UP(total_size, chunk_size); sg = sgl_alloc(); for_each_sgl() { sg->page = __alloc_pages(get_order(chunk_size)) sg->len = chunk_size; } dma_map_sg(sg, DMA_ATTR_SKIP_CPU_SYNC); // you are guaranteed to get a single dma_addr out } Of course this still uses the scatterlist structure with its annoying mix of input and output parameters, so I'd rather not expose it as an official API at the DMA layer.
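
Fleshing out that pseudocode with real scatterlist APIs, as one possible reading (alloc_merged_buffer() is a made-up name; freeing the pages on error is elided for brevity):

    static int alloc_merged_buffer(struct device *dev, size_t total_size,
                                   size_t chunk_size, struct sg_table *sgt)
    {
            unsigned long merge_boundary = dma_get_merge_boundary(dev);
            struct scatterlist *sg;
            unsigned int nents, i;

            if (!merge_boundary || merge_boundary > chunk_size - 1)
                    return -EINVAL;         /* can't coalesce */

            nents = DIV_ROUND_UP(total_size, chunk_size);
            if (sg_alloc_table(sgt, nents, GFP_KERNEL))
                    return -ENOMEM;

            for_each_sg(sgt->sgl, sg, nents, i) {
                    struct page *p = alloc_pages(GFP_KERNEL,
                                                 get_order(chunk_size));
                    if (!p)
                            goto err;
                    sg_set_page(sg, p, chunk_size, 0);
            }

            /* with a suitable merge boundary this yields a single dma_addr */
            if (!dma_map_sg_attrs(dev, sgt->sgl, nents, DMA_BIDIRECTIONAL,
                                  DMA_ATTR_SKIP_CPU_SYNC))
                    goto err;
            return 0;

    err:
            sg_free_table(sgt);             /* page freeing elided */
            return -ENOMEM;
    }
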
2016 Dec 08
3
[PATCH 0/2] Virtio ring works with DMA coherent memory
RPMsg uses dma_alloc_coherent() to allocate memory to be shared with the remote. In this case, as there are no pages set up by dma_alloc_coherent(), we cannot get the physical address back from the virtual address; instead, we set sg_dma_addr to store the DMA address and mark the entry as already DMA mapped. When the virtio vring sees that sg_dma_addr is already set, it does not call dma_map_page(). The issue
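
A sketch of that setup, simplified from the series: allocate the coherent buffer and prime the scatterlist's DMA fields so the vring can short-circuit dma_map_page():

    struct scatterlist sg;
    dma_addr_t dma;
    void *va = dma_alloc_coherent(dev, size, &dma, GFP_KERNEL);

    if (!va)
            return -ENOMEM;

    sg_init_table(&sg, 1);
    sg_dma_address(&sg) = dma;      /* mark as already DMA mapped */
    sg_dma_len(&sg) = size;
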
2015 Apr 20
3
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
...> aliasing by mapping the buffer only once to IOMMU. We also want to unmap the > buffer from IOMMU only once after all the instances of the buffer have been > unmapped, or only when the buffer is actually freed to cache IOMMU mappings. > > Doing IOMMU mapping for the whole buffer with dma_map_sg is also faster than > mapping page by page, because you can do only one TLB invalidate in the end > of the loop instead of after every page if you use dma_map_single. > > All of these would talk for having IOMMU and GMMU mapping loops separate. > This patch set does not implement bot...
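
The claimed speedup, in code: one dma_map_sg() call lets the IOMMU layer batch TLB maintenance into a single invalidate at the end, where per-page mapping pays that cost on every iteration (illustrative only; variable names are assumptions):

    /* one call, one TLB maintenance pass for the whole buffer */
    mapped = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);

    /* versus: one mapping (and potential invalidate) per page */
    for (i = 0; i < npages; i++)
            addrs[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
                                    DMA_BIDIRECTIONAL);
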
2015 Apr 17
2
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
On Thu, Apr 16, 2015 at 8:06 PM, Vince Hsu <vinceh at nvidia.com> wrote: > This patch implements a way to aggregate the small pages and make them be > mapped as big page(s) by utilizing the platform IOMMU if supported. And then > we can enable compression support for these big pages later. > > Signed-off-by: Vince Hsu <vinceh at nvidia.com> > --- >
2016 Dec 06
0
[RFC LINUX PATCH 1/2] virtio_ring: Do not call dma_map_page if sg is already mapped.
...(const struct vring_virtqueue *vq, if (!vring_use_dma_api(vq->vq.vdev)) return (dma_addr_t)sg_phys(sg); + /* If the sg is already mapped, return the DMA address */ + if (sg_dma_address(sg)) { + sg->length = sg_dma_len(sg); + return sg_dma_address(sg); + } + /* * We can't use dma_map_sg, because we don't use scatterlists in * the way it expects (we don't guarantee that the scatterlist -- 1.9.1
2016 Dec 08
0
[PATCH 1/2] virtio_ring: Do not call dma_map_page if sg is already mapped.
...(const struct vring_virtqueue *vq, if (!vring_use_dma_api(vq->vq.vdev)) return (dma_addr_t)sg_phys(sg); + /* If the sg is already mapped, return the DMA address */ + if (sg_dma_address(sg)) { + sg->length = sg_dma_len(sg); + return sg_dma_address(sg); + } + /* * We can't use dma_map_sg, because we don't use scatterlists in * the way it expects (we don't guarantee that the scatterlist -- 1.9.1
2019 Dec 23
0
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
...unately. > > Could someone from the intel team look at this? Let me get this straight. There is current API that on success always returns the same number of elements as the input scatter gather list. You propose to change the API so that this is no longer the case? A quick check of various dma_map_sg() calls in the kernel seems to indicate checking for 0 for errors and then ignoring the non-zero return is a common pattern. Are you sure it's okay to make the change you're proposing? Anyway, due to the time of year and all, I'd like to ask you to file a bug against i915 at [1] so thi...
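
The fragile caller pattern being described, for contrast with the contract quoted earlier in these results (use_segment() is a hypothetical consumer): check only for total failure, then keep iterating over the original nents and assume a 1:1 mapping:

    /* common in-tree pattern: error check only, return value otherwise
     * ignored; works only while the implementation never coalesces */
    if (!dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE))
            return -EIO;

    for_each_sg(sgl, sg, nents, i)          /* original nents, not mapped */
            use_segment(sg_dma_address(sg), sg_dma_len(sg));
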