Displaying 7 results from an estimated 7 matches for "max_mapping_size".
2019 Feb 07 · 0 replies · [PATCH v7 3/5] dma: Introduce dma_max_mapping_size()
...0644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
+::
+
+	size_t
+	dma_max_mapping_size(struct device *dev);
+
+Returns the maximum size of a mapping for the device. The size parameter
+of the mapping functions like dma_map_single(), dma_map_page() and
+others should not be larger than the returned value.
Part Id - Streaming DMA mappings
--------------------------------
diff --git...
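As a usage note (not part of the patch): a driver would typically check this limit before creating a streaming mapping. A minimal sketch, assuming a hypothetical driver helper and buffer; dma_max_mapping_size(), dma_map_single() and dma_mapping_error() are the real kernel APIs:

#include <linux/dma-mapping.h>

/* Hypothetical driver helper: refuse buffers the DMA layer cannot map. */
static int my_map_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* With SWIOTLB active, mappings above its limit would fail. */
	if (len > dma_max_mapping_size(dev))
		return -EINVAL;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... hand 'handle' to the device, then unmap when done ... */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
	return 0;
}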
2019 Feb 07 · 5 replies · [PATCH v7 0/5] Fix virtio-blk issue with SWIOTLB
...r than 256 KiB. When the
virtio-blk driver tries to read/write a block larger than that, the
allocation of the DMA handle fails and an I/O error is reported.
Changes to v6 are:
- Fix build errors with CONFIG_SWIOTLB=n
Please review.
Thanks,
Joerg
Joerg Roedel (5):
swiotlb: Introduce swiotlb_max_mapping_size()
swiotlb: Add is_swiotlb_active() function
dma: Introduce dma_max_mapping_size()
virtio: Introduce virtio_max_dma_size()
virtio-blk: Consider virtio_max_dma_size() for maximum segment size
Documentation/DMA-API.txt | 8 ++++++++
drivers/block/virtio_blk.c | 10 ++++++----
drivers/v...
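For context, the virtio-blk side of this series amounts to capping the request queue's maximum segment size at what the DMA layer can map in one go. A hedged sketch of that idea (illustrative, not the exact merged hunk; virtio_max_dma_size() and blk_queue_max_segment_size() are the APIs involved):

/* Sketch: in virtblk_probe(), cap segment size at one DMA mapping. */
static void virtblk_cap_seg_size(struct virtio_device *vdev,
				 struct request_queue *q)
{
	u32 max_size = virtio_max_dma_size(vdev);

	/* With SWIOTLB active this evaluates to 256 KiB, so virtio-blk
	 * never submits a segment the bounce buffer cannot map. */
	blk_queue_max_segment_size(q, max_size);
}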
2020 Sep 15 · 0 replies · [PATCH 14/18] dma-mapping: remove dma_cache_sync
...struct scatterlist *sg, int nents,
enum dma_data_direction dir);
- void (*cache_sync)(struct device *dev, void *vaddr, size_t size,
- enum dma_data_direction direction);
int (*dma_supported)(struct device *dev, u64 mask);
u64 (*get_required_mask)(struct device *dev);
size_t (*max_mapping_size)(struct device *dev);
@@ -254,8 +252,6 @@ void *dmam_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t gfp, unsigned long attrs);
void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle);
-void dma_cache_sync(struct device *dev, vo...
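The removal means drivers that relied on dma_cache_sync() for non-coherent memory move to the streaming sync helpers instead. A hedged sketch of the replacement pattern (hypothetical driver code, not taken from this patch; 'handle' and 'size' are assumed to come from an earlier dma_map_single()):

/* Explicit ownership transfers on a streaming mapping. */
static void my_dev_do_dma(struct device *dev, dma_addr_t handle, size_t size)
{
	/* CPU is done writing; make the data visible to the device. */
	dma_sync_single_for_device(dev, handle, size, DMA_BIDIRECTIONAL);

	/* ... kick the device, wait for completion ... */

	/* Reclaim ownership before the CPU reads what the device wrote. */
	dma_sync_single_for_cpu(dev, handle, size, DMA_BIDIRECTIONAL);
}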
2020 Apr 28 · 0 replies · [PATCH 5/5] virtio: Add bounce DMA ops
...ice *dev, dma_addr_t dev_addr,
> + size_t size, enum dma_data_direction dir, unsigned long attrs)
> +{
> + phys_addr_t addr = dev_addr + bounce_buf_paddr;
> +
> + _swiotlb_tbl_unmap_single(virtio_pool, dev, addr, size,
> + size, dir, attrs);
> +}
> +
> +size_t virtio_max_mapping_size(struct device *dev)
> +{
> + return VIRTIO_MAX_BOUNCE_SIZE;
> +}
> +
> +static const struct dma_map_ops virtio_dma_ops = {
> + .alloc = virtio_alloc_coherent,
> + .free = virtio_free_coherent,
> + .map_page = virtio_map_page,
> + .unmap_page = virtio_unmap_page,
>...
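For orientation (my paraphrase, not part of the quoted patch): a per-bus .max_mapping_size callback like virtio_max_mapping_size() above is reached through the generic helper, which in the 5.0-era dma-mapping code looked roughly like this (simplified; dma_is_direct() and get_dma_ops() are kernel-internal helpers from that period):

size_t dma_max_mapping_size(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	size_t size = SIZE_MAX;

	if (dma_is_direct(ops))
		size = dma_direct_max_mapping_size(dev);
	else if (ops && ops->max_mapping_size)
		size = ops->max_mapping_size(dev);	/* e.g. virtio_max_mapping_size() */

	return size;
}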
2020 Sep 14 · 20 replies · a saner API for allocating DMA addressable pages v2
Hi all,
this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages without incurring bounce buffering overhead can finally
be properly supported.
I'm still a
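A short sketch of the new allocator described in this cover letter; the signature below is my assumption based on what was eventually merged, so treat it as illustrative:

/* Allocate pages addressable by 'dev' without bounce buffering; the
 * memory may be non-coherent, so sync explicitly around device DMA. */
static int my_setup_buffer(struct device *dev, size_t size)
{
	struct page *page;
	dma_addr_t handle;

	page = dma_alloc_pages(dev, size, &handle, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* CPU access via page_address(page); bracket device DMA with
	 * dma_sync_single_for_device()/dma_sync_single_for_cpu(). */

	dma_free_pages(dev, size, page, handle, DMA_BIDIRECTIONAL);
	return 0;
}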
2020 Sep 15 · 32 replies · a saner API for allocating DMA addressable pages v3
Hi all,
this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages without incurring bounce buffering overhead can finally
be properly supported.
As a follow up I
2020 Aug 19 · 39 replies · a saner API for allocating DMA addressable pages
Hi all,
this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages without incurring bounce buffering overhead can finally
be properly supported.
I'm still a