Displaying 8 results from an estimated 8 matches for "dma_get_sgtable_attrs".
2016 Jun 02 · 0 · [RFC v3 02/45] dma-mapping: Use unsigned long for dma_attrs
...c, h, s) dma_mmap_attrs(d, v, c, h, s, NULL)
+#define dma_mmap_coherent(d, v, c, h, s) dma_mmap_attrs(d, v, c, h, s, 0)
int
dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
@@ -338,7 +364,8 @@ dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
static inline int
dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr,
- dma_addr_t dma_addr, size_t size, struct dma_attrs *attrs)
+ dma_addr_t dma_addr, size_t size,
+ unsigned long attrs)
{
struct dma_map_ops *ops = get_dma_ops(dev);
BUG_ON(!ops);
@@ -348,7 +375,7 @@ dma_get_sgtable...
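For orientation, a minimal caller-side sketch of the convention this hunk introduces: attrs becomes a plain unsigned long bitmask of DMA_ATTR_* flags (0 meaning "no attributes", as in the dma_mmap_coherent hunk above) instead of a struct dma_attrs pointer. The wrapper function below is made up for illustration; only dma_get_sgtable_attrs() and the DMA_ATTR_* bits are part of the real API.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical helper: export a DMA buffer as a scatterlist. */
static int export_sgt(struct device *dev, struct sg_table *sgt,
		      void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	/* Pass 0 for no attributes, or the same DMA_ATTR_* bits that
	 * were used when the buffer was allocated. */
	return dma_get_sgtable_attrs(dev, sgt, cpu_addr, dma_addr, size, 0);
}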
2020 Sep 15 · 0 · [PATCH 14/18] dma-mapping: remove dma_cache_sync
...trs(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t gfp, unsigned long attrs);
void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle);
-void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
- enum dma_data_direction dir);
int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs);
@@ -339,10 +335,6 @@ static inline void dmam_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
}
-static inline void dma_cache_sync(struct...
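For reference, what drivers use in place of the removed dma_cache_sync() are the explicit ownership-transfer helpers that remain in the API. A minimal sketch, assuming vaddr and dma_handle come from an earlier non-coherent allocation (the function name is made up):

#include <linux/dma-mapping.h>
#include <linux/string.h>

/* CPU fills the buffer, then hands ownership to the device before the
 * transfer starts; the opposite direction would use
 * dma_sync_single_for_cpu() with DMA_FROM_DEVICE after completion. */
static void hand_buffer_to_device(struct device *dev, void *vaddr,
				  dma_addr_t dma_handle, size_t len)
{
	memset(vaddr, 0, len);	/* CPU writes through the kernel mapping */
	dma_sync_single_for_device(dev, dma_handle, len, DMA_TO_DEVICE);
	/* ... now start the DMA engine using dma_handle ... */
}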
2016 Jun 02 · 0 · [RFC v3 19/45] [media] dma-mapping: Use unsigned long for dma_attrs
...>cookie,
- buf->dma_addr, buf->size, &buf->attrs);
+ buf->dma_addr, buf->size, buf->attrs);
if (ret) {
pr_err("Remapping memory failed, error: %d\n", ret);
@@ -377,7 +377,7 @@ static struct sg_table *vb2_dc_get_base_sgt(struct vb2_dc_buf *buf)
}
ret = dma_get_sgtable_attrs(buf->dev, sgt, buf->cookie, buf->dma_addr,
- buf->size, &buf->attrs);
+ buf->size, buf->attrs);
if (ret < 0) {
dev_err(buf->dev, "failed to get scatterlist from DMA API\n");
kfree(sgt);
@@ -426,15 +426,12 @@ static void vb2_dc_put_userptr(void *buf...
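The driver-side half of the same conversion is the storage change behind this excerpt: the buffer structure keeps an unsigned long and ORs DMA_ATTR_* flags into it, instead of carrying a struct dma_attrs populated through helpers such as dma_set_attr(). A rough sketch with hypothetical names (vb2 itself keeps the field in struct vb2_dc_buf):

#include <linux/dma-mapping.h>

/* Before the series (kept as a comment, since struct dma_attrs is gone
 * afterwards):
 *
 *	struct my_buf { struct dma_attrs attrs; };
 *	init_dma_attrs(&buf->attrs);
 *	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &buf->attrs);
 *
 * After: a plain bitmask. */
struct my_buf {
	unsigned long attrs;
};

static void my_buf_set_attrs(struct my_buf *buf)
{
	buf->attrs |= DMA_ATTR_NO_KERNEL_MAPPING;
}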
2016 Jun 02 · 52 · [RFC v3 00/45] dma-mapping: Use unsigned long for dma_attrs
Hi,
This is the third approach (complete this time) for replacing struct
dma_attrs with unsigned long.
The main patch (2/45) doing the change is split into many subpatches
for easier review (3-43). They should be squashed together when
applying.
*Important:* Patchset is *only* build tested on allyesconfigs: ARM,
ARM64, i386, x86_64 and powerpc. Please provide reviews and tests
for other
2020 Aug 19 · 0 · [PATCH 19/28] dma-mapping: replace DMA_ATTR_NON_CONSISTENT with dma_{alloc, free}_pages
..._t gfp);
+void dma_free_pages(struct device *dev, size_t size, void *vaddr,
+ dma_addr_t dma_handle, enum dma_data_direction dir);
+/* dma_cache_sync is deprecated: don't use in new code */
void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir);
int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt,
@@ -339,6 +344,15 @@ static inline void dmam_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
}
+static inline void *dma_alloc_pages(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, enum dma_data_direction...
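A minimal usage sketch of the pair being added here, following the prototypes visible in this hunk (the signatures may differ in later revisions of the series, so treat it as illustrative; the surrounding function and transfer direction are made up):

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int rx_one_buffer(struct device *dev, size_t size)
{
	dma_addr_t dma_handle;
	void *vaddr;

	vaddr = dma_alloc_pages(dev, size, &dma_handle, DMA_FROM_DEVICE,
				GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* ... program the device with dma_handle, wait for the transfer,
	 * then read the received data through vaddr ... */

	dma_free_pages(dev, size, vaddr, dma_handle, DMA_FROM_DEVICE);
	return 0;
}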
2020 Sep 14 · 20 · a saner API for allocating DMA addressable pages v2
Hi all,
this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages without incurring bounce buffering overhead can finally
be properly supported.
I'm still a
2020 Sep 15 · 32 · a saner API for allocating DMA addressable pages v3
Hi all,
this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages without incurring bounce buffering overhead can finally
be properly supported.
As a follow up I
2020 Aug 19 · 39 · a saner API for allocating DMA addressable pages
Hi all,
this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages without incurring bounce buffering overhead can finally
be properly supported.
I'm still a