Displaying 3 results from an estimated 3 matches for "cpu_needs_post_dma_flush".
2020 Aug 19 · 0 · [PATCH 08/28] MIPS: make dma_sync_*_for_cpu a little less overzealous
...paddr, size_t size,
		enum dma_data_direction dir)
{
-	dma_sync_phys(paddr, size, dir);
+	dma_sync_phys(paddr, size, dir, true);
}

#ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU
@@ -119,16 +133,14 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
		enum dma_data_direction dir)
{
	if (cpu_needs_post_dma_flush())
-		dma_sync_phys(paddr, size, dir);
+		dma_sync_phys(paddr, size, dir, false);
}
#endif

void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size,
		enum dma_data_direction direction)
{
-	BUG_ON(direction == DMA_NONE);
-
-	dma_sync_virt(vaddr, size, direction);
+	dma_sync_virt...
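
The hunks above thread a new bool through dma_sync_phys() so that the
for-CPU path no longer performs the full maintenance that is only needed
when handing a buffer to the device. A minimal user-space C model of that
split; the for_device parameter name and the per-direction cache
operations are illustrative assumptions, not the patch's exact MIPS code:

#include <stdbool.h>
#include <stdio.h>

enum dma_data_direction { DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL };

/* Stand-ins for the architecture cache ops. */
static void cache_writeback(void)  { puts("writeback");  }
static void cache_invalidate(void) { puts("invalidate"); }

/*
 * One shared helper, parameterized on the transfer point: before the
 * device touches the buffer we may need a writeback of dirty CPU lines;
 * after the device is done we only need an invalidate so the CPU sees
 * the fresh data.
 */
static void dma_sync(enum dma_data_direction dir, bool for_device)
{
	switch (dir) {
	case DMA_TO_DEVICE:
		if (for_device)
			cache_writeback();   /* flush CPU writes out */
		break;
	case DMA_FROM_DEVICE:
		if (!for_device)
			cache_invalidate();  /* discard stale CPU lines */
		break;
	case DMA_BIDIRECTIONAL:
		if (for_device)
			cache_writeback();
		else
			cache_invalidate();
		break;
	}
}

int main(void)
{
	dma_sync(DMA_BIDIRECTIONAL, true);   /* like arch_sync_dma_for_device */
	dma_sync(DMA_BIDIRECTIONAL, false);  /* like arch_sync_dma_for_cpu */
	return 0;
}

The point is that both entry points share one helper and differ only in
the boolean, which is exactly what the +true/+false hunks above do.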
2020 Aug 19 · 39 · a saner API for allocating DMA addressable pages
Hi all,
this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages, without incurring bounce buffering, can
finally be properly supported.
I'm still a
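
For a concrete picture of the proposed API, a hedged sketch of a driver
call site using dma_alloc_pages(); the signature follows this series,
while the mydev_setup_buf() name, the PAGE_SIZE length, and the
DMA_BIDIRECTIONAL direction are placeholders:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/string.h>

/* Illustrative only: allocate a page the device can DMA to, even on
 * non-coherent platforms, without bounce buffering. */
static int mydev_setup_buf(struct device *dev)
{
	struct page *page;
	dma_addr_t dma;
	void *vaddr;

	page = dma_alloc_pages(dev, PAGE_SIZE, &dma, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;
	vaddr = page_address(page);	/* CPU view of the buffer */

	memset(vaddr, 0, PAGE_SIZE);	/* CPU initializes the buffer */

	/* The memory may be non-coherent, so ownership is transferred
	 * explicitly around each transfer. */
	dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_BIDIRECTIONAL);
	/* ... device performs DMA ... */
	dma_sync_single_for_cpu(dev, dma, PAGE_SIZE, DMA_BIDIRECTIONAL);

	dma_free_pages(dev, PAGE_SIZE, page, dma, DMA_BIDIRECTIONAL);
	return 0;
}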
2016 Jun 02 · 52 · [RFC v3 00/45] dma-mapping: Use unsigned long for dma_attrs
Hi,
This is the third approach (complete this time) to replacing struct
dma_attrs with unsigned long.
The main patch (2/45) doing the change is split into many subpatches
(3-43) for easier review. They should be squashed together when
applying.
*Important:* The patchset is *only* build-tested on allyesconfigs: ARM,
ARM64, i386, x86_64 and powerpc. Please provide reviews and tests
for other
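
To make the shape of the change concrete, a sketch of a call site before
and after; the write-combine attribute and the helper names are
placeholders, and the two functions target kernels before and after the
series, respectively:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Before: attributes travel in a struct dma_attrs built up via helpers
 * (on old kernels, from <linux/dma-attrs.h>). */
static void *alloc_wc_old(struct device *dev, size_t size, dma_addr_t *dma)
{
	DEFINE_DMA_ATTRS(attrs);

	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
	return dma_alloc_attrs(dev, size, dma, GFP_KERNEL, &attrs);
}

/* After: attributes are a plain bitmask in an unsigned long, so the
 * call site shrinks to a single line. */
static void *alloc_wc_new(struct device *dev, size_t size, dma_addr_t *dma)
{
	return dma_alloc_attrs(dev, size, dma, GFP_KERNEL,
			       DMA_ATTR_WRITE_COMBINE);
}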