Displaying 9 results from an estimated 9 matches for "arch_sync_dma_for_cpu".
2023 May 19
1
[PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront is enabling
On Fri, May 19, 2023 at 01:49:46PM +0100, Andrew Cooper wrote:
> > The alternative would be to finally merge swiotlb-xen into swiotlb, in
> > which case we might be able to do this later. Let me see what I can
> > do there.
>
> If that is an option, it would be great to reduce the special-casing.
I think it's doable, and I've been wanting it for a while. I just
2020 Sep 01
2
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
On Wed, Aug 19, 2020 at 08:55:49AM +0200, Christoph Hellwig wrote:
> Use the proper modern API to transfer cache ownership for incoherent DMA.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
> drivers/net/ethernet/seeq/sgiseeq.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/seeq/sgiseeq.c
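[Editorial note: the conversion under review swaps the legacy, virtual-address-based
dma_cache_sync() call for the handle-based streaming sync. A minimal before/after
sketch of that pattern, using hypothetical desc/desc_dma names rather than the
literal sgiseeq hunk:

	/* Before: legacy API, takes a kernel virtual address */
	dma_cache_sync(dev, desc, sizeof(*desc), DMA_TO_DEVICE);

	/* After: streaming API, takes the dma_addr_t mapping the same buffer */
	dma_sync_single_for_device(dev, desc_dma, sizeof(*desc), DMA_TO_DEVICE);
]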
2020 Sep 15
0
[PATCH 14/18] dma-mapping: remove dma_cache_sync
...nc,
.mmap = dma_common_mmap,
.get_sgtable = dma_common_get_sgtable,
};
diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
index 97a14adbafc99c..f34ad1f09799f1 100644
--- a/arch/mips/mm/dma-noncoherent.c
+++ b/arch/mips/mm/dma-noncoherent.c
@@ -137,12 +137,6 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
}
#endif
-void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size,
- enum dma_data_direction direction)
-{
- dma_sync_virt_for_device(vaddr, size, direction);
-}
-
#ifdef CONFIG_DMA_PERDEV_COHERENT
void arch_setup_dma_ops(struct device *dev, u64...
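[Editorial note: with arch_dma_cache_sync() removed, the remaining per-arch cache
maintenance on non-coherent platforms is the arch_sync_dma_for_{device,cpu} pair
that the streaming dma_sync_single_* helpers call into. A rough sketch of that
shape, illustrative only and not the MIPS implementation (real code does per-page
cache operations):

	void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
				      enum dma_data_direction dir)
	{
		/* write back and/or invalidate CPU caches before the device
		 * accesses [paddr, paddr + size) */
	}

	void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
				   enum dma_data_direction dir)
	{
		/* invalidate (possibly speculatively filled) cache lines after
		 * the device has written to [paddr, paddr + size) */
	}
]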
2020 Sep 01
0
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...ct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
> > }
>
> this breaks ethernet on IP22 completely, but I haven't figured out why, yet.
the problem is that dma_sync_single_for_cpu() doesn't flush anything
for IP22, because it only flushes for CPUs which do speculation. So
either MIPS arch_sync_dma_for_cpu() should always flush or sgiseeq
needs to use a different sync function when it wants to re-read descriptors
from memory.
Thomas.
--
Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
good idea. [ RFC1925, 2.3 ]
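[Editorial note: the pattern discussed in the thread above is the standard CPU/device
ownership hand-off around a descriptor the device may rewrite. A hedged, simplified
sketch with hypothetical desc/desc_dma names and a hypothetical ownership-check helper:

	/* Re-acquire CPU ownership before looking at a descriptor the device
	 * may have updated.  Whether this actually invalidates caches is up to
	 * arch_sync_dma_for_cpu(); on CPUs that do not speculate it can be a
	 * no-op, which is the crux of the IP22 discussion above. */
	dma_sync_single_for_cpu(dev, desc_dma, sizeof(*desc), DMA_FROM_DEVICE);

	if (descriptor_still_owned_by_device(desc))	/* hypothetical helper */
		return;

	/* ... consume the completed descriptor ... */

	/* Return ownership to the device for reuse */
	dma_sync_single_for_device(dev, desc_dma, sizeof(*desc), DMA_TO_DEVICE);
]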
2020 Aug 19
0
[PATCH 08/28] MIPS: make dma_sync_*_for_cpu a little less overzealous
...e void dma_sync_phys(phys_addr_t paddr, size_t size,
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
- dma_sync_phys(paddr, size, dir);
+ dma_sync_phys(paddr, size, dir, true);
}
#ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU
@@ -119,16 +133,14 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
if (cpu_needs_post_dma_flush())
- dma_sync_phys(paddr, size, dir);
+ dma_sync_phys(paddr, size, dir, false);
}
#endif
void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction directi...
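[Editorial note: the hunks above thread a "for_device" flag through dma_sync_phys()
so the post-DMA (for_cpu) path stops writing dirty lines back over freshly DMA'd
data. A condensed sketch of the intended direction handling, assuming hypothetical
cache_wback()/cache_inv()/cache_wback_inv() helpers rather than the actual MIPS
primitives:

	static void dma_sync_virt(void *addr, size_t size,
				  enum dma_data_direction dir, bool for_device)
	{
		switch (dir) {
		case DMA_TO_DEVICE:
			/* CPU filled the buffer: push dirty lines out before DMA */
			cache_wback(addr, size);
			break;
		case DMA_FROM_DEVICE:
			/* Discard stale or speculatively fetched lines; never
			 * write back, so CPU data cannot clobber device data */
			cache_inv(addr, size);
			break;
		case DMA_BIDIRECTIONAL:
			if (for_device)
				cache_wback_inv(addr, size);
			else
				cache_inv(addr, size);
			break;
		default:
			break;
		}
	}
]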
2020 Sep 01
3
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...NAL);
> > > }
> >
> > this breaks ethernet on IP22 completely, but I haven't figured out why, yet.
>
> the problem is that dma_sync_single_for_cpu() doesn't flush anything
> for IP22, because it only flushes for CPUs which do speculation. So
> either MIPS arch_sync_dma_for_cpu() should always flush or sgiseeq
> needs to use a different sync function when it wants to re-read descriptors
> from memory.
Well, if IP22 doesn't speculate (which I'm pretty sure is the case),
dma_sync_single_for_cpu should indeed be a no-op. But then there
also shouldn't be...
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all,
this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages can finally be properly supported without
incurring bounce buffering.
I'm still a
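[Editorial note: the API this cover letter introduces hands back real pages plus a
dma_addr_t and leaves cache ownership transfers to the caller via the streaming sync
helpers. A hedged usage sketch; names and error handling are simplified, check the
series for the final signatures:

	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>

	static struct page *ring_page;
	static dma_addr_t ring_dma;
	static void *ring;

	static int ring_alloc(struct device *dev, size_t size)
	{
		/* Non-coherent allocation: caller manages cache ownership */
		ring_page = dma_alloc_pages(dev, size, &ring_dma,
					    DMA_BIDIRECTIONAL, GFP_KERNEL);
		if (!ring_page)
			return -ENOMEM;
		ring = page_address(ring_page);
		return 0;
	}

	static void ring_poll(struct device *dev, size_t size)
	{
		/* Device may have written the ring: take CPU ownership first */
		dma_sync_single_for_cpu(dev, ring_dma, size, DMA_BIDIRECTIONAL);
		/* ... read/update descriptors via 'ring' ... */
		/* Hand the ring back before the device uses it again */
		dma_sync_single_for_device(dev, ring_dma, size, DMA_BIDIRECTIONAL);
	}

	static void ring_free(struct device *dev, size_t size)
	{
		dma_free_pages(dev, size, ring_page, ring_dma, DMA_BIDIRECTIONAL);
	}
]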
2020 Sep 14
20
a saner API for allocating DMA addressable pages v2
Hi all,
this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages can finally be properly supported without
incurring bounce buffering.
I'm still a
2020 Sep 15
32
a saner API for allocating DMA addressable pages v3
Hi all,
this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs
with a separate new dma_alloc_pages API, which is available on all
platforms. In addition to cleaning up the convoluted code path, this
ensures that other drivers that have asked for better support for
non-coherent DMA to pages can finally be properly supported without
incurring bounce buffering.
As a follow up I