search for: dma_sync_single_for_cpu

Displaying 20 results from an estimated 29 matches for "dma_sync_single_for_cpu".

2020 Sep 01
2
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...struct sgiseeq_private {
>
> static inline void dma_sync_desc_cpu(struct net_device *dev, void *addr)
> {
> -	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
> -			DMA_FROM_DEVICE);
> +	struct sgiseeq_private *sp = netdev_priv(dev);
> +
> +	dma_sync_single_for_cpu(dev->dev.parent, VIRT_TO_DMA(sp, addr),
> +		sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
> }
>
> static inline void dma_sync_desc_dev(struct net_device *dev, void *addr)
> {
> -	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
> -...
2020 Sep 01
3
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
..._sync_desc_cpu(struct net_device *dev, void *addr)
> > > {
> > > -	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
> > > -			DMA_FROM_DEVICE);
> > > +	struct sgiseeq_private *sp = netdev_priv(dev);
> > > +
> > > +	dma_sync_single_for_cpu(dev->dev.parent, VIRT_TO_DMA(sp, addr),
> > > +		sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
> > > }
> > >
> > > static inline void dma_sync_desc_dev(struct net_device *dev, void *addr)
> > > {
> > > -	dma_cache_sync(dev->dev...
2020 Sep 01
0
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...;
>
> > static inline void dma_sync_desc_cpu(struct net_device *dev, void *addr)
> > {
> > -	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
> > -			DMA_FROM_DEVICE);
> > +	struct sgiseeq_private *sp = netdev_priv(dev);
> > +
> > +	dma_sync_single_for_cpu(dev->dev.parent, VIRT_TO_DMA(sp, addr),
> > +		sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
> > }
> >
> > static inline void dma_sync_desc_dev(struct net_device *dev, void *addr)
> > {
> > -	dma_cache_sync(dev->dev.parent, addr, sizeof(struct s...
2020 Sep 15
0
[PATCH 08/18] dma-mapping: add a new dma_alloc_noncoherent API
Add a new API to allocate and free memory that is guaranteed to be addressable by a device, but which potentially is not cache coherent for DMA. To transfer ownership to and from the device, the existing streaming DMA API calls dma_sync_single_for_device and dma_sync_single_for_cpu must be used. For now the new calls are implemented on top of dma_alloc_attrs just like the old-noncoherent API, but once all drivers are switched to the new API it will be replaced with a better working implementation that is available on all architectures. Signed-off-by: Christoph Hellwig <h...
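A minimal usage sketch of the flow this commit message describes, assuming the dma_alloc_noncoherent()/dma_free_noncoherent() signatures the series proposes; the function name, buffer size, and DMA direction below are illustrative, not taken from any driver in these results:

#include <linux/dma-mapping.h>

/* Allocate device-addressable but possibly non-coherent memory, then move
 * buffer ownership between device and CPU with the streaming sync calls. */
static int noncoherent_rx_example(struct device *dev)
{
	dma_addr_t dma_handle;
	size_t size = 4096;		/* illustrative buffer size */
	void *buf;

	buf = dma_alloc_noncoherent(dev, size, &dma_handle,
				    DMA_FROM_DEVICE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... program the device to DMA into dma_handle ... */

	/* Device finished writing: transfer ownership to the CPU before
	 * reading the buffer. */
	dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);
	/* ... CPU may now read buf ... */

	/* Hand the buffer back to the device for the next transfer. */
	dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);

	dma_free_noncoherent(dev, size, buf, dma_handle, DMA_FROM_DEVICE);
	return 0;
}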
2020 Aug 19
0
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
...et/seeq/sgiseeq.c
@@ -112,14 +112,18 @@ struct sgiseeq_private {

static inline void dma_sync_desc_cpu(struct net_device *dev, void *addr)
{
-	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
-			DMA_FROM_DEVICE);
+	struct sgiseeq_private *sp = netdev_priv(dev);
+
+	dma_sync_single_for_cpu(dev->dev.parent, VIRT_TO_DMA(sp, addr),
+		sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
}

static inline void dma_sync_desc_dev(struct net_device *dev, void *addr)
{
-	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
-			DMA_TO_DEVICE);
+	struct sgiseeq_...
2020 Sep 01
0
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
On Tue, Sep 01, 2020 at 07:16:27PM +0200, Christoph Hellwig wrote:
> Well, if IP22 doesn't speculate (which I'm pretty sure is the case),
> dma_sync_single_for_cpu should indeed be a no-op. But then there
> also shouldn't be anything in the cache, as the previous
> dma_sync_single_for_device should have invalidated it. So it seems like
> we are missing one (or more) ownership transfers to the device. I'll
> try to look at the owner...
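The ownership rule being debated, as a sketch rather than the actual sgiseeq code (the descriptor type and helper name here are hypothetical): every transfer of a descriptor to the CPU needs a matching transfer back before the device may touch it again, and a missing transfer back is the kind of bug suspected above.

#include <linux/types.h>
#include <linux/dma-mapping.h>

struct example_desc {			/* hypothetical descriptor layout */
	u32 status;
	u32 buf;
};

static void cpu_touch_descriptor(struct device *dev, dma_addr_t desc_dma,
				 struct example_desc *desc)
{
	/* Claim the descriptor for the CPU before reading what the device wrote. */
	dma_sync_single_for_cpu(dev, desc_dma, sizeof(*desc), DMA_BIDIRECTIONAL);

	/* ... CPU owns the descriptor here: check status, rewrite fields ... */

	/* Hand it back before the device is told to use it again; leaving this
	 * out is a missing ownership transfer to the device. */
	dma_sync_single_for_device(dev, desc_dma, sizeof(*desc), DMA_BIDIRECTIONAL);
}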
2014 May 19
0
[PATCH 2/4] drm/ttm: introduce dma cache sync helpers
..._device(dev, ttm_dma->dma_address[i],
+				PAGE_SIZE, DMA_TO_DEVICE);
+	}
+}
+EXPORT_SYMBOL(ttm_dma_tt_cache_sync_for_device);
+
+void ttm_dma_tt_cache_sync_for_cpu(struct ttm_dma_tt *ttm_dma,
+				   struct device *dev)
+{
+	int i;
+
+	for (i = 0; i < ttm_dma->ttm.num_pages; i++) {
+		dma_sync_single_for_cpu(dev, ttm_dma->dma_address[i],
+					PAGE_SIZE, DMA_FROM_DEVICE);
+	}
+}
+EXPORT_SYMBOL(ttm_dma_tt_cache_sync_for_cpu);
+
void ttm_tt_unbind(struct ttm_tt *ttm)
{
	int ret;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index a5183da3ef92..52fb709568fc 100644
--...
2020 Sep 15
0
[PATCH 12/18] sgiseeq: convert to dma_alloc_noncoherent
...et/seeq/sgiseeq.c
@@ -112,14 +112,18 @@ struct sgiseeq_private {

static inline void dma_sync_desc_cpu(struct net_device *dev, void *addr)
{
-	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
-			DMA_FROM_DEVICE);
+	struct sgiseeq_private *sp = netdev_priv(dev);
+
+	dma_sync_single_for_cpu(dev->dev.parent, VIRT_TO_DMA(sp, addr),
+		sizeof(struct sgiseeq_rx_desc), DMA_BIDIRECTIONAL);
}

static inline void dma_sync_desc_dev(struct net_device *dev, void *addr)
{
-	dma_cache_sync(dev->dev.parent, addr, sizeof(struct sgiseeq_rx_desc),
-			DMA_TO_DEVICE);
+	struct sgiseeq_...
2014 Jun 24
0
[PATCH v2 2/3] drm/ttm: introduce dma cache sync helpers
...v, ttm_dma->dma_address[i],
+				PAGE_SIZE, DMA_TO_DEVICE);
+	}
+}
+EXPORT_SYMBOL(ttm_dma_tt_cache_sync_for_device);
+
+void ttm_dma_tt_cache_sync_for_cpu(struct ttm_dma_tt *ttm_dma,
+				   struct device *dev)
+{
+	unsigned long i;
+
+	for (i = 0; i < ttm_dma->ttm.num_pages; i++) {
+		dma_sync_single_for_cpu(dev, ttm_dma->dma_address[i],
+					PAGE_SIZE, DMA_FROM_DEVICE);
+	}
+}
+EXPORT_SYMBOL(ttm_dma_tt_cache_sync_for_cpu);
+
void ttm_tt_unbind(struct ttm_tt *ttm)
{
	int ret;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index a5183da3ef92..52fb709568fc 100644
--...
2020 Sep 02
1
[PATCH 22/28] sgiseeq: convert from dma_cache_sync to dma_sync_single_for_device
On Tue, Sep 01, 2020 at 07:38:10PM +0200, Thomas Bogendoerfer wrote:
> On Tue, Sep 01, 2020 at 07:16:27PM +0200, Christoph Hellwig wrote:
> > Well, if IP22 doesn't speculate (which I'm pretty sure is the case),
> > dma_sync_single_for_cpu should indeed be a no-op. But then there
> > also shouldn't be anything in the cache, as the previous
> > dma_sync_single_for_device should have invalidated it. So it seems like
> > we are missing one (or more) ownership transfers to the device. I'll
> > try to l...
2014 Jul 08
0
[PATCH v4 4/6] drm/nouveau: synchronize BOs when required
...drm->dev);
+	struct ttm_dma_tt *ttm_dma = (struct ttm_dma_tt *)nvbo->bo.ttm;
+	int i;
+
+	if (!ttm_dma)
+		return;
+
+	if (nv_device_is_cpu_coherent(device) || nvbo->force_coherent)
+		return;
+
+	if (nv_device_is_pci(device)) {
+		for (i = 0; i < ttm_dma->ttm.num_pages; i++)
+			pci_dma_sync_single_for_cpu(device->pdev,
+				ttm_dma->dma_address[i], PAGE_SIZE,
+				PCI_DMA_FROMDEVICE);
+	} else {
+		for (i = 0; i < ttm_dma->ttm.num_pages; i++)
+			dma_sync_single_for_cpu(nv_device_base(device),
+				ttm_dma->dma_address[i], PAGE_SIZE,
+				DMA_FROM_DEVICE);
+	}
+}
+
int nouveau_bo_va...
2020 Sep 15
0
[PATCH 10/18] hal2: convert to dma_alloc_noncoherent
...urn 0;
}
@@ -667,7 +661,9 @@ static void hal2_capture_transfer(struct snd_pcm_substream *substream,
	struct snd_hal2 *hal2 = snd_pcm_substream_chip(substream);
	unsigned char *buf = hal2->adc.buffer + rec->hw_data;

-	dma_cache_sync(hal2->card->dev, buf, bytes, DMA_FROM_DEVICE);
+	dma_sync_single_for_cpu(hal2->card->dev,
+		hal2->adc.buffer_dma + rec->hw_data, bytes,
+		DMA_FROM_DEVICE);
	memcpy(substream->runtime->dma_area + rec->sw_data, buf, bytes);
}
--
2.28.0
2020 Sep 14
2
[PATCH 11/17] sgiseeq: convert to dma_alloc_noncoherent
...a_sync_dev(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{
+	dma_sync_single_for_device(ndev->dev.parent,
+			virt_to_dma(netdev_priv(ndev), addr), len,
+			DMA_BIDIRECTIONAL);
+}
+
+static inline void dma_sync_cpu(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{
+	dma_sync_single_for_cpu(ndev->dev.parent,
+			virt_to_dma(netdev_priv(ndev), addr), len,
+			DMA_BIDIRECTIONAL);
+}
+#else
+static inline void dma_sync_dev(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{
+}
+static inline void dma_sync_cpu(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{...
2014 Jul 10
2
[PATCH v4 4/6] drm/nouveau: synchronize BOs when required
...m_dma_tt *)nvbo->bo.ttm;
> +	int i;
> +
> +	if (!ttm_dma)
> +		return;
> +
> +	if (nv_device_is_cpu_coherent(device) || nvbo->force_coherent)
> +		return;
> +
> +	if (nv_device_is_pci(device)) {
> +		for (i = 0; i < ttm_dma->ttm.num_pages; i++)
> +			pci_dma_sync_single_for_cpu(device->pdev,
> +				ttm_dma->dma_address[i], PAGE_SIZE,
> +				PCI_DMA_FROMDEVICE);
> +	} else {
> +		for (i = 0; i < ttm_dma->ttm.num_pages; i++)
> +			dma_sync_single_for_cpu(nv_device_base(device),
> +				ttm_dma->dma_address[i], PAGE_SIZE,
> +				DMA_FROM_DE...
2014 Jul 08
8
[PATCH v4 0/6] drm: nouveau: memory coherency on ARM
Another revision of this patchset critical for GK20A to operate. Previous attempts were exclusively using either TTM's regular page allocator or the DMA API one. Both have their advantages and drawbacks: the page allocator is fast but requires explicit synchronization on non-coherent architectures, whereas the DMA allocator always returns coherent memory, but is also slower, creates a
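The trade-off described in the excerpt above, sketched with generic DMA-API calls (the helper names and the DMA_TO_DEVICE direction are illustrative assumptions, not TTM or Nouveau code):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* (a) Page-allocator backing: allocation is cheap, but on a non-coherent
 *     architecture the driver must transfer ownership explicitly around each
 *     CPU access with dma_sync_single_for_cpu()/dma_sync_single_for_device(). */
static dma_addr_t map_page_for_device(struct device *dev, struct page *page)
{
	/* Caller must check dma_mapping_error() on the result and, on a
	 * non-coherent system, sync around later CPU accesses. */
	return dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
}

/* (b) DMA-API backing: always coherent, so no per-access syncs are needed,
 *     but allocation is slower and the memory is typically mapped uncached. */
static void *alloc_coherent_buffer(struct device *dev, dma_addr_t *handle)
{
	return dma_alloc_coherent(dev, PAGE_SIZE, handle, GFP_KERNEL);
}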
2014 Jun 24
4
[PATCH v2 0/3] drm/ttm: nouveau: memory coherency for ARM
For this v2 I have fixed the patches that are non-controversial (all Lucas' :)) and am resubmitting them in the hope that they will get merged. This will just leave the issue of Nouveau system-memory buffers mapping to be solved. This issue is quite complex, so let me summarize the situation and the data I have at hand. ARM caching is like a quantum world where Murphy's law constantly
2014 Oct 27
4
[PATCH v5 0/4] drm: nouveau: memory coherency on ARM
It has been a couple of months since v4 - apologies for this. v4 has not received many comments, but this version addresses them and makes a new attempt at pushing the critical bit for GK20A and Nouveau on ARM in general. As a reminder, this series addresses the memory coherency issue that we are seeing on ARM platforms. Contrary to x86 which invalidates the PCI caches whenever a write is made by
2009 Sep 10
1
[Bug 23830] New: nouvea modules on 2.6.31-rc6 failed
...mmon.h: At top level:
include/asm-generic/dma-mapping-common.h:98: warning: 'enum dma_data_direction' declared inside parameter list
include/asm-generic/dma-mapping-common.h:98: error: parameter 4 ('dir') has incomplete type
include/asm-generic/dma-mapping-common.h: In function 'dma_sync_single_for_cpu':
include/asm-generic/dma-mapping-common.h:103: error: dereferencing pointer to incomplete type
include/asm-generic/dma-mapping-common.h:104: error: dereferencing pointer to incomplete type
include/asm-generic/dma-mapping-common.h: At top level:
include/asm-generic/dma-mapping-common.h:111: war...
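For reference, the first two diagnostics can be reproduced in isolation: an enum that is first mentioned inside a parameter list is an incomplete type in C. The file and function names below are made up for illustration and are not the kernel header in question.

/* repro.c - compile with: gcc -c repro.c */

/* No definition of 'enum dma_data_direction' is in scope, so the enum is
 * first declared inside the parameter list and stays incomplete: */
void my_sync_single_for_cpu(void *dev, unsigned long addr, unsigned long size,
			    enum dma_data_direction dir);

/* gcc: warning: 'enum dma_data_direction' declared inside parameter list
 * gcc: error: parameter 4 ('dir') has incomplete type
 *
 * The fix is to make the enum definition visible (include the header that
 * defines it) before this prototype is parsed. */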
2020 Aug 19
0
[PATCH 23/28] lib82596: convert from dma_cache_sync to dma_sync_single_for_device
...a_sync_dev(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{
+	dma_sync_single_for_device(ndev->dev.parent,
+			virt_to_dma(netdev_priv(ndev), addr), len,
+			DMA_BIDIRECTIONAL);
+}
+
+static inline void dma_sync_cpu(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{
+	dma_sync_single_for_cpu(ndev->dev.parent,
+			virt_to_dma(netdev_priv(ndev), addr), len,
+			DMA_BIDIRECTIONAL);
+}
+#else
+static inline void dma_sync_dev(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{
+}
+static inline void dma_sync_cpu(struct net_device *ndev, volatile void *addr,
+		size_t len)
+{...
2009 Sep 02
20
Re: i686 vs i586 glibc segfault issue on 64-bit AMD Xen paravirt guests
On 09/02/09 01:10, Mitchell E Berger wrote:
> I apologize for writing to you directly instead of through an officially
> supported channel.

No problem.

> I've filed a bug against glibc in Redhat's Bugzilla
> for an issue that only seems to surface on 64-bit Xen paravirt guests
> on AMD hosts. Filing this bug with the distro involved seemed to make
> sense,