search for: dma_alloc_attr

Displaying 20 results from an estimated 60 matches for "dma_alloc_attr".

2020 Sep 15
0
[PATCH 08/18] dma-mapping: add a new dma_alloc_noncoherent API
...is guaranteed to be addressable by a device, but which potentially is not cache coherent for DMA. To transfer ownership to and from the device, the existing streaming DMA API calls dma_sync_single_for_device and dma_sync_single_for_cpu must be used. For now the new calls are implemented on top of dma_alloc_attrs just like the old-noncoherent API, but once all drivers are switched to the new API it will be replaced with a better working implementation that is available on all architectures. Signed-off-by: Christoph Hellwig <hch at lst.de> --- Documentation/core-api/dma-api.rst | 75 ++++++++++++++--...
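As a reader's aid, here is a minimal sketch of the ownership model described in that commit message, assuming the dma_alloc_noncoherent()/dma_free_noncoherent() interface the patch adds; the device, buffer size and transfer setup are placeholders, not code from the patch:

    #include <linux/dma-mapping.h>
    #include <linux/string.h>

    static int example_noncoherent_xfer(struct device *dev)
    {
            const size_t size = 4096;
            dma_addr_t dma_handle;
            void *buf;

            /* Device-addressable memory that may not be cache coherent. */
            buf = dma_alloc_noncoherent(dev, size, &dma_handle,
                                        DMA_BIDIRECTIONAL, GFP_KERNEL);
            if (!buf)
                    return -ENOMEM;

            memset(buf, 0, size);
            /* Hand ownership to the device before starting the transfer. */
            dma_sync_single_for_device(dev, dma_handle, size, DMA_BIDIRECTIONAL);

            /* ... run the DMA and wait for completion ... */

            /* Take ownership back before the CPU touches the results. */
            dma_sync_single_for_cpu(dev, dma_handle, size, DMA_BIDIRECTIONAL);

            dma_free_noncoherent(dev, size, buf, dma_handle, DMA_BIDIRECTIONAL);
            return 0;
    }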
2020 Aug 19
1
[PATCH 19/28] dma-mapping: replace DMA_ATTR_NON_CONSISTENT with dma_{alloc, free}_pages
...kernel direct mapping" mean from the driver perspective? > > If you don't understand how cache line coherency works between a > processor and an I/O device, you should not be using this part of the > -API at all. > +API. > > :: > > void * > - dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle, > - gfp_t flag, unsigned long attrs) > + dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle, > + enum dma_data_direction dir, gfp_t gfp) > + > +This routin...
2020 Aug 20
2
[PATCH 05/28] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT
...asked back in time what the plan is for non-coherent >> allocations and it seemed like DMA_ATTR_NON_CONSISTENT and >> dma_sync_*() was supposed to be the right thing to go with. [2] The >> same thread also explains why dma_alloc_pages() isn't suitable for the >> users of dma_alloc_attrs() and DMA_ATTR_NON_CONSISTENT. > > AFAICS even back then Christoph was implying getting rid of NON_CONSISTENT > and *replacing* it with something streaming-API-based - i.e. this series - > not encouraging mixing the existing APIs. It doesn't seem impossible to > implement a r...
2016 Jun 02
0
[RFC v3 02/45] dma-mapping: Use unsigned long for dma_attrs
...ce *dev, struct sg_table *sgt, void *cpu_addr, return dma_common_get_sgtable(dev, sgt, cpu_addr, dma_addr, size); } -#define dma_get_sgtable(d, t, v, h, s) dma_get_sgtable_attrs(d, t, v, h, s, NULL) +#define dma_get_sgtable(d, t, v, h, s) dma_get_sgtable_attrs(d, t, v, h, s, 0) #ifndef arch_dma_alloc_attrs #define arch_dma_alloc_attrs(dev, flag) (true) @@ -356,7 +383,7 @@ dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr, static inline void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag, - struct dma_attrs *...
2015 Feb 17
1
[PATCH v3 4/6] instmem/gk20a: use DMA attributes
On Tue, Feb 17, 2015 at 5:48 PM, Alexandre Courbot <acourbot at nvidia.com> wrote: > instmem for GK20A is allocated using dma_alloc_coherent(), which > provides us with a coherent CPU mapping that we never use because > instmem objects are accessed through PRAMIN. Switch to > dma_alloc_attrs() which gives us the option to dismiss that CPU mapping > and free up some CPU virtual space. > > Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> > --- > drm/nouveau/nvkm/subdev/instmem/gk20a.c | 24 ++++++++++++++++++++---- > lib/include/nvif/os.h...
2015 Jan 23
0
[PATCH 4/6] instmem/gk20a: use DMA attributes
instmem for GK20A is allocated using dma_alloc_coherent(), which provides us with a coherent CPU mapping that we never use because instmem objects are accessed through PRAMIN. Switch to dma_alloc_attrs() which gives us the option to dismiss that CPU mapping and free up some CPU virtual space. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/instmem/gk20a.c | 24 ++++++++++++++++++++---- lib/include/nvif/os.h | 31 ++++++++++++++++++++...
2015 Feb 17
0
[PATCH v3 4/6] instmem/gk20a: use DMA attributes
instmem for GK20A is allocated using dma_alloc_coherent(), which provides us with a coherent CPU mapping that we never use because instmem objects are accessed through PRAMIN. Switch to dma_alloc_attrs() which gives us the option to dismiss that CPU mapping and free up some CPU virtual space. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/instmem/gk20a.c | 24 ++++++++++++++++++++---- lib/include/nvif/os.h | 31 ++++++++++++++++++++...
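The attribute these gk20a patches rely on to drop the unused CPU mapping is, to the best of my reading, DMA_ATTR_NO_KERNEL_MAPPING; a minimal sketch of that allocation pattern follows (function and variable names are illustrative, not taken from the driver):

    #include <linux/dma-mapping.h>

    static void *instmem_alloc_no_cpu_map(struct device *dev, size_t size,
                                          dma_addr_t *handle)
    {
            /*
             * With DMA_ATTR_NO_KERNEL_MAPPING the return value is an opaque
             * cookie for dma_free_attrs()/dma_mmap_attrs(), not a usable
             * kernel virtual address -- acceptable when the memory is only
             * ever accessed through a hardware aperture such as PRAMIN.
             */
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                   DMA_ATTR_NO_KERNEL_MAPPING);
    }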
2020 Aug 20
0
[PATCH 05/28] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT
...hat the plan is for non-coherent > >> allocations and it seemed like DMA_ATTR_NON_CONSISTENT and > >> dma_sync_*() was supposed to be the right thing to go with. [2] The > >> same thread also explains why dma_alloc_pages() isn't suitable for the > >> users of dma_alloc_attrs() and DMA_ATTR_NON_CONSISTENT. > > > > AFAICS even back then Christoph was implying getting rid of NON_CONSISTENT > > and *replacing* it with something streaming-API-based - i.e. this series - > > not encouraging mixing the existing APIs. It doesn't seem impossible to &...
2020 Aug 19
0
[PATCH 19/28] dma-mapping: replace DMA_ATTR_NON_CONSISTENT with dma_{alloc, free}_pages
...cate pages that can be used like normal pages +in the kernel direct mapping, but are guaranteed to be DMA addressable. If you don't understand how cache line coherency works between a processor and an I/O device, you should not be using this part of the -API at all. +API. :: void * - dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle, - gfp_t flag, unsigned long attrs) + dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle, + enum dma_data_direction dir, gfp_t gfp) + +This routine allocates a region of <size> bytes of consistent memory. It +r...
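A minimal sketch following the dma_alloc_pages() prototype quoted in this hunk (the interface that was eventually merged may differ, e.g. by returning a struct page pointer); the matching free helper and the buffer size are assumptions for illustration:

    #include <linux/dma-mapping.h>

    static int example_dma_pages(struct device *dev)
    {
            const size_t size = 4096;
            dma_addr_t dma_handle;
            void *vaddr;

            /* Pages usable like normal kernel memory, guaranteed DMA addressable. */
            vaddr = dma_alloc_pages(dev, size, &dma_handle, DMA_BIDIRECTIONAL,
                                    GFP_KERNEL);
            if (!vaddr)
                    return -ENOMEM;

            /* Ownership transfers still go through the streaming sync calls. */
            dma_sync_single_for_device(dev, dma_handle, size, DMA_BIDIRECTIONAL);
            /* ... DMA ... */
            dma_sync_single_for_cpu(dev, dma_handle, size, DMA_BIDIRECTIONAL);

            /* Assumed counterpart from the same series, taking the CPU address. */
            dma_free_pages(dev, size, vaddr, dma_handle, DMA_BIDIRECTIONAL);
            return 0;
    }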
2016 Jun 02
0
[RFC v3 19/45] [media] dma-mapping: Use unsigned long for dma_attrs
...+147,10 @@ int bdisp_hw_alloc_nodes(struct bdisp_ctx *ctx) unsigned int i, node_size = sizeof(struct bdisp_node); void *base; dma_addr_t paddr; - DEFINE_DMA_ATTRS(attrs); /* Allocate all the nodes within a single memory page */ - dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs); base = dma_alloc_attrs(dev, node_size * MAX_NB_NODE, &paddr, - GFP_KERNEL | GFP_DMA, &attrs); + GFP_KERNEL | GFP_DMA, DMA_ATTR_WRITE_COMBINE); if (!base) { dev_err(dev, "%s no mem\n", __func__); return -ENOMEM; @@ -188,13 +183,9 @@ void bdisp_hw_free_filters(struct device *dev...
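The conversion this RFC applies, shown as a small before/after sketch (the old DEFINE_DMA_ATTRS()/dma_set_attr() helpers are the ones being removed; names are illustrative):

    #include <linux/dma-mapping.h>

    static void *alloc_wc_nodes(struct device *dev, size_t size, dma_addr_t *paddr)
    {
            /*
             * Old style, removed by this series:
             *      DEFINE_DMA_ATTRS(attrs);
             *      dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
             *      return dma_alloc_attrs(dev, size, paddr,
             *                             GFP_KERNEL | GFP_DMA, &attrs);
             */

            /* New style: attributes are a plain unsigned long bitmask. */
            return dma_alloc_attrs(dev, size, paddr, GFP_KERNEL | GFP_DMA,
                                   DMA_ATTR_WRITE_COMBINE);
    }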
2020 Aug 19
1
[PATCH 05/28] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT
...d back in time what the plan is for non-coherent > > allocations and it seemed like DMA_ATTR_NON_CONSISTENT and > > dma_sync_*() was supposed to be the right thing to go with. [2] The > > same thread also explains why dma_alloc_pages() isn't suitable for the > > users of dma_alloc_attrs() and DMA_ATTR_NON_CONSISTENT. > > AFAICS even back then Christoph was implying getting rid of > NON_CONSISTENT and *replacing* it with something streaming-API-based - That's not how I read his reply from the thread I pointed to, but that might of course be my misunderstanding. >...
2018 May 10
4
kernel spew from nouveau/ swiotlb
Greetings, When box is earning its keep, nouveau/swiotlb grumble.. a LOT. The below is from master.today. [12594.640959] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12594.693000] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12594.713787] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) [12594.743413] nouveau 0000:01:00.0: swiotlb buffer
2018 May 10
1
kernel spew from nouveau/ swiotlb
On Thu, 2018-05-10 at 10:31 -0400, Jerome Glisse wrote: > > Could you bisect ? I would love to point finger upstream to the DMA > folk who made changes to that API without testing with GPU. Rummaging a bit, it might be... nouveau_bo_new() ... ttm_dma_pool_alloc_new_pages() dma_alloc_attrs() ops->alloc() == x86_swiotlb_alloc_coherent() x86_swiotlb_alloc_coherent() flags |= __GFP_NOWARN; swiotlb_alloc_coherent(..flags) swiotlb_alloc_coherent(..flags) attrs = (flags & __GFP_NOWARN) ? DMA_ATTR_NO_WARN : 0; swiotlb_alloc_buffer(..attr)...
2020 Aug 20
1
[PATCH 05/28] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT
...> what we don't want to have in vb2 and what was actually the job of the > DMA API to hide. Is the plan to actually move the IOMMU handling out > of the DMA API? > > Do you think we could instead turn it into a dma_alloc_noncoherent() > helper, which has similar semantics as dma_alloc_attrs() and handles > the various corner cases (e.g. invalidate_kernel_vmap_range and > flush_kernel_vmap_range) to achieve the desired functionality without > delegating the "hell", as you called it, to the users? Yes, I guess I could do something in that direction. At least for dm...
2015 Mar 10
1
[PATCH] instmem/gk20a: fix crash during error path
If a memory allocation fails when using the DMA allocator, gk20a_instobj_dtor_dma() will be called on the failed instmem object. At this time, node->handle might not be NULL despite the call to dma_alloc_attrs() having failed. node->cpuaddr is the right member to check for such a failure, so use it instead. Reported-by: Vince Hsu <vinceh at nvidia.com> Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/instmem/gk20a.c | 2 +- 1 file changed, 1 insertio...
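A sketch of the check the fix describes, with a hypothetical stand-in for the driver's instmem object: dma_alloc_attrs() signals failure through its return value (the CPU address), while the dma_addr_t output may be left stale, so the destructor must not key off the handle:

    #include <linux/dma-mapping.h>

    struct instobj_sketch {            /* hypothetical, not the driver's struct */
            void *cpuaddr;             /* CPU address returned by dma_alloc_attrs() */
            dma_addr_t handle;         /* DMA address filled in by dma_alloc_attrs() */
    };

    static void instobj_dtor_sketch(struct device *dev, struct instobj_sketch *node,
                                    size_t size, unsigned long attrs)
    {
            /*
             * The allocation failed iff the CPU address is NULL; the handle
             * may contain garbage in that case and is not a reliable test.
             */
            if (!node->cpuaddr)
                    return;

            dma_free_attrs(dev, size, node->cpuaddr, node->handle, attrs);
    }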
2020 Sep 15
0
[PATCH 10/18] hal2: convert to dma_alloc_noncoherent
...ec *codec, + enum dma_data_direction buffer_dir) { struct device *dev = hal2->card->dev; struct hal2_desc *desc; @@ -449,15 +450,15 @@ static int hal2_alloc_dmabuf(struct snd_hal2 *hal2, struct hal2_codec *codec) int count = H2_BUF_SIZE / H2_BLOCK_SIZE; int i; - codec->buffer = dma_alloc_attrs(dev, H2_BUF_SIZE, &buffer_dma, - GFP_KERNEL, DMA_ATTR_NON_CONSISTENT); + codec->buffer = dma_alloc_noncoherent(dev, H2_BUF_SIZE, &buffer_dma, + buffer_dir, GFP_KERNEL); if (!codec->buffer) return -ENOMEM; - desc = dma_alloc_attrs(dev, count * sizeof(struct hal2_desc), -...
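The conversion pattern applied throughout this hal2 hunk, reduced to a before/after sketch (the device, size and direction parameters are placeholders):

    #include <linux/dma-mapping.h>

    static void *alloc_audio_buf(struct device *dev, size_t size,
                                 dma_addr_t *dma_handle,
                                 enum dma_data_direction dir)
    {
            /*
             * Old: dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL,
             *                      DMA_ATTR_NON_CONSISTENT);
             */

            /* New: the transfer direction is part of the allocation itself. */
            return dma_alloc_noncoherent(dev, size, dma_handle, dir, GFP_KERNEL);
    }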
2019 Apr 09
0
[RFC PATCH 02/12] virtio/s390: DMA support for virtio-ccw
...; > Currently we have a problem if a virtio-ccw device has > > > VIRTIO_F_IOMMU_PLATFORM. > > > > Can you please describe what the actual problem is? > > > > Without this patch: > > WARNING: CPU: 2 PID: 26 > at [..]/kernel/dma/mapping.c:251 > dma_alloc_attrs+0x8e/0xd0 Modules linked in: CPU: 2 PID: 26 Comm: > kworker/u6:1 Not tainted 5.1.0-rc3-00023-g1ec89ec #596 Hardware name: > IBM 2964 NC9 712 (KVM/Linux) Workqueue: events_unbound async_run_entry_fn > Krnl PSW : 0704c00180000000 000000000021b18e (dma_alloc_attrs+0x8e/0xd0) >...
2020 Sep 15
0
[PATCH 06/18] lib82596: move DMA allocation into the callers of i82596_probe
...net_device *netdevice; struct i596_private *lp; - int retval; + int retval = -ENOMEM; int i; if (!dev->irq) { @@ -186,12 +184,22 @@ lan_init_chip(struct parisc_device *dev) lp = netdev_priv(netdevice); lp->options = dev->id.sversion == 0x72 ? OPT_SWAP_PORT : 0; + lp->dma = dma_alloc_attrs(&dev->dev, sizeof(struct i596_dma), + &lp->dma_addr, GFP_KERNEL, + DMA_ATTR_NON_CONSISTENT); + if (!lp->dma) + goto out_free_netdev; retval = i82596_probe(netdevice); - if (retval) { - free_netdev(netdevice); - return -ENODEV; - } + if (retval) + goto out_f...
2020 Aug 19
0
[PATCH 06/28] lib82596: move DMA allocation into the callers of i82596_probe
...net_device *netdevice; struct i596_private *lp; - int retval; + int retval = -ENOMEM; int i; if (!dev->irq) { @@ -186,12 +184,22 @@ lan_init_chip(struct parisc_device *dev) lp = netdev_priv(netdevice); lp->options = dev->id.sversion == 0x72 ? OPT_SWAP_PORT : 0; + lp->dma = dma_alloc_attrs(dev->dev.parent, sizeof(struct i596_dma), + &lp->dma_addr, GFP_KERNEL, + DMA_ATTR_NON_CONSISTENT); + if (!lp->dma) + goto out_free_netdev; retval = i82596_probe(netdevice); - if (retval) { - free_netdev(netdevice); - return -ENODEV; - } + if (retval) + goto out...
2020 Sep 14
20
a saner API for allocating DMA addressable pages v2
Hi all, this series replaced the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages with incurring bounce buffering over can finally be properly supporte...