search for: dma_alloc_

Displaying 6 results from an estimated 6 matches for "dma_alloc_".

2015 Nov 12
2
[PATCH] drm/nouveau: Fix pre-nv50 pageflip events (v3) -> v4
...some time. I /think/ this happens because memory is allocated from the non-DMA pool (i.e. using alloc_page()) and then ends up getting run through the dma_sync_*() API for cache maintenance. But the assumption is that you can only do cache maintenance via the dma_sync_*() API on memory allocated by dma_alloc_*(), hence the warning. There was some discussion about this a while ago, and the conclusion was that an API was needed to do cache maintenance on non-DMA-allocated pages of memory, but I don't think any work happened towards that API. Adding Alex and Arnd who had been part of that dis...
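For context, a minimal sketch of the streaming-DMA pattern this warning is about: dma_sync_*() is only valid on an address handed out by dma_map_*() (or dma_alloc_*()), so a page from alloc_page() must be mapped before any cache maintenance. The device pointer and DMA direction below are illustrative assumptions, not taken from the thread.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static int sync_example(struct device *dev)
{
	/* Page from the normal allocator -- not DMA memory yet. */
	struct page *page = alloc_page(GFP_KERNEL);
	dma_addr_t dma;

	if (!page)
		return -ENOMEM;

	/* Mapping it hands ownership to the DMA API... */
	dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		__free_page(page);
		return -ENOMEM;
	}

	/* ...and only now is dma_sync_*() legitimate on this memory. */
	dma_sync_single_for_cpu(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
	/* CPU fills the buffer here. */
	dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);

	dma_unmap_page(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
	__free_page(page);
	return 0;
}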
2019 May 08
3
[PATCH 06/10] s390/cio: add basic protected virtualization support
...evice_private),
> +				      GFP_KERNEL | GFP_DMA);

Do we still need GFP_DMA here (since we now have cdev->private->dma_area)?

> @@ -1062,6 +1082,14 @@ static int io_subchannel_probe(struct subchannel *sch)
>  	if (!io_priv)
>  		goto out_schedule;
>
> +	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
> +				sizeof(*io_priv->dma_area),
> +				&io_priv->dma_area_dma, GFP_KERNEL);

This needs GFP_DMA. You use a genpool for ccw_private->dma and not for io_priv->dma - looks kinda inconsistent.
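A sketch of the change the review asks for, reusing the names from the quoted hunk; on s390, GFP_DMA requests 31-bit addressable memory, which channel-subsystem instructions require. Error handling and the goto label are hypothetical additions, not part of the quoted patch.

	io_priv->dma_area = dma_alloc_coherent(&sch->dev,
					       sizeof(*io_priv->dma_area),
					       &io_priv->dma_area_dma,
					       GFP_KERNEL | GFP_DMA);
	if (!io_priv->dma_area)
		goto out_free_priv;	/* hypothetical error label */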
2014 May 27
0
[RFC] drm/nouveau: disable caching for VRAM BOs on ARM
...hed mapping, but
> in fact it is. The CPU's prefetcher can still access this mapping.

Why would this memory be mapped into the kernel? AFAICT Nouveau only maps fences and (somehow) PBs into the kernel. Other BOs are not mapped unless I missed something. Or are you talking about VRAM allocated by dma_alloc_*()? We prevent this from happening by using the CMA allocator directly (which doesn't create a kmap), which has its own problems (Nouveau cannot be compiled as a module and still use these allocators). In the future we plan to use the IOMMU to present sparse memory pages in a way the GPU likes.
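As an aside, a hedged sketch of one generic DMA-API way to get coherent memory with no kernel mapping at all -- dma_alloc_attrs() with DMA_ATTR_NO_KERNEL_MAPPING -- which is not necessarily what Nouveau's CMA path does; dev and size are assumed.

#include <linux/dma-mapping.h>

static int alloc_unmapped(struct device *dev, size_t size)
{
	dma_addr_t dma;
	void *cookie;

	/*
	 * The returned "cookie" is opaque: no kernel virtual mapping
	 * exists, so the CPU (prefetcher included) cannot touch the
	 * buffer through a cached alias. The cookie is valid only for
	 * dma_free_attrs().
	 */
	cookie = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL,
				 DMA_ATTR_NO_KERNEL_MAPPING);
	if (!cookie)
		return -ENOMEM;

	/* Hand 'dma' to the device here. */

	dma_free_attrs(dev, size, cookie, dma, DMA_ATTR_NO_KERNEL_MAPPING);
	return 0;
}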
2014 May 26
2
[RFC] drm/nouveau: disable caching for VRAM BOs on ARM
On Monday, 26.05.2014, 09:45 +0300, Terje Bergström wrote:
> On 23.05.2014 17:40, Alex Courbot wrote:
> > On 05/23/2014 06:59 PM, Lucas Stach wrote:
> > So after checking with more knowledgeable people, it turns out this is
> > the expected behavior on ARM and BAR regions should be mapped uncached
> > on GK20A. All the more reasons to avoid using the BAR at all.
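A minimal sketch of what "mapped uncached" means in practice here: on ARM, plain ioremap() yields a device (uncached) mapping, in contrast to ioremap_cache(). The resource describing the BAR aperture is an assumption for illustration.

#include <linux/io.h>

static void __iomem *map_bar_uncached(struct resource *res)
{
	/*
	 * Plain ioremap() gives an uncached device mapping on ARM,
	 * matching the conclusion above that GK20A BAR regions must
	 * not be mapped cacheable. Callers check for NULL.
	 */
	return ioremap(res->start, resource_size(res));
}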
2015 Nov 10
2
[PATCH] drm/nouveau: Fix pre-nv50 pageflip events (v3)
On 11/10/2015 05:00 PM, Thierry Reding wrote:
> On Tue, Nov 10, 2015 at 03:54:52PM +0100, Mario Kleiner wrote:
>> From: Daniel Vetter <daniel.vetter at ffwll.ch>
>>
>> Apparently pre-nv50 pageflip events happen before the actual vblank
>> period. Therefore that functionality got semi-disabled in
>>
>> commit af4870e406126b7ac0ae7c7ce5751f25ebe60f28