search for: ttm_pl_flag_tt

Displaying 20 results from an estimated 59 matches for "ttm_pl_flag_tt".

2010 Mar 18
0
[PATCH] drm/nouveau: Make use of TTM busy_placements.
...26 +172,33 @@ nouveau_bo_new(struct drm_device *dev, struct nouveau_channel *chan,
 	return 0;
 }
+static void
+set_placement_list(uint32_t *pl, unsigned *n, uint32_t type, uint32_t flags)
+{
+	*n = 0;
+
+	if (type & TTM_PL_FLAG_VRAM)
+		pl[(*n)++] = TTM_PL_FLAG_VRAM | flags;
+	if (type & TTM_PL_FLAG_TT)
+		pl[(*n)++] = TTM_PL_FLAG_TT | flags;
+	if (type & TTM_PL_FLAG_SYSTEM)
+		pl[(*n)++] = TTM_PL_FLAG_SYSTEM | flags;
+}
+
 void
-nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t memtype)
+nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t type, uint32_t busy)
 {
-	int n = 0;...
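The excerpt above shows the new set_placement_list() helper but cuts off before the rest of the rework. Below is a hedged sketch of how such a preferred/busy split is typically fed to TTM's old flag-based struct ttm_placement; the nvbo->placements/busy_placements arrays and the sketch body are illustrative, not copied from the patch:

static void
nouveau_bo_placement_set_sketch(struct nouveau_bo *nvbo, uint32_t type, uint32_t busy)
{
	struct ttm_placement *pl = &nvbo->placement;
	uint32_t flags = TTM_PL_MASK_CACHING;	/* accept any caching mode */

	/* preferred placements: only the requested domains */
	set_placement_list(nvbo->placements, &pl->num_placement, type, flags);
	/* busy placements: also allow the fallback domains */
	set_placement_list(nvbo->busy_placements, &pl->num_busy_placement,
			   type | busy, flags);

	pl->placement = nvbo->placements;
	pl->busy_placement = nvbo->busy_placements;
}

A caller that prefers VRAM but tolerates eviction to GART under memory pressure would then pass something like nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_VRAM, TTM_PL_FLAG_TT).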
2015 Apr 17
3
[PATCH 4/6] drm: enable big page mapping for small pages when IOMMU is available
...ouveau/nouveau_bo.c
> index 77326e344dad..da76ee1121e4 100644
> --- a/drm/nouveau/nouveau_bo.c
> +++ b/drm/nouveau/nouveau_bo.c
> @@ -221,6 +221,11 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
>  	if (drm->client.vm) {
>  		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
>  			nvbo->page_shift = drm->client.vm->mmu->lpg_shift;
> +
> +		if ((flags & TTM_PL_FLAG_TT) &&
> +		    drm->client.vm->mmu->iommu_capable &&
> +...
2017 Mar 29
2
[PATCH 2/6] drm/nouveau: Pin bos from imported dma-bufs to GTT.
...au/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -76,6 +76,8 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
 		return ERR_PTR(ret);
 
 	nvbo->valid_domains = NOUVEAU_GEM_DOMAIN_GART;
+	/* pin imported buffer to GTT */
+	nouveau_bo_pin(nvbo, TTM_PL_FLAG_TT, false);
 
 	/* Initialize the embedded gem-object. We return a single gem-reference
 	 * to the caller, instead of a normal nouveau_bo ttm reference. */
--
2.11.0
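The pin call above ignores the return value of nouveau_bo_pin(); a slightly more defensive variant of the same idea is sketched below (the error path and the nouveau_bo_ref() cleanup are assumptions for illustration, not part of the posted patch):

	nvbo->valid_domains = NOUVEAU_GEM_DOMAIN_GART;

	/* pin imported buffer to GTT; undo the allocation if pinning fails */
	ret = nouveau_bo_pin(nvbo, TTM_PL_FLAG_TT, false);
	if (ret) {
		nouveau_bo_ref(NULL, &nvbo);
		return ERR_PTR(ret);
	}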
2014 Oct 27
4
[PATCH v5 0/4] drm: nouveau: memory coherency on ARM
It has been a couple of months since v4 - apologies for this. v4 did not receive many comments, but this version addresses them and makes a new attempt at pushing the critical bit for GK20A and Nouveau on ARM in general. As a reminder, this series addresses the memory coherency issue that we are seeing on ARM platforms. Contrary to x86, which invalidates the PCI caches whenever a write is made by
2019 Mar 18
1
[PATCH v3 2/5] drm/virtio: use struct to pass params to virtio_gpu_object_create()
...or virtio_gpu_alloc_object(), it is unused and always false. Also drop the "pinned" parameter. virtio-gpu doesn't shuffle objects around, so effectively they all are pinned anyway. Hardcode TTM_PL_FLAG_NO_EVICT so ttm knows. This doesn't change much for the moment, as virtio-gpu supports TTM_PL_FLAG_TT only, so there is no opportunity to move objects around. That'll probably change in the future though.

Signed-off-by: Gerd Hoffmann <kraxel at redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 14 +++++++-----
 drivers/gpu/drm/virtio/virtgpu_gem.c | 16 ++++++++------
 drivers/...
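For readers unfamiliar with the flag, hardcoding TTM_PL_FLAG_NO_EVICT simply means the object's single TT placement tells TTM it may never be evicted. A minimal sketch, assuming the flag-based struct ttm_place of that TTM version (the identifier name below is made up for illustration):

/* one placement: system/GTT memory, any caching, never evictable */
static const struct ttm_place virtio_gpu_tt_place_sketch = {
	.fpfn  = 0,
	.lpfn  = 0,
	.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT | TTM_PL_FLAG_NO_EVICT,
};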
2010 Mar 06
0
[PATCH] drm/nouveau: Never evict VRAM buffers to system.
...x 028719f..0266124 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -439,8 +439,7 @@ nouveau_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
 	switch (bo->mem.mem_type) {
 	case TTM_PL_VRAM:
-		nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_TT |
-					 TTM_PL_FLAG_SYSTEM);
+		nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_TT);
 		break;
 	default:
 		nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_SYSTEM);
--
1.6.4.4
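For context, the callback after this change has roughly the following shape (a sketch reconstructed around the quoted hunk; the nouveau_bo() cast and the final placement copy are assumptions about the surrounding code, not part of the excerpt):

static void
nouveau_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
{
	struct nouveau_bo *nvbo = nouveau_bo(bo);

	switch (bo->mem.mem_type) {
	case TTM_PL_VRAM:
		/* evict VRAM buffers to GART only, never straight to system */
		nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_TT);
		break;
	default:
		nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_SYSTEM);
		break;
	}

	*pl = nvbo->placement;
}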
2018 Dec 19
0
[PATCH 04/10] drm/virtio: move virtio_gpu_object_{attach, detach} calls.
Drop the dummy ttm backend implementation, add a real one for TTM_PL_FLAG_TT objects. The bind/unbind callbacks will call virtio_gpu_object_{attach,detach} to update the object state on the host side, instead of invoking those calls from the move_notify() callback. With that in place the move and move_notify callbacks are not needed any more, so drop them.

Signed-off-by:...
2019 Mar 18
0
[PATCH v3 1/5] drm/virtio: move virtio_gpu_object_{attach, detach} calls.
Drop the dummy ttm backend implementation, add a real one for TTM_PL_FLAG_TT objects. The bind/unbind callbacks will call virtio_gpu_object_{attach,detach} to update the object state on the host side, instead of invoking those calls from the move_notify() callback. With that in place the move and move_notify callbacks are not needed any more, so drop them.

Signed-off-by:...
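A condensed sketch of the direction described above: a ttm_tt wrapper whose bind/unbind hooks notify the host. The virtio_gpu_ttm_tt layout, the virtio_gpu_get_vgdev() lookup and the exact virtio_gpu_object_attach()/detach() signatures are reconstructed for illustration and may differ from the actual patch:

struct virtio_gpu_ttm_tt {
	struct ttm_dma_tt		ttm;	/* must stay first (container_of) */
	struct virtio_gpu_object	*obj;
};

static int virtio_gpu_ttm_tt_bind(struct ttm_tt *ttm,
				  struct ttm_mem_reg *bo_mem)
{
	struct virtio_gpu_ttm_tt *gtt =
		container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
	struct virtio_gpu_device *vgdev = virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);

	/* backing pages are now bound: tell the host about them */
	virtio_gpu_object_attach(vgdev, gtt->obj, NULL);
	return 0;
}

static int virtio_gpu_ttm_tt_unbind(struct ttm_tt *ttm)
{
	struct virtio_gpu_ttm_tt *gtt =
		container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
	struct virtio_gpu_device *vgdev = virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);

	/* drop the host-side mapping before the pages go away */
	virtio_gpu_object_detach(vgdev, gtt->obj);
	return 0;
}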
2014 Jul 08
8
[PATCH v4 0/6] drm: nouveau: memory coherency on ARM
Another revision of this patchset, which is critical for GK20A to operate. Previous attempts exclusively used either TTM's regular page allocator or the DMA API one. Both have their advantages and drawbacks: the page allocator is fast but requires explicit synchronization on non-coherent architectures, whereas the DMA allocator always returns coherent memory but is also slower and creates a
2015 Jan 24
1
[PATCH 1/6] make RAM device optional
...), GFP_KERNEL);
> @@ -231,10 +232,12 @@ nv84_fence_create(struct nouveau_drm *drm)
>  	priv->base.context_base = fence_context_alloc(priv->base.contexts);
>  	priv->base.uevent = true;
> 
> +	domain = nvxx_fb(&drm->device)->ram ? TTM_PL_FLAG_VRAM : TTM_PL_FLAG_TT;
>  	ret = nouveau_bo_new(drm->dev, 16 * priv->base.contexts, 0,
> -			     TTM_PL_FLAG_VRAM, 0, 0, NULL, NULL, &priv->bo);
> +			     domain | TTM_PL_FLAG_UNCACHED,

And the TTM_PL_FLAG_UNCACHED here. Are those intentional, I don'...
2010 Mar 10
34
[Patch RFC] nouveau accelerated on Xen pv-ops kernel
...000000 +0530
@@ -271,7 +271,10 @@
 	 */
 	vma->vm_private_data = bo;
-	vma->vm_flags |= VM_RESERVED | VM_IO | VM_MIXEDMAP | VM_DONTEXPAND;
+	vma->vm_flags |= VM_RESERVED | VM_MIXEDMAP | VM_DONTEXPAND;
+	if (!((bo->mem.placement & TTM_PL_MASK_MEM) & TTM_PL_FLAG_TT))
+		vma->vm_flags |= VM_IO;
+	vma->vm_page_prot = vma_get_vm_prot(vma->vm_flags);
 	return 0;
 out_unref:
 	ttm_bo_unref(&bo);

This patch is necessary because, in Xen, the PFN of a page is virtualised, so physical addresses for DMA programming need to use...
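The explanation is cut off, but the gist of the hunk is that TT buffers are backed by ordinary system pages, whose PFNs Xen virtualises, so only non-TT placements should be treated as I/O memory. The new vma_get_vm_prot() helper's body is not shown in this result; an illustrative version consistent with that split might look like:

static pgprot_t vma_get_vm_prot(unsigned long vm_flags)
{
	pgprot_t prot = vm_get_page_prot(vm_flags);

	/* I/O (VRAM/BAR) mappings must not be cached by the CPU */
	if (vm_flags & VM_IO)
		prot = pgprot_noncached(prot);

	return prot;
}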
2009 Aug 19
1
[PATCH] drm/nouveau: Add a MM for mappable VRAM that isn't usable as scanout.
...bo->mem.mem_type == TTM_PL_PRIV0 ||
+	     bo->mem.mem_type == TTM_PL_PRIV1))
 		flags = TTM_PL_FLAG_VRAM;
 	else if ((valid_domains & NOUVEAU_GEM_DOMAIN_GART) &&
@@ -221,8 +222,11 @@ nouveau_gem_set_domain(struct drm_gem_object *gem, uint32_t read_domains,
 		flags = TTM_PL_FLAG_TT;
 	}
 
-	if ((flags & TTM_PL_FLAG_VRAM) && !nvbo->mappable)
-		flags |= TTM_PL_FLAG_PRIV0;
+	if (flags & TTM_PL_FLAG_VRAM) {
+		flags |= TTM_PL_FLAG_PRIV1;
+		if (!nvbo->mappable)
+			flags |= TTM_PL_FLAG_PRIV0;
+	}
 
 	bo->proposed_placement &= ~TTM_PL_MASK_MEM;
 	bo->...
2013 Nov 12
0
[PATCH 6/7] drm/nouveau: more paranoia in nouveau_bo_fixup_align
...up(*size, 64 * nvbo->tile_mode);
+		} else {
+			*align = 16384;
+			*size = roundup(*size, 32 * nvbo->tile_mode);
 		}
 	} else {
 		*size = roundup(*size, (1 << nvbo->page_shift));
@@ -228,8 +224,14 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
 		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
 			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
 	}
-	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
+	if (size <= 0) {
+		nv_warn(drm, "invalid size %x after setting alignment %x\n",
+			size, align);
+		kfree(nv...
2012 Nov 22
0
[resend PATCH] drm/nouveau: unpin buffers before releasing to prevent lockdep warnings
...map_atomic(struct dma_buf *dma_buf, unsigned long page_num)
@@ -175,13 +176,17 @@ struct dma_buf *nouveau_gem_prime_export(struct drm_device *dev,
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int ret = 0;
+	struct dma_buf *buf;
 
 	/* pin buffer into GTT */
 	ret = nouveau_bo_pin(nvbo, TTM_PL_FLAG_TT);
 	if (ret)
 		return ERR_PTR(-EINVAL);
 
-	return dma_buf_export(nvbo, &nouveau_dmabuf_ops, obj->size, flags);
+	buf = dma_buf_export(nvbo, &nouveau_dmabuf_ops, obj->size, flags);
+	if (IS_ERR(buf))
+		nouveau_bo_unpin(nvbo);
+	return buf;
 }
 
 struct drm_gem_object *nouveau_gem_pr...
2012 Oct 12
0
[PATCH 3/3, resend with fixed to field] drm/nouveau: unpin buffers before releasing to prevent lockdep warnings
...map_atomic(struct dma_buf *dma_buf, unsigned long page_num)
@@ -175,13 +176,17 @@ struct dma_buf *nouveau_gem_prime_export(struct drm_device *dev,
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int ret = 0;
+	struct dma_buf *buf;
 
 	/* pin buffer into GTT */
 	ret = nouveau_bo_pin(nvbo, TTM_PL_FLAG_TT);
 	if (ret)
 		return ERR_PTR(-EINVAL);
 
-	return dma_buf_export(nvbo, &nouveau_dmabuf_ops, obj->size, flags);
+	buf = dma_buf_export(nvbo, &nouveau_dmabuf_ops, obj->size, flags);
+	if (IS_ERR(buf))
+		nouveau_bo_unpin(nvbo);
+	return buf;
 }
 
 struct drm_gem_object *nouveau_gem_pr...
2019 Sep 10
1
[Intel-gfx] [PATCH v6 08/17] drm/ttm: use gem vma_node
..._table(struct drm_device *dev,
 	struct nouveau_drm *drm = nouveau_drm(dev);
 	struct nouveau_bo *nvbo;
 	struct dma_resv *robj = attach->dmabuf->resv;
-	size_t size = attach->dmabuf->size;
+	u64 size = attach->dmabuf->size;
 	u32 flags = 0;
+	int align = 0;
 	int ret;
 
 	flags = TTM_PL_FLAG_TT;
 
 	dma_resv_lock(robj, NULL);
-	nvbo = nouveau_bo_alloc(&drm->client, size, flags, 0, 0);
+	nvbo = nouveau_bo_alloc(&drm->client, &size, &align, flags, 0, 0);
 	dma_resv_unlock(robj);
 	if (IS_ERR(nvbo))
 		return ERR_CAST(nvbo);
@@ -84,7 +85,7 @@ struct drm_gem_object *nouv...
2018 Jan 11
5
[PATCH 1/5] drm/prime: Remove duplicate forward declaration
From: Thierry Reding <treding at nvidia.com> struct device is forward-declared twice. Remove the second instance. Reviewed-by: Chris Wilson <chris at chris-wilson.co.uk> Signed-off-by: Thierry Reding <treding at nvidia.com> --- include/drm/drm_prime.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h index
2013 Aug 11
2
Fixing nouveau for >4k PAGE_SIZE
...m/nouveau/nouveau_bo.c
index af20fba..694024d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -226,7 +226,7 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
 	nvbo->page_shift = 12;
 	if (drm->client.base.vm) {
 		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
-			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
+			nvbo->page_shift = lpg_shift;
 	}
 	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_s...
2013 Nov 29
2
Fixing nouveau for >4k PAGE_SIZE
...n = max((1 << nvbo->page_shift), *align);
 	}
-
+	*align = roundup(*align, PAGE_SIZE);
 	*size = roundup(*size, PAGE_SIZE);
 }
 
@@ -221,7 +221,7 @@ nouveau_bo_new(struct drm_device *dev, int size, int align,
 	nvbo->page_shift = 12;
 	if (drm->client.base.vm) {
 		if (!(flags & TTM_PL_FLAG_TT) && size > 256 * 1024)
-			nvbo->page_shift = drm->client.base.vm->vmm->lpg_shift;
+			nvbo->page_shift = lpg_shift;
 	}
 	nouveau_bo_fixup_align(nvbo, flags, &align, &size);
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_s...