search for: dma_resv_unlock

Displaying 20 results from an estimated 22 matches for "dma_resv_unlock".

2020 Sep 09
1
[bug report] drm/nouveau: move io_reserve_lru handling into the driver v5
...vm_get_page_prot(vma->vm_flags); 140 ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1); 141 if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) 142 return ret; ^^^^^^^^^^ Call dma_resv_unlock() before returning? 143 144 nouveau_bo_add_io_reserve_lru(bo); 145 146 dma_resv_unlock(bo->base.resv); 147 148 return ret; 149 } regards, dan carpenter
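For context, a minimal sketch of the fault-handler shape under discussion (example_ttm_fault is a made-up name; the 2020-era four-argument ttm_bo_vm_fault_reserved() is assumed). It presumes, as the question above hinges on, that the helper itself drops the reservation on the mmap_sem-dropping VM_FAULT_RETRY path, so only the other exits unlock explicitly:

#include <linux/dma-resv.h>
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_api.h>	/* header name as of ~5.8-era kernels */

/* Sketch only, not the nouveau patch verbatim. */
static vm_fault_t example_ttm_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct ttm_buffer_object *bo = vma->vm_private_data;
	pgprot_t prot;
	vm_fault_t ret;

	ret = ttm_bo_vm_reserve(bo, vmf);	/* takes bo->base.resv */
	if (ret)
		return ret;

	prot = vm_get_page_prot(vma->vm_flags);
	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1);
	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
		return ret;	/* assumed: reservation already dropped by the helper */

	dma_resv_unlock(bo->base.resv);
	return ret;
}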
2023 Feb 17
0
[PATCH v10 09/11] drm/gem: Add drm_gem_pin_unlocked()
...; +int drm_gem_pin_unlocked(struct drm_gem_object *obj) > +{ > + int ret; > + > + if (!obj->funcs->pin) > + return 0; > + > + ret = dma_resv_lock_interruptible(obj->resv, NULL); > + if (ret) > + return ret; > + > + ret = obj->funcs->pin(obj); > + dma_resv_unlock(obj->resv); > + > + return ret; > +} > +EXPORT_SYMBOL(drm_gem_pin_unlocked); > + > +void drm_gem_unpin_unlocked(struct drm_gem_object *obj) > +{ > + if (!obj->funcs->unpin) > + return; > + > + dma_resv_lock(obj->resv, NULL); > + obj->funcs->unp...
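A hypothetical caller, only to show what the proposed helpers buy: they take and drop obj->resv internally, so code that does not already hold the reservation can pin and unpin without open-coding the locking (my_prepare_fb/my_cleanup_fb are made-up names):

#include <drm/drm_gem.h>

/* Hypothetical usage of the helpers proposed in the quoted patch. */
static int my_prepare_fb(struct drm_gem_object *obj)
{
	int ret;

	ret = drm_gem_pin_unlocked(obj);	/* locks obj->resv internally */
	if (ret)
		return ret;

	/* ... set up scanout/mappings while the object stays pinned ... */
	return 0;
}

static void my_cleanup_fb(struct drm_gem_object *obj)
{
	drm_gem_unpin_unlocked(obj);		/* takes and drops obj->resv again */
}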
2023 Jul 06
0
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...pointing to the > // VM resv and we take the VM resv lock before calling > // drm_gpuva_sm_map() > if (vm->resv != gem->resv) > dma_resv_lock(gem->resv); > > drm_gpuva_[un]link(va); > gem_[un]pin(gem); > > if (vm->resv != gem->resv) > dma_resv_unlock(gem->resv); > } > > dma_resv_unlock(vm->resv); > I'm not sure I get this code right, reading "for_each_sub_op()" and "drm_gpuva_sm_map()" looks a bit like things are mixed up? Or do you mean to represent the sum of all callbacks with "for_each_...
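A cleaned-up sketch of the locking idea being debated, with placeholder names (example_vm, example_link_mapping); a real implementation would also thread a ww_acquire_ctx through and handle -EDEADLK, which both the mail's pseudocode and this sketch omit:

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

struct example_vm {
	struct dma_resv *resv;	/* the VM-wide reservation object */
};

/* Lock the GEM object's resv only when it is not the VM's resv. */
static void example_link_mapping(struct example_vm *vm, struct drm_gem_object *gem)
{
	dma_resv_lock(vm->resv, NULL);

	/* A BO that shares the VM's reservation object is already locked. */
	if (gem->resv != vm->resv)
		dma_resv_lock(gem->resv, NULL);

	/* ... drm_gpuva_[un]link(), pin/unpin, etc. ... */

	if (gem->resv != vm->resv)
		dma_resv_unlock(gem->resv);

	dma_resv_unlock(vm->resv);
}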
2019 Sep 16
4
[PATCH 0/4] drm/nouveau: Miscellaneous fixes
From: Thierry Reding <treding at nvidia.com> Hi Ben, these are fixes for a couple of issues that I've been running into when testing on various Tegra boards. The first two patches address problems in the fix I had sent out earlier for the regression introduced in drm-misc-next. The first one is critical because it avoids a BUG_ON as reported by Ilia, while the second is less

2020 Apr 21
0
[PATCH 1/1] drm/qxl: add mutex_lock/mutex_unlock to ensure the order in which resources are released.
...found that the > linked list was cleared first, and that the lock on the corresponding > ttm Bo for the QXL had not been released, so that the new qxl could not > be locked when it used the TTM. So the dma_resv_reserve_shared() call in qxl_release_validate_bo() is unbalanced? Because the dma_resv_unlock() call in qxl_release_fence_buffer_objects() never happens due to qxl_release_free_list() clearing the list beforehand? Is that correct? The only way I see for this to happen is that the guest is preempted between qxl_push_{cursor,command}_ring_release() and qxl_release_fence_buffer_objects() cal...
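For reference, a generic sketch (not the qxl code) of the balanced reserve/fence/unlock sequence the mail is talking about, using the 2020-era shared-fence API names; skipping the final unlock is exactly the imbalance being suspected:

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Generic pairing sketch; qxl's release bookkeeping is left out. */
static int example_fence_bo(struct dma_resv *resv, struct dma_fence *fence)
{
	int ret;

	ret = dma_resv_lock(resv, NULL);
	if (ret)
		return ret;

	/* Make room for one more shared fence while holding the lock. */
	ret = dma_resv_reserve_shared(resv, 1);
	if (ret)
		goto unlock;

	dma_resv_add_shared_fence(resv, fence);
	ret = 0;

unlock:
	dma_resv_unlock(resv);	/* must run on every path once the lock is held */
	return ret;
}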
2023 Mar 26
0
[PATCH v13 01/10] drm/shmem-helper: Switch to reservation lock
...>> @@ -633,7 +605,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct >>> return ret; >>> } >>> >>> + dma_resv_lock(shmem->base.resv, NULL); >>> ret = drm_gem_shmem_get_pages(shmem); >>> + dma_resv_unlock(shmem->base.resv); >> Intel CI reported locking problem [1] here. It actually was also >> reported for v12, but I missed that report because of the other noisy >> reports. >> >> [1] >> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_114671v2/shard-snb5/igt...
2023 Aug 28
0
[PATCH v15 11/23] dma-resv: Add kref_put_dma_resv()
...+41,7 @@ > > #include <linux/ww_mutex.h> > #include <linux/dma-fence.h> > +#include <linux/kref.h> > #include <linux/slab.h> > #include <linux/seqlock.h> > #include <linux/rcupdate.h> > @@ -464,6 +465,14 @@ static inline void dma_resv_unlock(struct dma_resv *obj) > ww_mutex_unlock(&obj->lock); > } > > +static inline int kref_put_dma_resv(struct kref *kref, > + void (*release)(struct kref *kref), > + struct dma_resv *resv, > + struct ww_acquire_ctx *ctx) > +{ > + return kref...
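The helper body is cut off above; as a rough illustration only, an open-coded equivalent could mirror kref_put_mutex()/refcount_dec_and_mutex_lock(). The sketch below drops the ww_acquire_ctx plumbing from the real signature and assumes the release() callback unlocks the reservation itself:

#include <linux/dma-resv.h>
#include <linux/kref.h>
#include <linux/refcount.h>

/* NOT the patch above: an illustrative open-coded variant without the ctx. */
static inline int kref_put_dma_resv_sketch(struct kref *kref,
					   void (*release)(struct kref *kref),
					   struct dma_resv *resv)
{
	/* Fast path: not the last reference, so no locking is needed. */
	if (refcount_dec_not_one(&kref->refcount))
		return 0;

	dma_resv_lock(resv, NULL);
	if (!refcount_dec_and_test(&kref->refcount)) {
		dma_resv_unlock(resv);
		return 0;
	}

	release(kref);	/* runs with resv held; expected to unlock it */
	return 1;
}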
2019 Sep 10
1
[Intel-gfx] [PATCH v6 08/17] drm/ttm: use gem vma_node
...; + u64 size = attach->dmabuf->size; u32 flags = 0; + int align = 0; int ret; flags = TTM_PL_FLAG_TT; dma_resv_lock(robj, NULL); - nvbo = nouveau_bo_alloc(&drm->client, size, flags, 0, 0); + nvbo = nouveau_bo_alloc(&drm->client, &size, &align, flags, 0, 0); dma_resv_unlock(robj); if (IS_ERR(nvbo)) return ERR_CAST(nvbo); @@ -84,7 +85,7 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev, return ERR_PTR(-ENOMEM); } - ret = nouveau_bo_init(nvbo, size, 0, flags, sg, robj); + ret = nouveau_bo_init(nvbo, size, align, flags, sg, ro...
2020 Jan 24
1
[PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver v2
...> + prot = vm_get_page_prot(vma->vm_flags); > + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); > + if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) > + return ret; > + > + nouveau_bo_add_io_reserve_lru(bo); > + > + dma_resv_unlock(bo->base.resv); > + > + return ret; > +} > + > +static struct vm_operations_struct nouveau_ttm_vm_ops = { > + .fault = nouveau_ttm_fault, > + .open = ttm_bo_vm_open, > + .close = ttm_bo_vm_close, > + .access = ttm_bo_vm_access > +}; > + > int > nouveau_...
2020 Jan 28
1
[PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver v2
...>vm_flags); > + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); > + if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) > + return ret; > + > + nouveau_bo_add_io_reserve_lru(bo); > + > + dma_resv_unlock(bo->base.resv); > + > + return ret; > +} > + > +static struct vm_operations_struct nouveau_ttm_vm_ops = { > + .fault = nouveau_ttm_fault, > + .open = ttm_bo_vm_open, > + .close = ttm_bo_vm_close, > + .access = ttm_bo_vm_access > +}; >...
2019 Sep 30
2
[Spice-devel] Xorg indefinitely hangs in kernelspace
> > On 05.09.19 15:34, Jaak Ristioja wrote: > > On 05.09.19 10:14, Gerd Hoffmann wrote: > >> On Tue, Aug 06, 2019 at 09:00:10PM +0300, Jaak Ristioja wrote: > >>> Hello! > >>> > >>> I'm writing to report a crash in the QXL / DRM code in the Linux kernel. > >>> I originally filed the issue on LaunchPad and more details can be
2020 Jan 24
4
TTM/Nouveau cleanups
Hi guys, I've already sent this out in September last year, but only got a response from Daniel. Could you guys please test this and tell me what you think about it? Basically I'm trying to remove all driver-specific features from TTM which don't need to be inside the framework. Thanks, Christian.
2020 Jan 24
0
[PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver v2
...+ + nouveau_bo_del_io_reserve_lru(bo); + + prot = vm_get_page_prot(vma->vm_flags); + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); + if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) + return ret; + + nouveau_bo_add_io_reserve_lru(bo); + + dma_resv_unlock(bo->base.resv); + + return ret; +} + +static struct vm_operations_struct nouveau_ttm_vm_ops = { + .fault = nouveau_ttm_fault, + .open = ttm_bo_vm_open, + .close = ttm_bo_vm_close, + .access = ttm_bo_vm_access +}; + int nouveau_ttm_mmap(struct file *filp, struct vm_area_struct *vma) { struct...
2020 Aug 21
0
[PATCH 2/3] drm/nouveau: move io_reserve_lru handling into the driver v4
...nouveau_bo_del_io_reserve_lru(bo); + + prot = vm_get_page_prot(vma->vm_flags); + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1); + if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) + return ret; + + nouveau_bo_add_io_reserve_lru(bo); + + dma_resv_unlock(bo->base.resv); + + return ret; +} + +static struct vm_operations_struct nouveau_ttm_vm_ops = { + .fault = nouveau_ttm_fault, + .open = ttm_bo_vm_open, + .close = ttm_bo_vm_close, + .access = ttm_bo_vm_access +}; + int nouveau_ttm_mmap(struct file *filp, struct vm_area_struct *vma) { struct...
2019 Oct 09
0
[PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver
...> + prot = vm_get_page_prot(vma->vm_flags); > + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); > + if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) > + return ret; > + > + nouveau_bo_add_io_reserve_lru(bo); > + > + dma_resv_unlock(bo->base.resv); > + > + return ret; > +} > + > +static struct vm_operations_struct nouveau_ttm_vm_ops = { > + .fault = nouveau_ttm_fault, > + .open = ttm_bo_vm_open, > + .close = ttm_bo_vm_close, > + .access = ttm_bo_vm_access > +}; > + > int > nouveau_tt...
2019 Aug 21
2
[Intel-gfx] [PATCH v6 08/17] drm/ttm: use gem vma_node
On Wed, Aug 21, 2019 at 04:33:58PM +1000, Ben Skeggs wrote: > On Wed, 14 Aug 2019 at 20:14, Gerd Hoffmann <kraxel at redhat.com> wrote: > > > > Hi, > > > > > > Changing the order doesn't look hard. Patch attached (untested, have no > > > > test hardware). But maybe I missed some detail ... > > > > > > I came up with
2019 Sep 30
3
[PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver
...+ + nouveau_bo_del_io_reserve_lru(bo); + + prot = vm_get_page_prot(vma->vm_flags); + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); + if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) + return ret; + + nouveau_bo_add_io_reserve_lru(bo); + + dma_resv_unlock(bo->base.resv); + + return ret; +} + +static struct vm_operations_struct nouveau_ttm_vm_ops = { + .fault = nouveau_ttm_fault, + .open = ttm_bo_vm_open, + .close = ttm_bo_vm_close, + .access = ttm_bo_vm_access +}; + int nouveau_ttm_mmap(struct file *filp, struct vm_area_struct *vma) { struct...
2020 Aug 20
3
Moving LRU handling into Nouveau v2
Hi guys, I already tried this a few months ago, but since I don't have NVidia hardware it's rather hard for me to test (need to get some ordered). Dave brought up the topic that we should probably try to move the handling into Nouveau once more, so I tried to fix the problem Ben reported and rebased on top of current drm-misc-next. Dave, can you test this? At least in theory the approach
2020 Aug 21
5
Moving LRU handling into Nouveau v3
Hi guys, so I got some hardware and tested this, and after hammering out tons of typos it now seems to work fine. Could you give it more testing? Thanks in advance, Christian
2019 Nov 20
2
Move io_reserve_lru handling into the driver
Just a gentle ping on this. Already got the Acked-by from Daniel, but I need some of the nouveau guys to test this since I can only compile-test it. Regards, Christian.