Displaying 8 results from an estimated 8 matches for "dma_resv_lock".
2023 Jul 06
0
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...the fence
signalling critical path and we're not allowed to hold the dma-resv lock
there. Hence, as we discussed, I added the option for drivers to provide
an external lock for that, just to be able to keep some lockdep checks.
>
> What I have right now is something like:
>
> dma_resv_lock(vm->resv);
>
> // split done in drm_gpuva_sm_map(), each iteration
> // of the loop is a call to the driver ->[re,un]map()
> // hook
> for_each_sub_op() {
>
> // Private BOs have their resv field pointing to the
> // VM resv and we take the VM resv lock bef...
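For readers skimming the excerpt, here is a minimal, hedged sketch of the locking scheme being described: one dma-resv lock owned by the VM, shared by its private BOs, held around the per-sub-op driver callbacks. The driver_vm and driver_sub_op types and driver_handle_sub_op() are hypothetical stand-ins for illustration, not part of the patch.

#include <linux/dma-resv.h>
#include <linux/list.h>

/* Hypothetical driver-side types, purely for illustration. */
struct driver_sub_op {
	struct list_head entry;
};

struct driver_vm {
	struct dma_resv *resv;     /* also pointed to by all private BOs */
	struct list_head sub_ops;  /* ops produced by the VA space split */
};

/* Stand-in for the work done in the driver's ->map()/->remap()/->unmap() hooks. */
static void driver_handle_sub_op(struct driver_vm *vm,
				 struct driver_sub_op *op)
{
}

static int driver_vm_bind_example(struct driver_vm *vm)
{
	struct driver_sub_op *op;
	int ret;

	/* Private BOs have obj->resv pointing at vm->resv, so this one
	 * lock covers the VM and all of its private buffers. */
	ret = dma_resv_lock(vm->resv, NULL);
	if (ret)
		return ret;

	/* Each iteration corresponds to one driver hook invoked by
	 * drm_gpuva_sm_map() while walking the split operations. */
	list_for_each_entry(op, &vm->sub_ops, entry)
		driver_handle_sub_op(vm, op);

	dma_resv_unlock(vm->resv);
	return 0;
}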
2023 Feb 17
0
[PATCH v10 09/11] drm/gem: Add drm_gem_pin_unlocked()
...gem.c
> @@ -1167,6 +1167,35 @@ void drm_gem_unpin(struct drm_gem_object *obj)
> 		obj->funcs->unpin(obj);
> }
>
> +int drm_gem_pin_unlocked(struct drm_gem_object *obj)
> +{
> +	int ret;
> +
> +	if (!obj->funcs->pin)
> +		return 0;
> +
> +	ret = dma_resv_lock_interruptible(obj->resv, NULL);
> +	if (ret)
> +		return ret;
> +
> +	ret = obj->funcs->pin(obj);
> +	dma_resv_unlock(obj->resv);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(drm_gem_pin_unlocked);
> +
> +void drm_gem_unpin_unlocked(struct drm_gem_object *o...
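To make the intended calling convention of the quoted helpers concrete, here is a hedged usage sketch assuming the drm_gem_pin_unlocked()/drm_gem_unpin_unlocked() pair from this patch; example_make_resident() is a hypothetical caller. The point of the _unlocked variants is that the caller must not already hold obj->resv, since the helpers take it themselves.

#include <drm/drm_gem.h>

static int example_make_resident(struct drm_gem_object *obj)
{
	int ret;

	/* Takes obj->resv internally (interruptibly), pins, then unlocks. */
	ret = drm_gem_pin_unlocked(obj);
	if (ret)
		return ret;

	/* ... access the now-pinned backing storage ... */

	/* Again takes and drops obj->resv around ->unpin(). */
	drm_gem_unpin_unlocked(obj);
	return 0;
}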
2023 Mar 26
0
[PATCH v13 01/10] drm/shmem-helper: Switch to reservation lock
...Osipenko:
> On 3/15/23 16:46, Dmitry Osipenko wrote:
>> On 3/14/23 05:26, Dmitry Osipenko wrote:
>>> @@ -633,7 +605,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>>> 		return ret;
>>> 	}
>>>
>>> +	dma_resv_lock(shmem->base.resv, NULL);
>>> 	ret = drm_gem_shmem_get_pages(shmem);
>>> +	dma_resv_unlock(shmem->base.resv);
>> Intel CI reported a locking problem [1] here. It was actually also
>> reported for v12, but I missed that report because of the other noisy
>> re...
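For context, this is the pattern the hunk introduces, shown as a hedged, stand-alone sketch; example_mmap_get_pages() is hypothetical, drm_gem_shmem_get_pages() is the helper-internal function from the quoted hunk, and the real series touches more call sites than this. Page acquisition in the mmap path becomes serialized by the GEM object's reservation lock rather than a shmem-local mutex.

#include <drm/drm_gem_shmem_helper.h>
#include <linux/dma-resv.h>

static int example_mmap_get_pages(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* Serialize page allocation on the GEM object's reservation
	 * lock, mirroring what the quoted hunk adds to the mmap path. */
	dma_resv_lock(shmem->base.resv, NULL);
	ret = drm_gem_shmem_get_pages(shmem);
	dma_resv_unlock(shmem->base.resv);

	return ret;
}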
2020 May 15
0
[PATCH v3 1/4] dma-buf: add support for virtio exported objects
...lback to fetch the UUID.
Hm, ok. I guess if we go with the older patch, where all of this is simply
code in virtio, adding an extra function to allocate the uuid sounds
fine. Then synchronization is entirely up to the virtio subsystem and not
a dma-buf problem (and hence not mine). You can use dma_resv_lock or so,
but there is no need to. But with callbacks potentially going both ways, things
always get a bit interesting wrt locking - this is what makes peer2peer
dma-buf so painful right now. Hence I'd like to avoid that if possible, at
least at the dma-buf level. For the virtio code, I don't mind what you do ther...
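As a hedged illustration of the point being made (the reservation lock is optional when handing out the UUID), here is a sketch of what an exporter-side helper could look like; example_virtio_object, its uuid field, and the helper name are assumptions for illustration, not the actual patch.

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/uuid.h>

/* Hypothetical exporter-private object holding the host-assigned UUID. */
struct example_virtio_object {
	uuid_t uuid;
};

static int example_virtio_dmabuf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid)
{
	struct example_virtio_object *obj = dmabuf->priv;

	/* Nothing in dma-buf requires dmabuf->resv for this; taking it is
	 * merely a way to get some extra lockdep coverage if desired. */
	dma_resv_lock(dmabuf->resv, NULL);
	uuid_copy(uuid, &obj->uuid);
	dma_resv_unlock(dmabuf->resv);

	return 0;
}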
2019 Sep 16
4
[PATCH 0/4] drm/nouveau: Miscellaneous fixes
From: Thierry Reding <treding at nvidia.com>
Hi Ben,
these are fixes for a couple of issues that I've been running into when
testing on various Tegra boards. The first two patches fix up issues in
the patch that I had sent out earlier to address the regression introduced in
drm-misc-next. The first one is critical because it avoids a BUG_ON as
reported by Ilia, while the second is less
2019 Sep 10
1
[Intel-gfx] [PATCH v6 08/17] drm/ttm: use gem vma_node
...vice *dev,
	struct nouveau_drm *drm = nouveau_drm(dev);
	struct nouveau_bo *nvbo;
	struct dma_resv *robj = attach->dmabuf->resv;
-	size_t size = attach->dmabuf->size;
+	u64 size = attach->dmabuf->size;
	u32 flags = 0;
+	int align = 0;
	int ret;
	flags = TTM_PL_FLAG_TT;
	dma_resv_lock(robj, NULL);
-	nvbo = nouveau_bo_alloc(&drm->client, size, flags, 0, 0);
+	nvbo = nouveau_bo_alloc(&drm->client, &size, &align, flags, 0, 0);
	dma_resv_unlock(robj);
	if (IS_ERR(nvbo))
		return ERR_CAST(nvbo);
@@ -84,7 +85,7 @@ struct drm_gem_object *nouveau_gem_prime_impor...
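The locking shape of the hunk above, extracted into a hedged, driver-agnostic sketch; example_bo, example_bo_alloc() and example_prime_import() are hypothetical stand-ins. The exporter's reservation object is held across allocation, as nouveau does around nouveau_bo_alloc(), so the freshly created BO can be set up against the shared resv.

#include <drm/drm_device.h>
#include <drm/drm_gem.h>
#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

struct example_bo {
	struct drm_gem_object gem;
};

/* Hypothetical allocator that initializes the new BO against 'robj'. */
struct example_bo *example_bo_alloc(struct drm_device *dev, u64 size,
				    struct dma_resv *robj);

static struct drm_gem_object *
example_prime_import(struct drm_device *dev, struct dma_buf_attachment *attach)
{
	struct dma_resv *robj = attach->dmabuf->resv;
	struct example_bo *bo;

	/* Hold the exporter's reservation lock across BO allocation. */
	dma_resv_lock(robj, NULL);
	bo = example_bo_alloc(dev, attach->dmabuf->size, robj);
	dma_resv_unlock(robj);

	if (IS_ERR(bo))
		return ERR_CAST(bo);

	return &bo->gem;
}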
2019 Aug 21
2
[Intel-gfx] [PATCH v6 08/17] drm/ttm: use gem vma_node
On Wed, Aug 21, 2019 at 04:33:58PM +1000, Ben Skeggs wrote:
> On Wed, 14 Aug 2019 at 20:14, Gerd Hoffmann <kraxel at redhat.com> wrote:
> >
> > Hi,
> >
> > > > Changing the order doesn't look hard. Patch attached (untested, have no
> > > > test hardware). But maybe I missed some detail ...
> > >
> > > I came up with
2023 Aug 20
3
[PATCH drm-misc-next 0/3] [RFC] DRM GPUVA Manager GPU-VM features
So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.
However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM