search for: drm_nouveau_gem_pushbuf_push

Displaying 12 results from an estimated 12 matches for "drm_nouveau_gem_pushbuf_push".

2023 Aug 23
1
[PATCH drm-misc-next v2] drm/nouveau: uapi: don't pass NO_PREFETCH flag implicitly
Currently, NO_PREFETCH is passed implicitly through drm_nouveau_gem_pushbuf_push::length and drm_nouveau_exec_push::va_len. Since this is a direct representation of how the HW is programmed it isn't really future proof for a uAPI. Hence, fix this up for the new uAPI and split up the va_len field of struct drm_nouveau_exec_push, such that we keep 32bit for va_len and 32bit...
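(For reference, a minimal sketch of the split the patch describes, assuming a 64-bit va field alongside the 32-bit va_len and 32-bit flags mentioned in the snippet; the kernel's uapi header is the authoritative layout.)

/* Sketch only: the single 64-bit length is split so NO_PREFETCH is
 * carried in an explicit flags word instead of being encoded in the
 * length itself. The va field and the flag name/value are assumptions
 * made for illustration. */
#include <linux/types.h>

#define NOUVEAU_EXEC_PUSH_NO_PREFETCH_SKETCH (1 << 0)

struct drm_nouveau_exec_push {
        __u64 va;       /* GPU virtual address of the push buffer */
        __u32 va_len;   /* length in bytes, 32 bit after the split */
        __u32 flags;    /* e.g. the NO_PREFETCH bit above */
};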
2023 Aug 23
1
[PATCH drm-misc-next] drm/nouveau: uapi: don't pass NO_PREFETCH flag implicitly
On Tue, Aug 22, 2023 at 6:41 PM Danilo Krummrich <dakr at redhat.com> wrote: > Currently, NO_PREFETCH is passed implicitly through > drm_nouveau_gem_pushbuf_push::length and drm_nouveau_exec_push::va_len. > > Since this is a direct representation of how the HW is programmed it > isn't really future proof for a uAPI. Hence, fix this up for the new > uAPI and split up the va_len field of struct drm_nouveau_exec_push, > such that we keep 32b...
2023 Aug 22
2
[PATCH drm-misc-next] drm/nouveau: uapi: don't pass NO_PREFETCH flag implicitly
Currently, NO_PREFETCH is passed implicitly through drm_nouveau_gem_pushbuf_push::length and drm_nouveau_exec_push::va_len. Since this is a direct representation of how the HW is programmed it isn't really future proof for a uAPI. Hence, fix this up for the new uAPI and split up the va_len field of struct drm_nouveau_exec_push, such that we keep 32bit for va_len and 32bit...
2019 Aug 21
2
[PATCH 2/3] drm/nouveau: slowpath for pushbuf ioctl
...elocs; i++) { struct drm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; struct drm_nouveau_gem_pushbuf_bo *b; @@ -691,11 +680,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, struct nouveau_drm *drm = nouveau_drm(dev); struct drm_nouveau_gem_pushbuf *req = data; struct drm_nouveau_gem_pushbuf_push *push; + struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; struct drm_nouveau_gem_pushbuf_bo *bo; struct nouveau_channel *chan = NULL; struct validate_op op; struct nouveau_fence *fence = NULL; - int i, j, ret = 0, do_reloc = 0; + int i, j, ret = 0; + bool do_reloc = false; if (unlike...
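(The snippet above only shows the declarations that change. As a rough userspace-style sketch of the pattern it hints at, not the driver code itself: the reloc array is NULL-initialized at the top of the function so it can be allocated lazily in the slow path and freed once on the common exit path.)

#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

struct reloc_entry { unsigned int bo_index; unsigned long offset; };

/* Stand-in for the real relocation fixup. */
static int apply_relocs(struct reloc_entry *relocs, unsigned int n)
{
        (void)relocs; (void)n;
        return 0;
}

/* Illustrative only: allocate relocs lazily, free on the single exit path. */
static int pushbuf_submit(unsigned int nr_relocs, bool need_relocs)
{
        struct reloc_entry *reloc = NULL;   /* NULL so free() is always safe */
        int ret = 0;

        if (need_relocs && nr_relocs) {     /* slow path only */
                reloc = calloc(nr_relocs, sizeof(*reloc));
                if (!reloc)
                        return -ENOMEM;
                ret = apply_relocs(reloc, nr_relocs);
        }

        /* ... buffer validation and submission would happen here ... */

        free(reloc);                        /* common exit path */
        return ret;
}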
2019 Oct 21
1
[PATCH 2/3] drm/nouveau: slowpath for pushbuf ioctl
...elocs; i++) { struct drm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; struct drm_nouveau_gem_pushbuf_bo *b; @@ -693,11 +682,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, struct nouveau_drm *drm = nouveau_drm(dev); struct drm_nouveau_gem_pushbuf *req = data; struct drm_nouveau_gem_pushbuf_push *push; + struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; struct drm_nouveau_gem_pushbuf_bo *bo; struct nouveau_channel *chan = NULL; struct validate_op op; struct nouveau_fence *fence = NULL; - int i, j, ret = 0, do_reloc = 0; + int i, j, ret = 0; + bool do_reloc = false; if (unlike...
2019 Nov 04
2
[PATCH 2/3] drm/nouveau: slowpath for pushbuf ioctl
...elocs; i++) { struct drm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; struct drm_nouveau_gem_pushbuf_bo *b; @@ -693,11 +682,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, struct nouveau_drm *drm = nouveau_drm(dev); struct drm_nouveau_gem_pushbuf *req = data; struct drm_nouveau_gem_pushbuf_push *push; + struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; struct drm_nouveau_gem_pushbuf_bo *bo; struct nouveau_channel *chan = NULL; struct validate_op op; struct nouveau_fence *fence = NULL; - int i, j, ret = 0, do_reloc = 0; + int i, j, ret = 0; + bool do_reloc = false; if (unlike...
2020 Aug 28
8
[PATCH 0/6] drm/nouveau: Support sync FDs and sync objects
From: Thierry Reding <treding at nvidia.com> Hi, This series implements a new IOCTL to submit push buffers that can optionally return a sync FD or sync object to userspace. This is useful in cases where userspace wants to synchronize operations between the GPU and another driver (such as KMS for display). Among other things this allows extensions such as eglDupNativeFenceFDANDROID to be
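(A hypothetical illustration of what the cover letter describes: a submit ioctl whose argument struct lets userspace opt into getting a fence back, either as a sync FD or a sync object handle. All names, fields, and flag values below are invented for this sketch and are not taken from the series.)

#include <linux/types.h>

#define EXAMPLE_SUBMIT_OUT_FENCE_FD (1 << 0)  /* ask for a sync FD back */
#define EXAMPLE_SUBMIT_OUT_SYNCOBJ  (1 << 1)  /* ask for a sync object back */

struct example_submit {
        __u64 push;          /* userspace pointer to push buffer entries */
        __u32 nr_push;
        __u32 flags;         /* EXAMPLE_SUBMIT_* bits */
        __s32 out_fence_fd;  /* written by the kernel if OUT_FENCE_FD is set */
        __u32 out_syncobj;   /* written by the kernel if OUT_SYNCOBJ is set */
};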
2019 Aug 20
0
[PATCH 2/3] drm/nouveau: slowpath for pushbuf ioctl
...elocs; i++) { struct drm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; struct drm_nouveau_gem_pushbuf_bo *b; @@ -691,11 +680,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, struct nouveau_drm *drm = nouveau_drm(dev); struct drm_nouveau_gem_pushbuf *req = data; struct drm_nouveau_gem_pushbuf_push *push; + struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; struct drm_nouveau_gem_pushbuf_bo *bo; struct nouveau_channel *chan = NULL; struct validate_op op; struct nouveau_fence *fence = NULL; - int i, j, ret = 0, do_reloc = 0; + int i, j, ret = 0; + bool do_reloc = false; if (unlike...
2019 Sep 03
0
[PATCH 2/3] drm/nouveau: slowpath for pushbuf ioctl
...rm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; > struct drm_nouveau_gem_pushbuf_bo *b; > @@ -691,11 +680,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, > struct nouveau_drm *drm = nouveau_drm(dev); > struct drm_nouveau_gem_pushbuf *req = data; > struct drm_nouveau_gem_pushbuf_push *push; > + struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; > struct drm_nouveau_gem_pushbuf_bo *bo; > struct nouveau_channel *chan = NULL; > struct validate_op op; > struct nouveau_fence *fence = NULL; > - int i, j, ret = 0, do_reloc = 0; > + int i, j, ret = 0; >...
2019 Nov 05
0
[PATCH 2/3] drm/nouveau: slowpath for pushbuf ioctl
...rm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; > struct drm_nouveau_gem_pushbuf_bo *b; > @@ -693,11 +682,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, > struct nouveau_drm *drm = nouveau_drm(dev); > struct drm_nouveau_gem_pushbuf *req = data; > struct drm_nouveau_gem_pushbuf_push *push; > + struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; > struct drm_nouveau_gem_pushbuf_bo *bo; > struct nouveau_channel *chan = NULL; > struct validate_op op; > struct nouveau_fence *fence = NULL; > - int i, j, ret = 0, do_reloc = 0; > + int i, j, ret = 0; >...
2018 Jan 11
3
[PATCH 0/3] drm/nouveau: Add support for fence FDs
From: Thierry Reding <treding at nvidia.com> This small series of patches implements support for waiting on and emitting fence FDs on kickoff. This enables explicit fencing and can be used for example to synchronize buffer accesses between the display engine and the GPU on Tegra. The first patch lays the groundwork by splitting up nouveau_fence_sync() to allow reuse. Patch 2 is where the
2014 Sep 26
14
[RFC] Explicit synchronization for Nouveau
Hi guys, I'd like to start a new thread about explicit fence synchronization. This time with a Nouveau twist. :-) First, let me define what I understand by implicit/explicit sync:

Implicit synchronization
* Fences are attached to buffers
* Kernel manages fences automatically based on buffer read/write access

Explicit synchronization
* Fences are passed around independently
* Kernel takes
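(To make the explicit model concrete, a minimal sketch: the fence travels as a plain file descriptor, independent of any buffer, and can be waited on or handed to another driver. submit_work_stub() and present_buffer_stub() are hypothetical placeholders for a GPU submit ioctl that returns an out-fence FD and a consumer ioctl that accepts an in-fence FD; a real sync_file FD is pollable, which is what the poll() call relies on.)

#include <poll.h>

static int submit_work_stub(void)             { return -1; /* would return a sync FD */ }
static void present_buffer_stub(int fence_fd) { (void)fence_fd; /* would take an in-fence */ }

int main(void)
{
        int fence_fd = submit_work_stub();    /* producer emits a fence FD */

        if (fence_fd >= 0) {
                /* Optional CPU-side wait: a sync_file FD signals POLLIN. */
                struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };
                poll(&pfd, 1, -1);
        }

        present_buffer_stub(fence_fd);        /* or pass the fence on untouched */
        return 0;
}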