search for: num_fenc

Displaying 15 results from an estimated 15 matches for "num_fenc".

2014 Sep 26
0
[RFC PATCH 1/7] android: Support creating sync fence from drm fences
...*f, struct fence_cb *cb)
         wake_up_all(&fence->wq);
 }

-/* TODO: implement a create which takes more that one sync_pt */
-struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
+struct sync_fence *sync_fence_create(const char *name,
+                                     struct fence **fences, int num_fences)
 {
-        struct sync_fence *fence;
+        struct sync_fence *sync_fence;
+        int size = offsetof(struct sync_fence, cbs[num_fences]);
+        int i;

-        fence = sync_fence_alloc(offsetof(struct sync_fence, cbs[1]), name);
-        if (fence == NULL)
+        sync_fence = sync_fence_alloc(size, name);
+        if (sync_fence == NULL)...
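As a quick illustration of the new constructor (a minimal sketch based only on the signature in the hunk above; the caller below is hypothetical):

#include "sync.h"       /* staging header providing sync_fence_create() */

/* Hypothetical caller: combine two already-referenced struct fence
 * pointers into one Android sync fence via the array-taking
 * sync_fence_create() introduced by this patch. */
static struct sync_fence *merge_two_fences(struct fence *a, struct fence *b)
{
        struct fence *fences[2] = { a, b };

        return sync_fence_create("merged", fences, 2);
}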
2023 Aug 20
3
[PATCH drm-misc-next 0/3] [RFC] DRM GPUVA Manager GPU-VM features
So far the DRM GPUVA manager offers common infrastructure to track GPU VA allocations and mappings, generically connect GPU VA mappings to their backing buffers and perform more complex mapping operations on the GPU VA space. However, there are more design patterns commonly used by drivers, which can potentially be generalized in order to make the DRM GPUVA manager represent a basic GPU-VM
2020 Aug 28
8
[PATCH 0/6] drm/nouveau: Support sync FDs and sync objects
From: Thierry Reding <treding at nvidia.com> Hi, This series implements a new IOCTL to submit push buffers that can optionally return a sync FD or sync object to userspace. This is useful in cases where userspace wants to synchronize operations between the GPU and another driver (such as KMS for display). Among other things this allows extensions such as eglDupNativeFenceFDANDROID to be
2014 Sep 26
14
[RFC] Explicit synchronization for Nouveau
Hi guys, I'd like to start a new thread about explicit fence synchronization. This time with a Nouveau twist. :-) First, let me define what I understand by implicit/explicit sync:

Implicit synchronization
* Fences are attached to buffers
* Kernel manages fences automatically based on buffer read/write access

Explicit synchronization
* Fences are passed around independently
* Kernel takes
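To make the explicit model concrete, here is a minimal userspace sketch, assuming the fence travels as a file descriptor that signals POLLIN once the fence fires (the wrapper name is illustrative):

#include <poll.h>

/* Wait on an explicit fence FD; no buffer is involved, the fence is
 * an object in its own right.  Returns >0 when signaled, 0 on
 * timeout, <0 on error. */
static int wait_fence_fd(int fence_fd, int timeout_ms)
{
        struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

        return poll(&pfd, 1, timeout_ms);
}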
2023 Aug 28
0
[PATCH v15 11/23] dma-resv: Add kref_put_dma_resv()
...struct ww_acquire_ctx *ctx)
> +{
> +        return kref_put_ww_mutex(kref, release, &resv->lock, ctx);
> +}
> +
>  void dma_resv_init(struct dma_resv *obj);
>  void dma_resv_fini(struct dma_resv *obj);
>  int dma_resv_reserve_fences(struct dma_resv *obj, unsigned int num_fences);
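A rough usage sketch of the proposed helper (the bo type and callbacks are hypothetical, and it is assumed the helper follows the usual kref_put_mutex() convention of invoking release() with the lock held):

#include <linux/dma-resv.h>
#include <linux/kref.h>
#include <linux/slab.h>

/* Hypothetical object embedding a dma_resv. */
struct my_bo {
        struct kref refcount;
        struct dma_resv resv;
};

static void my_bo_release(struct kref *kref)
{
        struct my_bo *bo = container_of(kref, struct my_bo, refcount);

        /* Assuming kref_put_mutex()-style semantics, resv.lock is
         * held here and must be dropped before the memory is freed. */
        dma_resv_unlock(&bo->resv);
        dma_resv_fini(&bo->resv);
        kfree(bo);
}

static void my_bo_put(struct my_bo *bo, struct ww_acquire_ctx *ctx)
{
        /* On the final put, resv.lock is taken under @ctx before
         * my_bo_release() runs. */
        kref_put_dma_resv(&bo->refcount, my_bo_release, &bo->resv, ctx);
}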
2014 Sep 26
0
[RFC PATCH 7/7] drm/prime: Support explicit fence on export
...rime_handle_to_fd(struct drm_device *dev,
         struct drm_gem_object *obj;
         int ret = 0;
         struct dma_buf *dmabuf;
+        struct fence *fence = NULL;
+
+        if (flags & DRM_SYNC_FD) {
+#ifdef CONFIG_SYNC
+                struct sync_fence *sf = sync_fence_fdget(*prime_fd);
+                if (!sf)
+                        return -ENOENT;
+                if (sf->num_fences != 1) {
+                        sync_fence_put(sf);
+                        return -EINVAL;
+                }
+                fence = fence_get(sf->cbs[0].sync_pt);
+                sync_fence_put(sf);
+                flags &= ~DRM_SYNC_FD;
+#else
+                return -ENODEV;
+#endif
+        }

         mutex_lock(&file_priv->prime.lock);
         obj = drm_gem_object_lookup(dev, file_priv, handle);
@@...
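From userspace, the proposed flag would be used roughly like this (a sketch: the fd-as-input semantics come from the hunk above, the fd-as-output follows the ioctl's normal behavior, and DRM_SYNC_FD itself is defined elsewhere in this series, not in upstream headers):

#include <stdint.h>
#include <xf86drm.h>    /* drmIoctl(), struct drm_prime_handle, DRM_CLOEXEC */

/* Export @gem_handle as a dma-buf FD with an explicit fence attached,
 * per the proposed DRM_SYNC_FD flag. */
static int export_with_fence(int drm_fd, uint32_t gem_handle, int sync_fd)
{
        struct drm_prime_handle args = {
                .handle = gem_handle,
                .flags  = DRM_CLOEXEC | DRM_SYNC_FD,
                .fd     = sync_fd,      /* in: sync fence, per the hunk above */
        };

        if (drmIoctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args))
                return -1;

        return args.fd;                 /* out: the exported dma-buf FD */
}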
2018 Jan 11
0
[PATCH 1/3] gpu: host1x: Add support for DMA fences
...ints and can therefore be
+ * waited using only hardware.
+ */
+bool host1x_fence_is_waitable(struct dma_fence *fence)
+{
+        struct dma_fence_array *array;
+        int i;
+
+        array = to_dma_fence_array(fence);
+        if (!array)
+                return fence->ops == &host1x_fence_ops;
+
+        for (i = 0; i < array->num_fences; ++i) {
+                if (array->fences[i]->ops != &host1x_fence_ops)
+                        return false;
+        }
+
+        return true;
+}
+
+/**
+ * host1x_fence_wait() - Insert waits for fence into channel
+ * @fence: DMA fence
+ * @host: Host1x
+ * @ch: Host1x channel
+ *
+ * Inserts wait commands into Host1x channel fen...
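A sketch of how a submit path might use the helper, per the semantics documented above (the fallback branch is illustrative, not part of the patch):

#include <linux/dma-fence.h>
#include <linux/host1x.h>

/* If every backing fence is a host1x fence, push the wait into the
 * channel so hardware handles it; otherwise fall back to a blocking
 * CPU-side wait. */
static int wait_for_fence(struct host1x *host, struct host1x_channel *ch,
                          struct dma_fence *fence)
{
        long err;

        if (host1x_fence_is_waitable(fence))
                return host1x_fence_wait(fence, host, ch);

        err = dma_fence_wait(fence, true);
        return err < 0 ? err : 0;
}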
2018 Jan 11
6
[PATCH 0/3] drm/tegra: Add support for fence FDs
From: Thierry Reding <treding at nvidia.com> This set of patches adds support for fences to Tegra DRM and complements the fence FD support for Nouveau. Technically this isn't necessary for a fence-based synchronization loop with Nouveau because the KMS core takes care of all that, but engines behind host1x can use the IOCTL extensions provided here to emit fence FDs that in turn can be
2014 Sep 29
1
[RFC PATCH 7/7] drm/prime: Support explicit fence on export
...drm_gem_object *obj;
>          int ret = 0;
>          struct dma_buf *dmabuf;
> +        struct fence *fence = NULL;
> +
> +        if (flags & DRM_SYNC_FD) {
> +#ifdef CONFIG_SYNC
> +                struct sync_fence *sf = sync_fence_fdget(*prime_fd);
> +                if (!sf)
> +                        return -ENOENT;
> +                if (sf->num_fences != 1) {
> +                        sync_fence_put(sf);
> +                        return -EINVAL;
> +                }
> +                fence = fence_get(sf->cbs[0].sync_pt);
> +                sync_fence_put(sf);
> +                flags &= ~DRM_SYNC_FD;
> +#else
> +                return -ENODEV;
> +#endif
> +        }
>
>          mutex_lock(&file_priv->prime.lock...
2023 Aug 31
3
[PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation
...else just let the driver open-code it and use the "building
>>> blocks" - will also expand the building blocks to what you mentioned above.
>>>
>>>>>> struct drm_gpuva_exec_ops {
>>>>>>         int (*fn) (struct drm_gpuva_exec *exec, int num_fences);
>>>>> Is this the fn argument from drm_gpuva_manager_lock_extra()?
>>>>>
>>>>>>         int (*bo_validate) (struct drm_gpuva_exec *exec, struct drm_gem_object
>>>>>> *obj);
>>>>> I guess we could also keep that with...
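For reference, this is how a driver might fill in the ops struct being debated (a sketch: the struct layout is taken from the quoted proposal, which is not upstream API, and everything else is hypothetical):

static int my_exec_step(struct drm_gpuva_exec *exec, int num_fences)
{
        /* driver-specific locking and fence reservation would go here */
        return 0;
}

static int my_bo_validate(struct drm_gpuva_exec *exec,
                          struct drm_gem_object *obj)
{
        /* driver-specific validation / eviction handling would go here */
        return 0;
}

static const struct drm_gpuva_exec_ops my_exec_ops = {
        .fn          = my_exec_step,
        .bo_validate = my_bo_validate,
};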
2019 Apr 04
1
Proof of concept for GPU forwarding for Linux guest on Linux host.
Hi, This is a proof of concept of GPU forwarding for a Linux guest on a Linux host. I'd like to get comments and suggestions from the community before I put more time into it. To summarize what it is:

1. It's a solution to bring GPU acceleration to a Linux VM guest on a Linux host. It could work with different GPUs, although the current proof of concept only works with Intel GPUs.

2. The basic idea
2018 Jan 11
3
[PATCH 0/3] drm/nouveau: Add support for fence FDs
From: Thierry Reding <treding at nvidia.com> This small series of patches implements support for waiting on and emitting fence FDs on kickoff. This enables explicit fencing and can be used for example to synchronize buffer accesses between the display engine and the GPU on Tegra. The first patch lays the groundwork by splitting up nouveau_fence_sync() to allow reuse. Patch 2 is where the
2014 May 14
17
[RFC PATCH v1 00/16] Convert all ttm drivers to use the new reservation interface
This series depends on the previously posted reservation api patches. Two of them are not yet in the for-next-fences branch of git://git.linaro.org/people/sumit.semwal/linux-3.x.git The missing patches are still in my vmwgfx_wip branch at git://people.freedesktop.org/~mlankhorst/linux All ttm drivers are converted to the fence api, fence_lock is removed and rcu is used in its place. qxl is the first
2014 Jul 31
19
[PATCH 01/19] fence: add debugging lines to fence_is_signaled for the callback
The fence_is_signaled callback should support being run in atomic context, but not in irq context.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com>
---
 include/linux/fence.h | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/include/linux/fence.h b/include/linux/fence.h
index d174585b874b..c1a4519ba2f5 100644
---
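The constraint reduces to a context check. A minimal paraphrase of the idea (the function name is hypothetical; the actual diff lives in include/linux/fence.h):

#include <linux/bug.h>
#include <linux/hardirq.h>

/* fence_is_signaled() may legitimately run in atomic context, e.g.
 * under a spinlock, but calling it from hard-irq context is a bug;
 * in_irq() was the era-appropriate test for hard-irq context. */
static inline void fence_check_calling_context(void)
{
        WARN_ON_ONCE(in_irq());
}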
2014 Jul 09
22
[PATCH 00/17] Convert TTM to the new fence interface.
This series applies on top of the driver-core-next branch of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git Before converting ttm to the new fence interface I had to fix some drivers to require a reservation before poking with fence_obj. After flipping the switch RCU becomes available instead, and the extra reservations can be dropped again. :-) I've done at least basic