search for: dmabufs

Displaying 20 results from an estimated 107 matches for "dmabufs".

2020 May 13 · 2 · [PATCH v3 1/4] dma-buf: add support for virtio exported objects
On Wed, Mar 11, 2020 at 12:20 PM David Stevens <stevensd at chromium.org> wrote: > > This change adds a new dma-buf operation that allows dma-bufs to be used > by virtio drivers to share exported objects. The new operation allows > the importing driver to query the exporting driver for the UUID which > identifies the underlying exported object. > > Signed-off-by: David
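For context, this operation eventually landed upstream as a virtio-specific wrapper around dma_buf_ops, with a get_uuid callback reachable through include/linux/virtio_dma_buf.h. A minimal importer-side sketch; the surrounding function is invented for illustration:

#include <linux/dma-buf.h>
#include <linux/uuid.h>
#include <linux/virtio_dma_buf.h>

/* Importer-side sketch. identify_exported_object() is invented for
 * illustration; virtio_dma_buf_get_uuid() is the real entry point. */
static int identify_exported_object(struct dma_buf *buf)
{
	uuid_t uuid;
	int ret;

	/* Fails unless the exporter is a virtio driver that implements
	 * the get_uuid callback. */
	ret = virtio_dma_buf_get_uuid(buf, &uuid);
	if (ret < 0)
		return ret;

	/* The UUID identifies the underlying exported object and can be
	 * handed to the device so it attaches to the same resource. */
	return 0;
}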
2014 Sep 26 · 0 · [RFC PATCH 7/7] drm/prime: Support explicit fence on export
Allow user space to provide an explicit sync fence fd when exporting a dma-buf from gem handle. The fence will be stored as the explicit fence to the reservation object. Signed-off-by: Lauri Peltonen <lpeltonen at nvidia.com> --- drivers/gpu/drm/drm_prime.c | 41 +++++++++++++++++++++++++++++++++-------- include/uapi/drm/drm.h | 9 ++++++++- 2 files changed, 41 insertions(+), 9
2014 Sep 29 · 1 · [RFC PATCH 7/7] drm/prime: Support explicit fence on export
On Fri, Sep 26, 2014 at 01:00:12PM +0300, Lauri Peltonen wrote: > Allow user space to provide an explicit sync fence fd when exporting > a dma-buf from gem handle. The fence will be stored as the explicit > fence to the reservation object. > > Signed-off-by: Lauri Peltonen <lpeltonen at nvidia.com> All existing userspace treats dma_bufs as long-lived objects. Well, all the
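For readers coming to this thread now: the fence and reservation APIs have been renamed since 2014 (reservation_object is now dma_resv, and Android sync points became sync_file). In today's names, the core of what the patch does is roughly the following; the helper function and its error handling are illustrative, not the patch's code:

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>
#include <linux/sync_file.h>

/* Sketch: resolve a userspace-supplied sync-file fd to a fence and store
 * it in the buffer's reservation object as the exclusive (write) fence. */
static int attach_explicit_fence(struct dma_resv *resv, int fence_fd)
{
	struct dma_fence *fence;
	int ret;

	fence = sync_file_get_fence(fence_fd);
	if (!fence)
		return -EINVAL;

	ret = dma_resv_lock(resv, NULL);
	if (ret)
		goto out_put;

	ret = dma_resv_reserve_fences(resv, 1);
	if (!ret)
		/* Importers must wait on this fence before accessing. */
		dma_resv_add_fence(resv, fence, DMA_RESV_USAGE_WRITE);

	dma_resv_unlock(resv);
out_put:
	dma_fence_put(fence);
	return ret;
}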
2015 Jul 07 · 5 · CUDA fixed VA allocations and sparse mappings
Hello, I am currently looking into ways to support fixed virtual address allocations and sparse mappings in nouveau, as a step towards supporting CUDA. CUDA requires that the GPU virtual address for a given buffer match the CPU virtual address. Therefore, when mapping a CUDA buffer, we have to have a way of specifying a particular virtual address to map to (we would ask that the CPU virtual
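A rough userspace illustration of the constraint being described: reserve a CPU VA range first, then ask the driver to map the buffer at that exact GPU address. The ioctl here is purely hypothetical; the thread exists precisely because nouveau had no such interface at the time:

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Reserve a CPU VA range up front so a GPU mapping can later be placed
 * at the identical address, as CUDA requires. */
void *reserve_unified_va(size_t size)
{
	/* PROT_NONE reservation: no backing yet, just claims the range. */
	void *cpu_va = mmap(NULL, size, PROT_NONE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (cpu_va == MAP_FAILED)
		return NULL;

	/*
	 * Hypothetical second step, which is exactly what was missing:
	 * an ioctl taking an explicit GPU VA, e.g.
	 *
	 *   map_args.gpu_va = (uint64_t)cpu_va;
	 *   ioctl(drm_fd, DRM_IOCTL_NOUVEAU_VM_MAP_FIXED, &map_args);
	 */
	return cpu_va;
}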
2020 Aug 13 · 1 · [PATCH 20/20] drm: Remove obsolete GEM and PRIME callbacks from struct drm_driver
Hi Thomas. On Thu, Aug 13, 2020 at 10:36:44AM +0200, Thomas Zimmermann wrote: > Several GEM and PRIME callbacks have been deprecated in favor of > per-instance GEM object functions. Remove the callbacks as they are > now unused. The only exception is .gem_prime_mmap, which is still > in use by several drivers. > > What is also gone is gem_vm_ops in struct drm_driver. All
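For reference, the per-instance replacement pattern looks like this in driver code. Every my_*-prefixed name is hypothetical; only the struct members and DRM helpers are real:

#include <drm/drm_gem.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical driver fault handler; all my_* names are invented. */
static vm_fault_t my_gem_fault(struct vm_fault *vmf);

static const struct vm_operations_struct my_gem_vm_ops = {
	.fault = my_gem_fault,
	.open  = drm_gem_vm_open,
	.close = drm_gem_vm_close,
};

/* Per-instance object functions: vm_ops lives here now instead of the
 * removed drm_driver.gem_vm_ops. */
static const struct drm_gem_object_funcs my_gem_funcs = {
	.vm_ops = &my_gem_vm_ops,
};

static int my_gem_create(struct drm_device *dev, size_t size,
			 struct drm_gem_object **out)
{
	struct drm_gem_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	int ret;

	if (!obj)
		return -ENOMEM;

	/* Set per object at creation time, not per driver. */
	obj->funcs = &my_gem_funcs;
	ret = drm_gem_object_init(dev, obj, size);
	if (ret)
		kfree(obj);
	else
		*out = obj;
	return ret;
}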
2018 Jan 11 · 5 · [PATCH 1/5] drm/prime: Remove duplicate forward declaration
From: Thierry Reding <treding at nvidia.com> struct device is forward-declared twice. Remove the second instance. Reviewed-by: Chris Wilson <chris at chris-wilson.co.uk> Signed-off-by: Thierry Reding <treding at nvidia.com> --- include/drm/drm_prime.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h index
2018 Sep 26 · 0 · [PATCH 0/3] virtio: add vmap support for prime objects
Hi, > Having support for vmapping dmabufs is required to share > dmabufs with drivers that want CPU access. This is the case of > a vivid to virtio-gpu pipeline, where the virtio-gpu driver > exports dmabufs to the video4linux vivid driver. > > The first patch adds virtio_gpu_object_kunmap() and calls > it from the TTM object destroy path. This function wi...
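An importer-side sketch of the kind of CPU access this enables, written against the current iosys_map-based vmap API (the 2018 patch predates it and returned a raw pointer); the function itself is invented for illustration:

#include <linux/dma-buf.h>
#include <linux/iosys-map.h>

/* Invented importer helper: zero a dma-buf through a kernel vmap. */
static int cpu_clear(struct dma_buf *buf)
{
	struct iosys_map map;
	int ret;

	ret = dma_buf_vmap_unlocked(buf, &map);
	if (ret)
		return ret;	/* exporter does not support vmap */

	/* The kernel virtual mapping stays valid until vunmap. */
	iosys_map_memset(&map, 0, 0, buf->size);

	dma_buf_vunmap_unlocked(buf, &map);
	return 0;
}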
2020 Aug 13 · 0 · [PATCH 20/20] drm: Remove obsolete GEM and PRIME callbacks from struct drm_driver
Several GEM and PRIME callbacks have been deprecated in favor of per-instance GEM object functions. Remove the callbacks as they are now unused. The only exception is .gem_prime_mmap, which is still in use by several drivers. What is also gone is gem_vm_ops in struct drm_driver. All drivers now use struct drm_gem_object_funcs.vm_ops instead. While at it, the patch also improves error handling
2019 Sep 10 · 1 · [Intel-gfx] [PATCH v6 08/17] drm/ttm: use gem vma_node
On Sat, Sep 07, 2019 at 09:58:46PM -0400, Ilia Mirkin wrote: > On Wed, Aug 21, 2019 at 7:55 AM Thierry Reding <thierry.reding at gmail.com> wrote: > > > > On Wed, Aug 21, 2019 at 04:33:58PM +1000, Ben Skeggs wrote: > > > On Wed, 14 Aug 2019 at 20:14, Gerd Hoffmann <kraxel at redhat.com> wrote: > > > > > > > > Hi, > > > >
2020 Sep 15 · 0 · [PATCH v2 21/21] drm: Remove obsolete GEM and PRIME callbacks from struct drm_driver
Several GEM and PRIME callbacks have been deprecated in favor of per-instance GEM object functions. Remove the callbacks as they are now unused. The only exception is .gem_prime_mmap, which is still in use by several drivers. What is also gone is gem_vm_ops in struct drm_driver. All drivers now use struct drm_gem_object_funcs.vm_ops instead. While at it, the patch also improves error handling
2019 May 08 · 3 · Re: [iGVT-g] GVT-g - suboptimal user experience
Hello. All features are about usability and a simple user experience. 1. It's about the local display / dmabuf feature. Currently the user needs to use the virt-viewer tool, but virt-manager already incorporates a graphical console. It would be nice if it could support accelerated GVT-g local display, preferably with minimum performance overhead. Also, virt-manager should allow using an mdev video card alone,
2019 Sep 16 · 4 · [PATCH 0/4] drm/nouveau: Miscellaneous fixes
From: Thierry Reding <treding at nvidia.com> Hi Ben, these are fixes for a couple of issues that I've been running into when testing on various Tegra boards. The first two patches fix up issues in the fix that I had sent out earlier to fix the regression introduced in drm-misc-next. The first one is critical because it avoids a BUG_ON as reported by Ilia, while the second is less
2019 Apr 04 · 1 · Proof of concept for GPU forwarding for Linux guest on Linux host.
Hi, This is a proof of concept of GPU forwarding for a Linux guest on a Linux host. I'd like to get comments and suggestions from the community before I put more time into it. To summarize what it is: 1. It's a solution to bring GPU acceleration to a Linux guest VM on a Linux host. It can work with different GPUs, although the current proof of concept only works with Intel GPUs. 2. The basic idea
2020 Mar 04 · 0 · [PATCH v2 1/4] dma-buf: add support for virtio exported objects
On Mon, Mar 02, 2020 at 09:15:21PM +0900, David Stevens wrote: > This change adds a new dma-buf operation that allows dma-bufs to be used > by virtio drivers to share exported objects. The new operation allows > the importing driver to query the exporting driver for the UUID which > identifies the underlying exported object. > > Signed-off-by: David Stevens <stevensd at
2023 Mar 26 · 0 · [PATCH v13 01/10] drm/shmem-helper: Switch to reservation lock
On 25.03.23 at 15:58, Dmitry Osipenko wrote: > On 3/15/23 16:46, Dmitry Osipenko wrote: >> On 3/14/23 05:26, Dmitry Osipenko wrote:
>>> @@ -633,7 +605,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>>>  		return ret;
>>>  	}
>>>
>>> +	dma_resv_lock(shmem->base.resv, NULL);
>>>  	ret =
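The locking pattern the series converges on, as a standalone sketch: page pinning is guarded by the GEM object's reservation lock (the same lock dma-buf users take) instead of a helper-private mutex. The function is illustrative, and it assumes the mmap path pins via a get_pages helper as the shmem helper of that era did; this is not the exact upstream code:

#include <drm/drm_gem_shmem_helper.h>
#include <linux/dma-resv.h>

/* Illustrative only: pin pages under the object's reservation lock. */
static int shmem_pin_pages_sketch(struct drm_gem_shmem_object *shmem)
{
	int ret;

	dma_resv_lock(shmem->base.resv, NULL);
	ret = drm_gem_shmem_get_pages(shmem);
	dma_resv_unlock(shmem->base.resv);

	return ret;
}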
2012 Aug 11 · 0 · [ANNOUNCE] libdrm 2.4.38
Alex Deucher (2):
  radeon: add some missing evergreen pci ids
  radeon: add some new SI pci ids
Chris Wilson (1):
  intel: Bail gracefully if we encounter an unknown Intel device
Cooper Yuan (1):
  libdrm/exynos: padding gem_mmap structure to 64-bit aligned
Damien Lespiau (1):
  intel: Remove two unused variables
Dave Airlie (4):
  libdrm: add missing caps from kernel
2019 Sep 17 · 0 · [RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
On Thu, Sep 12, 2019 at 06:41:21PM +0900, Tomasz Figa wrote: > This patch is an early RFC to judge the direction we are following in > our virtualization efforts in Chrome OS. The purpose is to start a > discussion on how to handle buffer sharing between multiple virtio > devices. > > On a side note, we are also working on a virtio video decoder interface > and
2019 Sep 13 · 0 · [RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
...as only the > host can know the requirements for memory allocation of the video > decode accelerator hardware. So you probably have some virtio-video-decoder. You allocate a gpu buffer, export it as dma-buf, import it into the decoder, then let the video decoder render to it. Right? Using dmabufs makes sense for sure. But we need a separate field in struct dma_buf for an (optional) host dmabuf identifier; we can't just hijack the dma address. cheers, Gerd
2019 Oct 08 · 0 · [RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
On Sat, Oct 05, 2019 at 02:41:54PM +0900, Tomasz Figa wrote: > Hi Daniel, Gerd, > > On Tue, Sep 17, 2019 at 10:23 PM Daniel Vetter <daniel at ffwll.ch> wrote: > > > > On Thu, Sep 12, 2019 at 06:41:21PM +0900, Tomasz Figa wrote: > > > This patch is an early RFC to judge the direction we are following in > > > our virtualization efforts in Chrome OS. The