Displaying 6 results from an estimated 6 matches for "drm_ioctl_prime_fd_to_handl".
2015 Jul 07
5
CUDA fixed VA allocations and sparse mappings
...to the virtual address pool in the case of the free
ioctl). If NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET is set, offset specifies the
requested virtual address. Otherwise, an arbitrary address will be
allocated.
In addition to this, a way to map/unmap buffers is needed. Ordinarily, one
would just use DRM_IOCTL_PRIME_FD_TO_HANDLE to import and map a dmabuf into
gem. However, this ioctl will try to grab the virtual address range for this
buffer, which will fail in the CUDA case since the virtual address range
has been reserved ahead of time. So we perhaps introduce a set of ioctls
to map/unmap buffers on top of an already...
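To make the interface floated in this excerpt concrete, here is a rough C sketch of such a fixed-offset address-space allocation ioctl. Only the NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET flag and the meaning of the offset field come from the thread; the struct layout, field names, and the ioctl itself are illustrative assumptions, not real nouveau UAPI.

#include <linux/types.h>

/* Illustrative sketch only: this struct and ioctl are NOT part of the real
 * nouveau UAPI.  Only the FIXED_OFFSET flag and the 'offset' semantics are
 * taken from the proposal quoted above. */
#define NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET 0x1

struct drm_nouveau_as_alloc {
	__u64 size;    /* length of the GPU VA range to reserve, in bytes */
	__u64 offset;  /* in: requested VA if FIXED_OFFSET is set; out: VA actually allocated */
	__u32 flags;   /* NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET, ... */
	__u32 pad;
};

static void reserve_fixed_va(void)
{
	/* Reserve a 1 MiB range at a virtual address CUDA picked ahead of time. */
	struct drm_nouveau_as_alloc req = {
		.size   = 1ull << 20,
		.offset = 0x200000000ull,
		.flags  = NOUVEAU_AS_ALLOC_FLAGS_FIXED_OFFSET,
	};
	(void)req; /* ioctl(drm_fd, DRM_IOCTL_NOUVEAU_AS_ALLOC, &req) would go here */
}

A separate map/unmap ioctl pair, as suggested in the excerpt, would then place buffers inside a range reserved this way instead of letting DRM_IOCTL_PRIME_FD_TO_HANDLE pick the address.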
2014 Sep 26
0
[RFC PATCH 7/7] drm/prime: Support explicit fence on export
...ude/uapi/drm/drm.h
@@ -661,13 +661,20 @@ struct drm_set_client_cap {
 };
 
 #define DRM_CLOEXEC O_CLOEXEC
+#define DRM_SYNC_FD O_DSYNC
 struct drm_prime_handle {
 	__u32 handle;
 
 	/** Flags.. only applicable for handle->fd */
 	__u32 flags;
 
-	/** Returned dmabuf file descriptor */
+	/**
+	 * DRM_IOCTL_PRIME_FD_TO_HANDLE:
+	 *	in: dma-buf fd
+	 * DRM_IOCTL_PRIME_HANDLE_TO_FD:
+	 *	in: sync fence fd if DRM_SYNC_FD flag is passed
+	 *	out: dma-buf fd
+	 */
 	__s32 fd;
 };
--
1.8.1.5
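For orientation, a short userspace sketch of how the interface proposed by this patch would be used: with DRM_SYNC_FD passed to DRM_IOCTL_PRIME_HANDLE_TO_FD, the fd field carries a sync fence fd in and the exported dma-buf fd out. Note that DRM_SYNC_FD exists only in this RFC and never became part of the mainline UAPI; struct drm_prime_handle, DRM_CLOEXEC, DRM_IOCTL_PRIME_HANDLE_TO_FD and drmIoctl() (from libdrm) are real.

#include <fcntl.h>    /* O_CLOEXEC, O_DSYNC */
#include <stdint.h>
#include <xf86drm.h>  /* drmIoctl(), from libdrm */
#include <drm.h>      /* struct drm_prime_handle, DRM_IOCTL_PRIME_HANDLE_TO_FD */

#ifndef DRM_SYNC_FD
#define DRM_SYNC_FD O_DSYNC   /* proposed by this RFC only, not in mainline drm.h */
#endif

/* Export 'handle' as a dma-buf while attaching 'fence_fd' as an explicit
 * pre-fence, following the in/out semantics documented in the patch above.
 * Returns the dma-buf fd, or -1 on error. */
static int export_with_fence(int drm_fd, uint32_t handle, int fence_fd)
{
	struct drm_prime_handle args = {
		.handle = handle,
		.flags  = DRM_CLOEXEC | DRM_SYNC_FD,
		.fd     = fence_fd,   /* in: sync fence fd, because DRM_SYNC_FD is set */
	};

	if (drmIoctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args))
		return -1;

	return args.fd;               /* out: dma-buf fd */
}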
2015 Jul 07
2
CUDA fixed VA allocations and sparse mappings
...ng is invalidated or updated to point to system memory. So
most of the logic for everything else remains the same (we just need to update
the multiple virtual address spaces).
>
> >
> > In addition to this, a way to map/unmap buffers is needed. Ordinarily, one
> > would just use DRM_IOCTL_PRIME_FD_TO_HANDLE to import and map a dmabuf into
> > gem. However, this ioctl will try to grab the virtual address range for this
> > buffer, which will fail in the CUDA case since the virtual address range
> > has been reserved ahead of time. So we perhaps introduce a set of ioctls
> > t...
2014 Sep 29
1
[RFC PATCH 7/7] drm/prime: Support explicit fence on export
...p {
> };
>
> #define DRM_CLOEXEC O_CLOEXEC
> +#define DRM_SYNC_FD O_DSYNC
> struct drm_prime_handle {
> 	__u32 handle;
>
> 	/** Flags.. only applicable for handle->fd */
> 	__u32 flags;
>
> -	/** Returned dmabuf file descriptor */
> +	/**
> +	 * DRM_IOCTL_PRIME_FD_TO_HANDLE:
> +	 *	in: dma-buf fd
> +	 * DRM_IOCTL_PRIME_HANDLE_TO_FD:
> +	 *	in: sync fence fd if DRM_SYNC_FD flag is passed
> +	 *	out: dma-buf fd
> +	 */
> 	__s32 fd;
> };
>
> --
> 1.8.1.5
>
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365...
2019 Apr 04
1
Proof of concept for GPU forwarding for Linux guest on Linux host.
Hi,
This is a proof of concept of GPU forwarding for a Linux guest on a Linux host.
I'd like to get comments and suggestions from the community before I put more
time into it. To summarize what it is:
1. It's a solution to bring GPU acceleration to a Linux VM guest on a Linux host.
It could work with different GPUs, although the current proof of concept only
works with Intel GPUs.
2. The basic idea
2014 Sep 26
14
[RFC] Explicit synchronization for Nouveau
Hi guys,
I'd like to start a new thread about explicit fence synchronization. This time
with a Nouveau twist. :-)
First, let me define what I understand by implicit/explicit sync:
Implicit synchronization
* Fences are attached to buffers
* Kernel manages fences automatically based on buffer read/write access
Explicit synchronization
* Fences are passed around independently
* Kernel takes
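A minimal sketch of the explicit model described in this thread: instead of the kernel resolving dependencies through fences attached to buffers, userspace holds the fence as a plain file descriptor and can wait on it directly. This assumes Android-style sync fence fds (or today's sync_file), which are pollable; the helper name is made up for illustration.

#include <poll.h>

/* Wait for an explicit fence, passed around as a file descriptor, to signal.
 * Returns >0 once the fence has signaled, 0 on timeout, -1 on error. */
static int wait_fence_fd(int fence_fd, int timeout_ms)
{
	struct pollfd p = { .fd = fence_fd, .events = POLLIN };
	return poll(&p, 1, timeout_ms);
}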