search for: gpuva

Displaying 20 results from an estimated 32 matches for "gpuva".

2023 Jun 23
1
[PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings
...de <drm/drm_vma_manager.h> >>>>>>>> @@ -379,6 +381,18 @@ struct drm_gem_object { >>>>>>>>  */ >>>>>>>> struct dma_resv _resv; >>>>>>>> + /** >>>>>>>> + * @gpuva: >>>>>>>> + * >>>>>>>> + * Provides the list of GPU VAs attached to this GEM object. >>>>>>>> + * >>>>>>>> + * Drivers should lock list accesses with the GEMs >>>>>>>...
2023 Jun 23
1
[PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings
...anager.h> >>>>>>>>> @@ -379,6 +381,18 @@ struct drm_gem_object { >>>>>>>>>  */ >>>>>>>>> struct dma_resv _resv; >>>>>>>>> + /** >>>>>>>>> + * @gpuva: >>>>>>>>> + * >>>>>>>>> + * Provides the list of GPU VAs attached to this GEM object. >>>>>>>>> + * >>>>>>>>> + * Drivers should lock list accesses with the GEMs >>>...
2023 Jun 22
2
[PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings
...>>> #include <drm/drm_vma_manager.h> >>>>>>> @@ -379,6 +381,18 @@ struct drm_gem_object { >>>>>>>  */ >>>>>>> struct dma_resv _resv; >>>>>>> + /** >>>>>>> + * @gpuva: >>>>>>> + * >>>>>>> + * Provides the list of GPU VAs attached to this GEM object. >>>>>>> + * >>>>>>> + * Drivers should lock list accesses with the GEMs &dma_resv >>>>>>>...
2023 Jan 27
2
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...h: > [SNIP] >>>> >>>> What you want is one component for tracking the VA allocations >>>> (drm_mm based) and a different component/interface for tracking the >>>> VA mappings (probably rb tree based). >>> >>> That's what the GPUVA manager is doing. There are gpuva_regions >>> which correspond to VA allocations and gpuvas which represent the >>> mappings. Both are tracked separately (currently both with a >>> separate drm_mm, though). However, the GPUVA manager needs to take >>> regions...
2023 Jan 30
2
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...What you want is one component for tracking the VA allocations >>>>>>> (drm_mm based) and a different component/interface for tracking >>>>>>> the VA mappings (probably rb tree based). >>>>>> >>>>>> That's what the GPUVA manager is doing. There are gpuva_regions >>>>>> which correspond to VA allocations and gpuvas which represent the >>>>>> mappings. Both are tracked separately (currently both with a >>>>>> separate drm_mm, though). However, the GPUVA manager...
2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...>>> >>>>> What you want is one component for tracking the VA allocations >>>>> (drm_mm based) and a different component/interface for tracking the >>>>> VA mappings (probably rb tree based). >>>> >>>> That's what the GPUVA manager is doing. There are gpuva_regions >>>> which correspond to VA allocations and gpuvas which represent the >>>> mappings. Both are tracked separately (currently both with a >>>> separate drm_mm, though). However, the GPUVA manager needs to take >>...
2023 Mar 16
0
[PATCH drm-next 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI
...mappings. But that's > really the extreme case imo. I assume most mappings will be much > larger. In fact, in the most realistic scenario of large-scale > training, a single user will probably map the entire HBM memory using > 1GB pages. > > I also have a question: could this GPUVA code manage VA range > mappings for userptr mappings, assuming we work without svm/uva/usm > (pointer-is-a-pointer)? Because then we are talking about possible > 4KB mappings of 1 - 1.5 TB of host server RAM (implied in my question is > the assumption this can be used also for non-VK use...
2023 Jul 07
0
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...ris Brezillon <boris.brezillon at collabora.com> wrote: > >> On Fri, 30 Jun 2023 00:25:18 +0200 >> Danilo Krummrich <dakr at redhat.com> wrote: >> >>> +#ifdef CONFIG_LOCKDEP >>> +typedef struct lockdep_map *lockdep_map_p; >>> +#define drm_gpuva_manager_ext_assert_held(mgr) \ >>> + lockdep_assert(lock_is_held((mgr)->ext_lock) != LOCK_STATE_NOT_HELD) >>> +/** >>> + * drm_gpuva_manager_set_ext_lock - set the external lock according to >>> + * @DRM_GPUVA_MANAGER_LOCK_EXTERN >>> + * @mgr: the &a...
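For context, a rough sketch of how a driver could satisfy the assertion quoted above might look as follows; drm_gpuva_manager_set_ext_lock() is named in the quoted kernel-doc, but its argument list, the driver structure and the insert path shown here are assumptions rather than the series' actual API:

/* Sketch only: pass the driver lock's lockdep map to the manager so
 * drm_gpuva_manager_ext_assert_held() has something to check. struct mutex
 * only carries a dep_map when lockdep is enabled, matching the
 * #ifdef CONFIG_LOCKDEP in the quoted patch. */
struct my_gpu_vm {
	struct drm_gpuva_manager mgr;
	struct mutex va_lock;	/* hypothetical external GPUVA list lock */
};

static void my_vm_lock_init(struct my_gpu_vm *vm)
{
	mutex_init(&vm->va_lock);
	drm_gpuva_manager_set_ext_lock(&vm->mgr, &vm->va_lock.dep_map);
}

static void my_vm_insert(struct my_gpu_vm *vm, struct drm_gpuva *va)
{
	mutex_lock(&vm->va_lock);
	drm_gpuva_insert(&vm->mgr, va);	/* the assertion would hold here */
	mutex_unlock(&vm->va_lock);
}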
2023 Aug 20
3
[PATCH drm-misc-next 0/3] [RFC] DRM GPUVA Manager GPU-VM features
So far the DRM GPUVA manager offers common infrastructure to track GPU VA allocations and mappings, generically connect GPU VA mappings to their backing buffers and perform more complex mapping operations on the GPU VA space. However, there are more design patterns commonly used by drivers, which can potentially be ge...
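As rough orientation for readers landing on this thread from the search results, a driver-side sketch of the common flow described above (track a GPU VA mapping in the manager and connect it to its backing GEM object) might look like this; drm_gpuva_insert() and drm_gpuva_link() are referenced elsewhere in these threads, but the field names and error handling below are assumptions, not the authoritative API:

#include <linux/slab.h>
#include <drm/drm_gpuva_mgr.h>

/* Hypothetical driver wrapper around the GPUVA manager. */
struct my_vm {
	struct drm_gpuva_manager mgr;
};

static int my_vm_map(struct my_vm *vm, struct drm_gem_object *obj,
		     u64 gpu_addr, u64 range, u64 bo_offset)
{
	struct drm_gpuva *va;
	int ret;

	va = kzalloc(sizeof(*va), GFP_KERNEL);
	if (!va)
		return -ENOMEM;

	/* Describe the mapping: a GPU VA range backed by a GEM object. */
	va->va.addr = gpu_addr;
	va->va.range = range;
	va->gem.obj = obj;
	va->gem.offset = bo_offset;

	/* Track it in the VA space... */
	ret = drm_gpuva_insert(&vm->mgr, va);
	if (ret) {
		kfree(va);
		return ret;
	}

	/* ...and put it on the backing buffer's GPU VA list. */
	drm_gpuva_link(va);
	return 0;
}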
2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...>> > >>>> What you want is one component for tracking the VA allocations > >>>> (drm_mm based) and a different component/interface for tracking the > >>>> VA mappings (probably rb tree based). > >>> > >>> That's what the GPUVA manager is doing. There are gpuva_regions > >>> which correspond to VA allocations and gpuvas which represent the > >>> mappings. Both are tracked separately (currently both with a > >>> separate drm_mm, though). However, the GPUVA manager needs to take > >...
2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...based) and a different component/interface for tracking the VA mappings (probably rb tree based). amdgpu has even gotten so far that the VA allocations are tracked in libdrm in userspace. Regards, Christian. > > It serves two purposes: > > 1. It gives the kernel (in particular the GPUVA manager) the bounds in > which it is allowed to merge mappings. E.g. when a user request asks > for a new mapping and we detect we could merge this mapping with an > existing one (used by a VKBuffer other than the one the mapping request > came for), the driver is not allowed to change the p...
2023 Jul 20
1
[PATCH drm-misc-next v8 01/12] drm: manager to keep track of GPUs VA mappings
...> Tested-by: Matthew Brost <matthew.brost at intel.com> > Tested-by: Donald Robson <donald.robson at imgtec.com> > Suggested-by: Dave Airlie <airlied at redhat.com> > Signed-off-by: Danilo Krummrich <dakr at redhat.com> [...] > diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c > new file mode 100644 > index 000000000000..dee2235530d6 > --- /dev/null > +++ b/drivers/gpu/drm/drm_gpuva_mgr.c [...] > +static bool > +drm_gpuva_check_overflow(u64 addr, u64 range) > +{ > + u64 end; > + > + return WARN(check...
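The helper is cut off by the search-result snippet; assuming the WARN() condition is built on check_add_overflow() from <linux/overflow.h>, a plausible completion would be:

#include <linux/overflow.h>

/* Assumed completion: warn (and report failure) when addr + range would
 * wrap around the 64-bit VA space. */
static bool
drm_gpuva_check_overflow(u64 addr, u64 range)
{
	u64 end;

	return WARN(check_add_overflow(addr, range, &end),
		    "GPUVA address limited to %zu bytes\n", sizeof(end));
}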
2023 Mar 10
0
[PATCH drm-next v2 00/16] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI
...us binds/unbinds through the job scheduler is a concern, I think it could be beneficial to (pre-)allocate page tables for newly requested mappings without the need to know whether there are existing mappings within this range already (ideally without tracking page table allocations separately from GPUVAs), such that we can update the VA space at job execution time. Same thing for freeing page tables for a range that only partially contains mappings at all. For that, reference counting page tables per mapping wouldn't really work. On the other hand, we need to consider that freeing page tab...
2023 Mar 10
2
[PATCH drm-next v2 00/16] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI
...iliar at all with Nouveau or TTM, and it might >> be something that's solved by another component, or I'm just >> misunderstanding how the whole thing is supposed to work. This being >> said, I'd really like to implement a VM_BIND-like uAPI in pancsf using >> the gpuva_manager infra you're proposing here, so please bear with me >> :-). >> >>> 2. (un-)map the requested memory bindings >>> 3. free structures and page tables >>> >>> - Separated generic job scheduler code from specific job imp...
2023 Jul 07
0
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
On 7/7/23 13:00, Boris Brezillon wrote: > On Fri, 30 Jun 2023 00:25:18 +0200 > Danilo Krummrich <dakr at redhat.com> wrote: > >> +/** >> + * drm_gpuva_for_each_va_range - iterator to walk over a range of &drm_gpuvas >> + * @va__: &drm_gpuva structure to assign to in each iteration step >> + * @mgr__: &drm_gpuva_manager to walk over >> + * @start__: starting offset, the first gpuva will overlap this >> + * @end...
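For context, a hedged usage sketch of the iterator quoted above; the macro parameters (va__, mgr__, start__) come from the quoted kernel-doc, while the end bound and the va->va.addr / va->va.range field names are assumptions:

static void my_dump_range(struct drm_gpuva_manager *mgr, u64 start, u64 end)
{
	struct drm_gpuva *va;

	/* Visits every mapping overlapping [start, end). */
	drm_gpuva_for_each_va_range(va, mgr, start, end) {
		pr_debug("gpuva: addr=0x%llx range=0x%llx\n",
			 va->va.addr, va->va.range);
	}
}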
2023 Jul 20
2
[PATCH drm-misc-next v8 01/12] drm: manager to keep track of GPUs VA mappings
...<donald.robson at imgtec.com> Suggested-by: Dave Airlie <airlied at redhat.com> Signed-off-by: Danilo Krummrich <dakr at redhat.com> --- Documentation/gpu/drm-mm.rst | 36 + drivers/gpu/drm/Makefile | 1 + drivers/gpu/drm/drm_gem.c | 3 + drivers/gpu/drm/drm_gpuva_mgr.c | 1728 +++++++++++++++++++++++++++++++ include/drm/drm_drv.h | 6 + include/drm/drm_gem.h | 79 ++ include/drm/drm_gpuva_mgr.h | 706 +++++++++++++ 7 files changed, 2559 insertions(+) create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c create mode 100644 includ...
2023 Jul 13
1
[PATCH drm-next v7 02/13] drm: manager to keep track of GPUs VA mappings
...<donald.robson at imgtec.com> Suggested-by: Dave Airlie <airlied at redhat.com> Signed-off-by: Danilo Krummrich <dakr at redhat.com> --- Documentation/gpu/drm-mm.rst | 36 + drivers/gpu/drm/Makefile | 1 + drivers/gpu/drm/drm_gem.c | 3 + drivers/gpu/drm/drm_gpuva_mgr.c | 1730 +++++++++++++++++++++++++++++++ include/drm/drm_drv.h | 6 + include/drm/drm_gem.h | 79 ++ include/drm/drm_gpuva_mgr.h | 706 +++++++++++++ 7 files changed, 2561 insertions(+) create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c create mode 100644 includ...
2023 Jun 29
3
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...is.brezillon at collabora.com> Suggested-by: Dave Airlie <airlied at redhat.com> Signed-off-by: Danilo Krummrich <dakr at redhat.com> --- Documentation/gpu/drm-mm.rst | 36 + drivers/gpu/drm/Makefile | 1 + drivers/gpu/drm/drm_gem.c | 3 + drivers/gpu/drm/drm_gpuva_mgr.c | 1743 +++++++++++++++++++++++++++++++ include/drm/drm_drv.h | 6 + include/drm/drm_gem.h | 52 + include/drm/drm_gpuva_mgr.h | 756 ++++++++++++++ 7 files changed, 2597 insertions(+) create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c create mode 100644 includ...
2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...e a VM and the entire address space is managed exactly > the same. The idea for alloc/free is to let UMDs allocate a portion of the VA space (which I call a region), basically the same thing Vulkan represents with a VKBuffer. It serves two purposes: 1. It gives the kernel (in particular the GPUVA manager) the bounds in which it is allowed to merge mappings. E.g. when a user request asks for a new mapping and we detect we could merge this mapping with an existing one (used by a VKBuffer other than the one the mapping request came for), the driver is not allowed to change the page table for the ex...
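To make the region idea concrete, a hedged sketch of the first step discussed above (reserving a VKBuffer-sized region that later bounds where mappings may be merged); drm_gpuva_region_insert() is mentioned further down in these threads, but the struct layout and argument list here are assumptions:

/* Sketch: reserve the UMD-requested VA range as a region. Individual
 * drm_gpuva mappings are inserted later and may only be merged within
 * these bounds. Field and parameter names are assumptions. */
static int my_vm_create_region(struct drm_gpuva_manager *mgr,
			       struct drm_gpuva_region *reg,
			       u64 addr, u64 range)
{
	reg->va.addr = addr;
	reg->va.range = range;

	return drm_gpuva_region_insert(mgr, reg);
}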
2023 Jan 27
0
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...C 0x0 >>> >>> Do you really need this operation? We have no concept of this in Xe, >>> e.g. we can create a VM and the entire address space is managed exactly >>> the same. >>> >>> If this can be removed then the entire concept of regions in the GPUVA >>> can be removed too (drop struct drm_gpuva_region). I say this because >>> in Xe as I'm porting over to GPUVA the first thing I'm doing after >>> drm_gpuva_manager_init is calling drm_gpuva_region_insert on the entire >>> address space. >> >&g...