Displaying 11 results from an estimated 11 matches for "drm_gpuva_manager_init".
2023 Jul 07
0
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...I like that. Will pick it up, thanks!
>
> ---
>
> diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
> index e47747f22126..6427c88c22ba 100644
> --- a/drivers/gpu/drm/drm_gpuva_mgr.c
> +++ b/drivers/gpu/drm/drm_gpuva_mgr.c
> @@ -675,8 +675,7 @@ drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> const char *name,
> u64 start_offset, u64 range,
> u64 reserve_offset, u64 reserve_range,
> - const struct drm_gpuva_fn_ops *ops,
> - enum drm_gpuva_manager_flags flags)
> + const struct drm_...
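(Hedged aside, not part of the patch: after a change like the one above, a call site would presumably look as sketched below, with the flags argument gone and the &drm_gpuva_fn_ops pointer as the final parameter. The hunk is truncated here, so "drv", "driver_gpuva_ops" and the VA-range values are made-up placeholders.)

        /* Sketch only: initialize the manager over a driver-chosen VA
         * window; the reserved node covers a small kernel-only area.
         */
        drm_gpuva_manager_init(&drv->mgr, "example-vm",
                               0, 1ULL << 47,   /* managed VA: start, range */
                               0, SZ_4K,        /* reserved: start, range */
                               &driver_gpuva_ops);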
2023 Feb 22
1
[PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
...
However, if a driver uses regions to track its separate sparse page
tables anyway it gets 1) for free, which is a nice synergy.
I totally agree that regions aren't for everyone though. Hence, I made
them an optional feature and by default regions are disabled. In order
to use them, drm_gpuva_manager_init() must be called with the
DRM_GPUVA_MANAGER_REGIONS feature flag.
I really would not want to open code regions or have two GPUVA manager
instances in nouveau to track sparse page tables. That would be really
messy, hence I hope we can agree on this to be an optional feature.
>
> I don...
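(Hedged illustration of the opt-in described above: aside from drm_gpuva_manager_init() and the DRM_GPUVA_MANAGER_REGIONS flag, which are named in the mail, the argument layout and all identifiers below are assumptions based on the pre-v6 signature quoted in the first result.)

        /* Sketch only: regions are disabled by default, so a driver that
         * wants to track sparse ranges this way opts in at init time.
         */
        drm_gpuva_manager_init(&drv->mgr, "nouveau-vm",
                               0, 1ULL << 47,   /* managed VA space */
                               0, SZ_4K,        /* kernel-reserved node */
                               &driver_gpuva_ops,
                               DRM_GPUVA_MANAGER_REGIONS);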
2023 Aug 20
3
[PATCH drm-misc-next 0/3] [RFC] DRM GPUVA Manager GPU-VM features
So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.
However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM ...
2023 Feb 23
1
[PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
...e sparse page
>>> tables anyway it gets 1) for free, which is a nice synergy.
>>>
>>> I totally agree that regions aren't for everyone though. Hence, I
>>> made them an optional feature and by default regions are disabled.
>>> In order to use them, drm_gpuva_manager_init() must be called with
>>> the DRM_GPUVA_MANAGER_REGIONS feature flag.
>>>
>>> I really would not want to open code regions or have two GPUVA
>>> manager instances in nouveau to track sparse page tables. That would
>>> be really messy, hence I hope we...
2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...lso be a single
huge one) within which it never merges.
>
> If this can be removed then the entire concept of regions in the GPUVA
> can be removed too (drop struct drm_gpuva_region). I say this because
> in Xe as I'm porting over to GPUVA the first thing I'm doing after
> drm_gpuva_manager_init is calling drm_gpuva_region_insert on the entire
> address space. To me this seems kinda useless but maybe I'm missing why
> you need this for Nouveau.
>
> Matt
>
>> +/**
>> + * @DRM_NOUVEAU_VM_BIND_OP_FREE: Free a reserved VA space region.
>> + */
>> +#...
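(The "one region spanning the whole address space" pattern Matt describes could be sketched roughly as below. drm_gpuva_region_insert() and struct drm_gpuva_region appear in the early, region-enabled revisions; the field names and error handling here are assumptions, not quoted from the series.)

        struct drm_gpuva_region *reg;
        int ret;

        /* Sketch only: back the entire managed range with a single region
         * right after drm_gpuva_manager_init().
         */
        reg = kzalloc(sizeof(*reg), GFP_KERNEL);
        if (!reg)
                return -ENOMEM;

        reg->va.addr = mgr->mm_start;
        reg->va.range = mgr->mm_range;

        ret = drm_gpuva_region_insert(mgr, reg);
        if (ret)
                kfree(reg);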
2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...ne) within which it never merges.
>
>>
>> If this can be removed then the entire concept of regions in the GPUVA
>> can be removed too (drop struct drm_gpuva_region). I say this because
>> in Xe as I'm porting over to GPUVA the first thing I'm doing after
>> drm_gpuva_manager_init is calling drm_gpuva_region_insert on the entire
>> address space. To me this seems kinda useless but maybe I'm missing why
>> you need this for Nouveau.
>>
>> Matt
>>
>>> +/**
>>> + * @DRM_NOUVEAU_VM_BIND_OP_FREE: Free a reserved VA space region....
2023 Jun 29
3
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...+ * return 0;
+ * }
+ *
+ * 2) Receive a callback for each &drm_gpuva_op to create a new mapping::
+ *
+ * struct driver_context {
+ *         struct drm_gpuva_manager *mgr;
+ *         struct drm_gpuva *new_va;
+ *         struct drm_gpuva *prev_va;
+ *         struct drm_gpuva *next_va;
+ * };
+ *
+ * // ops to pass to drm_gpuva_manager_init()
+ * static const struct drm_gpuva_fn_ops driver_gpuva_ops = {
+ *         .sm_step_map = driver_gpuva_map,
+ *         .sm_step_remap = driver_gpuva_remap,
+ *         .sm_step_unmap = driver_gpuva_unmap,
+ * };
+ *
+ * // Typically drivers would embed the &drm_gpuva_manager and &drm_gpuva
+ * // structure i...
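(To tie the excerpt above together: a minimal, hedged sketch of what the map step callback referenced by driver_gpuva_ops might do with the pre-allocated driver_context entries. drm_gpuva_map() and drm_gpuva_link() are assumed to be the helpers this revision provides for that purpose.)

        static int driver_gpuva_map(struct drm_gpuva_op *op, void *__ctx)
        {
                struct driver_context *ctx = __ctx;

                /* Insert the pre-allocated GPUVA for the new mapping and
                 * link it to its backing GEM object.
                 */
                drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map);
                drm_gpuva_link(ctx->new_va);

                /* The manager owns it now; don't free it in the caller. */
                ctx->new_va = NULL;

                return 0;
        }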
2023 Jan 27
0
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...>> the same.
>>>
>>> If this can be removed then the entire concept of regions in the GPUVA
>>> can be removed too (drop struct drm_gpuva_region). I say this because
>>> in Xe as I'm porting over to GPUVA the first thing I'm doing after
>>> drm_gpuva_manager_init is calling drm_gpuva_region_insert on the entire
>>> address space.
>>
>> Also, since you've been starting to use the code, this [1] is the branch I'm
>> pushing my fixes for a v2 to. It already contains the changes for the GPUVA
>> manager except for switch...
2023 Jul 13
1
[PATCH drm-next v7 02/13] drm: manager to keep track of GPUs VA mappings
...+ * return 0;
+ * }
+ *
+ * 2) Receive a callback for each &drm_gpuva_op to create a new mapping::
+ *
+ * struct driver_context {
+ *         struct drm_gpuva_manager *mgr;
+ *         struct drm_gpuva *new_va;
+ *         struct drm_gpuva *prev_va;
+ *         struct drm_gpuva *next_va;
+ * };
+ *
+ * // ops to pass to drm_gpuva_manager_init()
+ * static const struct drm_gpuva_fn_ops driver_gpuva_ops = {
+ *         .sm_step_map = driver_gpuva_map,
+ *         .sm_step_remap = driver_gpuva_remap,
+ *         .sm_step_unmap = driver_gpuva_unmap,
+ * };
+ *
+ * // Typically drivers would embed the &drm_gpuva_manager and &drm_gpuva
+ * // structure i...
2023 Aug 31
3
[PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize dma_resv/extobj handling and GEM validation
...concurrent
>>>>>>> + * insertion / removal of different &drm_gpuva_gems
>>>>>>> + */
>>>>>>> + spinlock_t lock;
>>>>>>> + } evict;
>>>>>>> };
>>>>>>> void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
>>>>>>> + struct drm_device *drm,
>>>>>>> const char *name,
>>>>>>> u64 start_offset, u64 range,
>>>>>>> u64 reserve_offset, u64 reserve_range...
2023 Jul 20
2
[PATCH drm-misc-next v8 01/12] drm: manager to keep track of GPUs VA mappings
...+ * return 0;
+ * }
+ *
+ * 2) Receive a callback for each &drm_gpuva_op to create a new mapping::
+ *
+ * struct driver_context {
+ *         struct drm_gpuva_manager *mgr;
+ *         struct drm_gpuva *new_va;
+ *         struct drm_gpuva *prev_va;
+ *         struct drm_gpuva *next_va;
+ * };
+ *
+ * // ops to pass to drm_gpuva_manager_init()
+ * static const struct drm_gpuva_fn_ops driver_gpuva_ops = {
+ *         .sm_step_map = driver_gpuva_map,
+ *         .sm_step_remap = driver_gpuva_remap,
+ *         .sm_step_unmap = driver_gpuva_unmap,
+ * };
+ *
+ * // Typically drivers would embed the &drm_gpuva_manager and &drm_gpuva
+ * // structure i...