search for: __drm_gpuva_sm_map

Displaying 6 results from an estimated 6 matches for "__drm_gpuva_sm_map".

2023 Mar 13
1
[PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
...he new
>>>>> entry.
>>>>>
>>>>>>
>>>>>> I already provided this example in a separate mail thread, but it may make
>>>>>> sense to move this to the mailing list:
>>>>>>
>>>>>> In __drm_gpuva_sm_map() we're iterating a given range of the tree, where the
>>>>>> given range is the size of the newly requested mapping. __drm_gpuva_sm_map()
>>>>>> invokes a callback for each sub-operation that needs to be taken in order to
>>>>>> fulfill thi...
2023 Mar 06
2
[PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
...he store operation will now point to the new node with the new
>>> entry.
>>>
>>>>
>>>> I already provided this example in a separate mail thread, but it may make
>>>> sense to move this to the mailing list:
>>>>
>>>> In __drm_gpuva_sm_map() we're iterating a given range of the tree, where the
>>>> given range is the size of the newly requested mapping. __drm_gpuva_sm_map()
>>>> invokes a callback for each sub-operation that needs to be taken in order to
>>>> fulfill this mapping request. In mo...
2023 Feb 27
2
[PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
...e may become stale, but the one you used to
> execute the store operation will now point to the new node with the new
> entry.
>
>>
>> I already provided this example in a separate mail thread, but it may make
>> sense to move this to the mailing list:
>>
>> In __drm_gpuva_sm_map() we're iterating a given range of the tree, where the
>> given range is the size of the newly requested mapping. __drm_gpuva_sm_map()
>> invokes a callback for each sub-operation that needs to be taken in order to
>> fulfill this mapping request. In most cases such a callback...
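
A note on the mechanism quoted above: __drm_gpuva_sm_map() walks every
existing VA entry overlapping the requested range and invokes one callback
per required sub-operation (the excerpts only show the unmap step). Below is
a minimal sketch of a driver-side unmap-step callback, assuming the
drm_gpuva_op layout and the sm_step_unmap() signature visible in the code
excerpts further down; the my_driver_* names and the PTE-zapping helper are
hypothetical.

static int my_driver_sm_step_unmap(struct drm_gpuva_op *op, void *priv)
{
	struct my_driver_ctx *ctx = priv;	/* hypothetical driver state */
	struct drm_gpuva *va = op->unmap.va;

	/*
	 * op->unmap.keep is set when the unmapped VA gets merged into the
	 * new mapping, i.e. its page table entries can stay in place.
	 */
	if (!op->unmap.keep)
		my_driver_zap_ptes(ctx, va);	/* hypothetical helper */

	return 0;
}
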
2023 Jun 29
3
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...riv);
+}
+
+static int
+op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
+	    struct drm_gpuva *va, bool merge)
+{
+	struct drm_gpuva_op op = {};
+
+	op.op = DRM_GPUVA_OP_UNMAP;
+	op.unmap.va = va;
+	op.unmap.keep = merge;
+
+	return fn->sm_step_unmap(&op, priv);
+}
+
+static int
+__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
+		   const struct drm_gpuva_fn_ops *ops, void *priv,
+		   u64 req_addr, u64 req_range,
+		   struct drm_gem_object *req_obj, u64 req_offset)
+{
+	struct drm_gpuva *va, *next, *prev = NULL;
+	u64 req_end = req_addr + req_range;
+	int ret;
+
+	if (unlikely(!drm_gpuva_...
2023 Jul 13
1
[PATCH drm-next v7 02/13] drm: manager to keep track of GPUs VA mappings
...riv);
+}
+
+static int
+op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
+	    struct drm_gpuva *va, bool merge)
+{
+	struct drm_gpuva_op op = {};
+
+	op.op = DRM_GPUVA_OP_UNMAP;
+	op.unmap.va = va;
+	op.unmap.keep = merge;
+
+	return fn->sm_step_unmap(&op, priv);
+}
+
+static int
+__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
+		   const struct drm_gpuva_fn_ops *ops, void *priv,
+		   u64 req_addr, u64 req_range,
+		   struct drm_gem_object *req_obj, u64 req_offset)
+{
+	struct drm_gpuva *va, *next, *prev = NULL;
+	u64 req_end = req_addr + req_range;
+	int ret;
+
+	if (unlikely(!drm_gpuva_...
2023 Jul 20
2
[PATCH drm-misc-next v8 01/12] drm: manager to keep track of GPUs VA mappings
...riv);
+}
+
+static int
+op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
+	    struct drm_gpuva *va, bool merge)
+{
+	struct drm_gpuva_op op = {};
+
+	op.op = DRM_GPUVA_OP_UNMAP;
+	op.unmap.va = va;
+	op.unmap.keep = merge;
+
+	return fn->sm_step_unmap(&op, priv);
+}
+
+static int
+__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
+		   const struct drm_gpuva_fn_ops *ops, void *priv,
+		   u64 req_addr, u64 req_range,
+		   struct drm_gem_object *req_obj, u64 req_offset)
+{
+	struct drm_gpuva *va, *next, *prev = NULL;
+	u64 req_end = req_addr + req_range;
+	int ret;
+
+	if (unlikely(!drm_gpuva_...
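
Taken together, the v6/v7/v8 excerpts all show the same internal entry
point. A hedged sketch of how the pieces would plug together follows: the
drm_gpuva_fn_ops table, its sm_step_unmap member, and the parameter list of
__drm_gpuva_sm_map() are taken from the excerpts, but __drm_gpuva_sm_map()
is declared static in the patch, so the direct call below merely stands in
for whatever exported wrapper the full patch provides; my_driver_bind() and
my_driver_ctx are hypothetical.

static const struct drm_gpuva_fn_ops my_driver_gpuva_ops = {
	/* Further sm_step_* members (map/remap steps) are assumed by
	 * analogy but do not appear in these excerpts. */
	.sm_step_unmap = my_driver_sm_step_unmap,
};

static int my_driver_bind(struct drm_gpuva_manager *mgr,
			  struct my_driver_ctx *ctx,	/* hypothetical */
			  struct drm_gem_object *obj,
			  u64 addr, u64 range, u64 offset)
{
	/*
	 * Each existing VA overlapping [addr, addr + range) results in
	 * one sm_step_* callback describing the required sub-operation.
	 */
	return __drm_gpuva_sm_map(mgr, &my_driver_gpuva_ops, ctx,
				  addr, range, obj, offset);
}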