Thomas Zimmermann
2020-Dec-09 14:25 UTC
[PATCH v3 5/8] drm/cma-helper: Provide a vmap function for short-term mappings
Implementations of the vmap/vunmap GEM callbacks may perform pinning
of the BO and may acquire the associated reservation object's lock.
Callers that only require a mapping of the contained memory can thus
interfere with other tasks that require exact pinning, such as scanout.
This is less of an issue with private CMA buffers, but may happen
with imported ones.

Therefore provide the new interface drm_gem_cma_vmap_local(), which only
performs the vmap operations. Callers have to hold the reservation lock
while the mapping persists.

This patch also connects GEM CMA helpers to the GEM object function with
equivalent functionality.

Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 35 ++++++++++++++++++++++++++++
 drivers/gpu/drm/vc4/vc4_bo.c         | 13 +++++++++++
 drivers/gpu/drm/vc4/vc4_drv.h        |  1 +
 include/drm/drm_gem_cma_helper.h     |  1 +
 4 files changed, 50 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 7942cf05cd93..40b3e8e3fc42 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.print_info = drm_gem_cma_print_info,
 	.get_sg_table = drm_gem_cma_get_sg_table,
 	.vmap = drm_gem_cma_vmap,
+	.vmap_local = drm_gem_cma_vmap_local,
 	.mmap = drm_gem_cma_mmap,
 	.vm_ops = &drm_gem_cma_vm_ops,
 };
@@ -471,6 +472,40 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
 
+/**
+ * drm_gem_cma_vmap_local - map a CMA GEM object into the kernel's virtual
+ *     address space
+ * @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ *       store.
+ *
+ * This function maps a buffer into the kernel's
+ * virtual address space. Since the CMA buffers are already mapped into the
+ * kernel virtual address space this simply returns the cached virtual
+ * address. Drivers using the CMA helpers should set this as their DRM
+ * driver's &drm_gem_object_funcs.vmap_local callback.
+ *
+ * Returns:
+ * 0 on success, or a negative error code otherwise.
+ */
+int drm_gem_cma_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
+{
+	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
+
+	/*
+	 * TODO: The code in drm_gem_cma_prime_import_sg_table_vmap()
+	 *       establishes this mapping. The correct solution would
+	 *       be to call dma_buf_vmap_local() here.
+	 *
+	 *       If we find a case where we absolutely have to call
+	 *       dma_buf_vmap_local(), the code needs to be restructured.
+	 */
+	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_cma_vmap_local);
+
 /**
  * drm_gem_cma_mmap - memory-map an exported CMA GEM object
  * @obj: GEM object
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index dc316cb79e00..ec57326c69c4 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,6 +387,7 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
 	.export = vc4_prime_export,
 	.get_sg_table = drm_gem_cma_get_sg_table,
 	.vmap = vc4_prime_vmap,
+	.vmap_local = vc4_prime_vmap_local,
 	.vm_ops = &vc4_vm_ops,
 };
 
@@ -797,6 +798,18 @@ int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 	return drm_gem_cma_vmap(obj, map);
 }
 
+int vc4_prime_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
+{
+	struct vc4_bo *bo = to_vc4_bo(obj);
+
+	if (bo->validated_shader) {
+		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
+		return -EINVAL;
+	}
+
+	return drm_gem_cma_vmap_local(obj, map);
+}
+
 struct drm_gem_object *
 vc4_prime_import_sg_table(struct drm_device *dev,
 			  struct dma_buf_attachment *attach,
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 43a1af110b3e..efb6c47d318f 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -812,6 +812,7 @@ struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
 int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+int vc4_prime_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
 int vc4_bo_cache_init(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
 void vc4_bo_dec_usecnt(struct vc4_bo *bo);
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 0a9711caa3e8..05122e71bc6d 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -99,6 +99,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct dma_buf_attachment *attach,
 				  struct sg_table *sgt);
 int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+int drm_gem_cma_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
 int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 
 /**
-- 
2.29.2
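[Editor's note: the calling convention described above ("Callers have to hold the reservation lock while the mapping persists") might look roughly like the sketch below. This is illustrative only, not part of the patch; the vunmap counterpart and error paths are assumptions about the rest of this series.]

```c
/*
 * Illustrative sketch only: a caller that needs a short-term kernel
 * mapping takes the reservation lock, maps, accesses the memory, then
 * unmaps and unlocks. drm_gem_vunmap_local() is an assumed counterpart
 * from elsewhere in this series; dma_resv_lock()/dma_resv_unlock() are
 * the standard reservation-object locking calls.
 */
static int example_access_bo(struct drm_gem_object *obj)
{
	struct dma_buf_map map;
	int ret;

	ret = dma_resv_lock(obj->resv, NULL);
	if (ret)
		return ret;

	/* No pinning, no locking inside: just returns the cached vaddr. */
	ret = drm_gem_cma_vmap_local(obj, &map);
	if (ret)
		goto out_unlock;

	/* ... access map.vaddr while the reservation lock is held ... */

	drm_gem_vunmap_local(obj, &map);	/* assumed counterpart */

out_unlock:
	dma_resv_unlock(obj->resv);
	return ret;
}
```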
Daniel Vetter
2020-Dec-11 09:40 UTC
[PATCH v3 5/8] drm/cma-helper: Provide a vmap function for short-term mappings
On Wed, Dec 09, 2020 at 03:25:24PM +0100, Thomas Zimmermann wrote:
> Implementations of the vmap/vunmap GEM callbacks may perform pinning
> of the BO and may acquire the associated reservation object's lock.
> Callers that only require a mapping of the contained memory can thus
> interfere with other tasks that require exact pinning, such as scanout.
> This is less of an issue with private CMA buffers, but may happen
> with imported ones.
> 
> Therefore provide the new interface drm_gem_cma_vmap_local(), which only
> performs the vmap operations. Callers have to hold the reservation lock
> while the mapping persists.
> 
> This patch also connects GEM CMA helpers to the GEM object function with
> equivalent functionality.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
> ---
>  drivers/gpu/drm/drm_gem_cma_helper.c | 35 ++++++++++++++++++++++++++++
>  drivers/gpu/drm/vc4/vc4_bo.c         | 13 +++++++++++
>  drivers/gpu/drm/vc4/vc4_drv.h        |  1 +
>  include/drm/drm_gem_cma_helper.h     |  1 +
>  4 files changed, 50 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 7942cf05cd93..40b3e8e3fc42 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
>  	.print_info = drm_gem_cma_print_info,
>  	.get_sg_table = drm_gem_cma_get_sg_table,
>  	.vmap = drm_gem_cma_vmap,
> +	.vmap_local = drm_gem_cma_vmap_local,
>  	.mmap = drm_gem_cma_mmap,
>  	.vm_ops = &drm_gem_cma_vm_ops,
>  };
> @@ -471,6 +472,40 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
>  
> +/**
> + * drm_gem_cma_vmap_local - map a CMA GEM object into the kernel's virtual
> + *     address space
> + * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + *       store.
> + *
> + * This function maps a buffer into the kernel's
> + * virtual address space. Since the CMA buffers are already mapped into the
> + * kernel virtual address space this simply returns the cached virtual
> + * address. Drivers using the CMA helpers should set this as their DRM
> + * driver's &drm_gem_object_funcs.vmap_local callback.
> + *
> + * Returns:
> + * 0 on success, or a negative error code otherwise.
> + */
> +int drm_gem_cma_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
> +{
> +	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
> +
> +	/*
> +	 * TODO: The code in drm_gem_cma_prime_import_sg_table_vmap()
> +	 *       establishes this mapping. The correct solution would
> +	 *       be to call dma_buf_vmap_local() here.
> +	 *
> +	 *       If we find a case where we absolutely have to call
> +	 *       dma_buf_vmap_local(), the code needs to be restructured.

dma_buf_vmap_local is only relevant for dynamic importers, pinning at
import time is actually what you get anyway. That's what Christian meant
with his comments for the ->pin hook.

So the TODO here doesn't make sense imo, just delete it. We're very far
away from making cma dynamic :-)

> +	 */
> +	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_cma_vmap_local);
> +
>  /**
>   * drm_gem_cma_mmap - memory-map an exported CMA GEM object
>   * @obj: GEM object
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index dc316cb79e00..ec57326c69c4 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -387,6 +387,7 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
>  	.export = vc4_prime_export,
>  	.get_sg_table = drm_gem_cma_get_sg_table,
>  	.vmap = vc4_prime_vmap,
> +	.vmap_local = vc4_prime_vmap_local,
>  	.vm_ops = &vc4_vm_ops,
>  };
>  
> @@ -797,6 +798,18 @@ int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  	return drm_gem_cma_vmap(obj, map);
>  }
>  
> +int vc4_prime_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
> +{
> +	struct vc4_bo *bo = to_vc4_bo(obj);
> +
> +	if (bo->validated_shader) {

This freaks me out. It should be impossible to export a validated shader
as a dma-buf, and indeed the check exists already. All the wrapper
functions here are imo pointless. Either we should remove them, or
replace the if with a BUG_ON here since if that ever happens we have a
security bug already. I'd go with removing, less code. Maybe throw a
patch on top?

Anyway this patch looks good, with the todo deleted:

Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch>

> +		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> +		return -EINVAL;
> +	}
> +
> +	return drm_gem_cma_vmap_local(obj, map);
> +}
> +
>  struct drm_gem_object *
>  vc4_prime_import_sg_table(struct drm_device *dev,
>  			  struct dma_buf_attachment *attach,
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index 43a1af110b3e..efb6c47d318f 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -812,6 +812,7 @@ struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
>  						 struct dma_buf_attachment *attach,
>  						 struct sg_table *sgt);
>  int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +int vc4_prime_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int vc4_bo_cache_init(struct drm_device *dev);
>  int vc4_bo_inc_usecnt(struct vc4_bo *bo);
>  void vc4_bo_dec_usecnt(struct vc4_bo *bo);
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 0a9711caa3e8..05122e71bc6d 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -99,6 +99,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>  				  struct dma_buf_attachment *attach,
>  				  struct sg_table *sgt);
>  int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +int drm_gem_cma_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>  
>  /**
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch