Lyude Paul
2022-Jul-28 22:05 UTC
[Nouveau] [RFC] drm/nouveau/ttm: Stop calling into swiotlb
Per the request of some of the DMA folks, some of the swiotlb helpers are
being sunset in favor of better alternatives that don't involve layer
mixing. It seems we can replace the one thing we use swiotlb for pretty
easily, i.e. checking whether or not we are capable of coherent
allocations using is_swiotlb_active(). So, let's do this by replacing
is_swiotlb_active() with our own nouveau_drm_use_coherent_gpu_mapping()
helper.

Signed-off-by: Lyude Paul <lyude at redhat.com>
Cc: Christoph Hellwig <hch at lst.de>
---
Hey! This is the patch that I came up with, but as the folks involved
with this thread can probably tell, I'm not entirely sure it is correct.
Also, someone more knowledgeable about mm within nouveau should probably
verify this as well.

 drivers/gpu/drm/nouveau/nouveau_ttm.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 85f1f5a0fe5d..ab7ccba1d601 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -24,7 +24,6 @@
  */

 #include <linux/limits.h>
-#include <linux/swiotlb.h>

 #include <drm/ttm/ttm_range_manager.h>

@@ -241,7 +240,6 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	struct nvkm_pci *pci = device->pci;
 	struct nvif_mmu *mmu = &drm->client.mmu;
 	struct drm_device *dev = drm->dev;
-	bool need_swiotlb = false;
 	int typei, ret;

 	ret = nouveau_ttm_init_host(drm, 0);
@@ -276,14 +274,11 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 		drm->agp.cma = pci->agp.cma;
 	}

-#if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active(dev->dev);
-#endif
-
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
-			      dev->anon_inode->i_mapping,
-			      dev->vma_offset_manager, need_swiotlb,
-			      drm->client.mmu.dmabits <= 32);
+			      dev->anon_inode->i_mapping,
+			      dev->vma_offset_manager,
+			      nouveau_drm_use_coherent_gpu_mapping(drm),
+			      drm->client.mmu.dmabits <= 32);
 	if (ret) {
 		NV_ERROR(drm, "error initialising bo driver, %d\n", ret);
 		return ret;
--
2.35.3
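[For context: the helper the patch switches to is not part of this diff. At the time it lived in drivers/gpu/drm/nouveau/nouveau_drv.h and looked roughly like the sketch below -- it infers coherent-mapping capability from whether the host memory type is uncached, rather than from swiotlb state. Reproduced from memory, so treat the exact body as an approximation:]

```c
static inline bool
nouveau_drm_use_coherent_gpu_mapping(struct nouveau_drm *drm)
{
	struct nvif_mmu *mmu = &drm->client.mmu;

	/* Coherent GPU mappings are usable unless the host memory
	 * type nouveau picked is uncached. */
	return !(mmu->type[drm->ttm.type_host[0]].type & NVIF_MEM_UNCACHED);
}
```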
Christoph Hellwig
2022-Jul-29 13:12 UTC
[Nouveau] [RFC] drm/nouveau/ttm: Stop calling into swiotlb
Hi Lyude, and thanks for taking a look.

> -#if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
> -	need_swiotlb = is_swiotlb_active(dev->dev);
> -#endif
> -
>  	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
> -			      dev->anon_inode->i_mapping,
> -			      dev->vma_offset_manager, need_swiotlb,
> -			      drm->client.mmu.dmabits <= 32);
> +			      dev->anon_inode->i_mapping,
> +			      dev->vma_offset_manager,
> +			      nouveau_drm_use_coherent_gpu_mapping(drm),
> +			      drm->client.mmu.dmabits <= 32);

This will break setups for two reasons:

 - swiotlb is not only used to work around device addressing
   limitations, so this will not catch the case of interconnect
   addressing limitations or forced bounce buffering, which is used
   e.g. in secure VMs.
 - we might need bouncing for any DMA address below the physical
   address limit of the CPU.

But more fundamentally, the use_dma32 argument to ttm_device_init is
rather broken, as the only way to get a memory allocation that fits the
DMA addressing needs of a device is to use the proper DMA mapping
helpers. I.e., ttm_pool_alloc_page really needs to use dma_alloc_pages
instead of alloc_pages as a first step. That way all users of the TTM
pool will always get DMA-addressable pages and there is no need to
guess the addressing limitations. The use_dma_alloc flag is then only
needed for users that require coherent memory and are willing to deal
with the limitations that this entails (e.g. no way to get at the page
struct).

> 	if (ret) {
> 		NV_ERROR(drm, "error initialising bo driver, %d\n", ret);
> 		return ret;
> --
> 2.35.3

---end quoted text---
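[The direction Christoph suggests can be sketched as below. This is a hypothetical illustration of using dma_alloc_pages()/dma_free_pages() in a pool allocator, not the actual TTM code; the function names pool_alloc_page/pool_free_page are made up for the example:]

```c
/*
 * Hypothetical sketch: let the DMA API pick pages the device can
 * address, instead of guessing with GFP_DMA32 via alloc_pages().
 */
static struct page *pool_alloc_page(struct device *dev, unsigned int order,
				    dma_addr_t *dma_addr)
{
	/*
	 * dma_alloc_pages() returns pages guaranteed to be addressable
	 * by @dev and already mapped for streaming DMA, so no separate
	 * addressing-limit check (like is_swiotlb_active()) is needed.
	 */
	return dma_alloc_pages(dev, PAGE_SIZE << order, dma_addr,
			       DMA_BIDIRECTIONAL, GFP_KERNEL);
}

static void pool_free_page(struct device *dev, unsigned int order,
			   struct page *page, dma_addr_t dma_addr)
{
	dma_free_pages(dev, PAGE_SIZE << order, page, dma_addr,
		       DMA_BIDIRECTIONAL);
}
```

Unlike dma_alloc_coherent(), this keeps a usable struct page, which matches Christoph's point that use_dma_alloc should be reserved for callers that genuinely need coherent memory.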