search for: io_mem_reserv

Displaying 20 results from an estimated 61 matches for "io_mem_reserv".

2020 Jul 15
3
[PATCH 1/4] drm: remove optional dummy function from drivers using TTM
...les changed, 32 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 3df685287cc1..9c0f12f74af9 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -836,10 +836,6 @@ static int amdgpu_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_ return 0; } -static void amdgpu_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) -{ -} - static unsigned long amdgpu_ttm_io_mem_pfn(struct ttm_buffer_object *bo, unsigned long page_offset) { @@ -1754,7 +1750,6 @@ stati...
2019 Sep 24
3
Is Nouveau really using the io_reserve_lru?
Hi guys, while working through more old TTM functionality I stumbled over the io_reserve_lru. The basic idea is that when this flag is set, the driver->io_mem_reserve() callback can return -EAGAIN, resulting in other BOs being unmapped. But Nouveau doesn't seem to return -EAGAIN anywhere in the call path of io_mem_reserve. So is this unused, or am I missing something? Regards, Christian.
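A minimal sketch of the pattern described in that mail, not the actual TTM code: the core calls the driver's io_mem_reserve() callback, and when the driver reports that its I/O aperture is full it evicts the least recently used mapping and retries. evict_one_io_mapping() is a hypothetical stand-in for the real eviction helper.

static int reserve_io_with_lru(struct ttm_bo_device *bdev,
			       struct ttm_mem_reg *mem)
{
	int ret;

	for (;;) {
		ret = bdev->driver->io_mem_reserve(bdev, mem);
		if (ret != -EAGAIN)
			return ret;	/* success or a hard error */

		/* Aperture exhausted: unmap another BO from the LRU
		 * and give the driver another chance. */
		ret = evict_one_io_mapping(bdev);
		if (ret)
			return ret;	/* nothing left to evict */
	}
}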
2019 Nov 20
2
Move io_reserve_lru handling into the driver
Just a gentle ping on this. I already got the Acked-by from Daniel, but I need some of the Nouveau guys to test this, since I can only compile-test it. Regards, Christian.
2019 Sep 26
0
Is Nouveau really using the io_reserve_lru?
On Tue, 24 Sep 2019 at 22:19, Christian König <ckoenig.leichtzumerken at gmail.com> wrote: > > Hi guys, > > while working through more old TTM functionality I stumbled over the > io_reserve_lru. > > Basic idea is that when this flag is set the driver->io_mem_reserve() > callback can return -EAGAIN resulting in unmapping of other BOs. > > But Nouveau doesn't seem to return -EAGAIN in the call path of > io_mem_reserve anywhere. I believe this is a bug in Nouveau. We *should* be returning -EAGAIN if we fail to find space in BAR1 to map the BO in...
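A sketch of the fix Ben describes, assuming a hypothetical nouveau_bar1_map() helper rather than the real Nouveau mapping path: translate "no space left in BAR1" into -EAGAIN so the io_reserve_lru machinery can unmap another BO and retry.

	ret = nouveau_bar1_map(drm, bo, reg);
	if (ret == -ENOSPC)
		/* BAR1 is full: let TTM evict another io mapping and retry. */
		return -EAGAIN;
	return ret;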
2019 Sep 27
0
Is Nouveau really using the io_reserve_lru?
...22:19, Christian König > <ckoenig.leichtzumerken at gmail.com> wrote: > > > > Hi guys, > > > > while working through more old TTM functionality I stumbled over the > > io_reserve_lru. > > > > Basic idea is that when this flag is set the driver->io_mem_reserve() > > callback can return -EAGAIN resulting in unmapping of other BOs. > > > > But Nouveau doesn't seem to return -EAGAIN in the call path of > > io_mem_reserve anywhere. > I believe this is a bug in Nouveau. We *should* be returning -EAGAIN > if we fail to find...
2019 Sep 30
3
[PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver
...eg->mem_type) { + case TTM_PL_TT: + if (mem->kind) + nvif_object_unmap_handle(&mem->mem.object); + break; + case TTM_PL_VRAM: + nvif_object_unmap_handle(&mem->mem.object); + break; + default: + break; + } + } + reg->bus.base = 0; +} + static int nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *reg) { @@ -1433,18 +1479,26 @@ nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *reg) struct nouveau_drm *drm = nouveau_bdev(bdev); struct nvkm_device *device = nvxx_device(&drm->client.device); struct nouveau_m...
2013 Jul 12
2
[PATCH] drm/nouveau: kill nouveau_ttm_fault_reserve_notify handler to prevent useless buffer moves
...uct ttm_dma_tt *ttm_dma = (void *)ttm; @@ -1524,7 +1496,6 @@ struct ttm_bo_driver nouveau_bo_driver = { .sync_obj_flush = nouveau_bo_fence_flush, .sync_obj_unref = nouveau_bo_fence_unref, .sync_obj_ref = nouveau_bo_fence_ref, - .fault_reserve_notify = &nouveau_ttm_fault_reserve_notify, .io_mem_reserve = &nouveau_ttm_io_mem_reserve, .io_mem_free = &nouveau_ttm_io_mem_free, };
2020 Sep 01
0
[PATCH 3/3] drm/ttm: remove io_reserve_lru handling v2
...struct ttm_resource *mem) { - struct ttm_resource_manager *man = ttm_manager_type(bdev, mem->mem_type); - int ret; - - if (mem->bus.io_reserved_count++) + if (mem->bus.base || mem->bus.offset || mem->bus.addr) return 0; + mem->bus.is_iomem = false; if (!bdev->driver->io_mem_reserve) return 0; - mem->bus.addr = NULL; - mem->bus.offset = 0; - mem->bus.base = 0; - mem->bus.is_iomem = false; -retry: - ret = bdev->driver->io_mem_reserve(bdev, mem); - if (ret == -ENOSPC) { - ret = ttm_mem_io_evict(man); - if (ret == 0) - goto retry; - } - return ret; + r...
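Reconstructed from the fragment above as a sketch, not the verbatim patched function: with the io_reserve_lru removed, the helper no longer loops over eviction and simply asks the driver once, treating an already-populated bus structure as "nothing to do".

static int ttm_mem_io_reserve(struct ttm_bo_device *bdev,
			      struct ttm_resource *mem)
{
	/* Already reserved earlier: rely on the zero-initialized fields. */
	if (mem->bus.base || mem->bus.offset || mem->bus.addr)
		return 0;

	mem->bus.is_iomem = false;
	if (!bdev->driver->io_mem_reserve)
		return 0;

	/* No retry/evict loop any more: handling an exhausted aperture
	 * is now the driver's responsibility. */
	return bdev->driver->io_mem_reserve(bdev, mem);
}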
2020 Jan 24
4
TTM/Nouveau cleanups
Hi guys, I've already sent this out in September last year, but only got a response from Daniel. Could you guys please test this and tell me what you think about it? Basically I'm trying to remove all driver-specific features from TTM which don't need to be inside the framework. Thanks, Christian.
2019 Aug 08
0
[PATCH v3 8/8] gem/qxl: use drm_gem_ttm_bo_driver_verify_access()
...r_object *bo, *placement = qbo->placement; } -static int qxl_verify_access(struct ttm_buffer_object *bo, struct file *filp) -{ - struct qxl_bo *qbo = to_qxl_bo(bo); - - return drm_vma_node_verify_access(&qbo->tbo.base.vma_node, - filp->private_data); -} - static int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) { @@ -310,7 +302,7 @@ static struct ttm_bo_driver qxl_bo_driver = { .eviction_valuable = ttm_bo_eviction_valuable, .evict_flags = &qxl_evict_flags, .move = &qxl_bo_move, - .verify_access = &qxl_verify_access, + .verify_...
2019 Aug 08
0
[PATCH v4 16/17] drm/qxl: drop verify_access
...r_object *bo, *placement = qbo->placement; } -static int qxl_verify_access(struct ttm_buffer_object *bo, struct file *filp) -{ - struct qxl_bo *qbo = to_qxl_bo(bo); - - return drm_vma_node_verify_access(&qbo->tbo.base.vma_node, - filp->private_data); -} - static int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) { @@ -269,7 +261,6 @@ static struct ttm_bo_driver qxl_bo_driver = { .eviction_valuable = ttm_bo_eviction_valuable, .evict_flags = &qxl_evict_flags, .move = &qxl_bo_move, - .verify_access = &qxl_verify_access, .io_mem_...
2019 Oct 17
0
[PATCH 3/5] drm/qxl: drop verify_access
...r_object *bo, *placement = qbo->placement; } -static int qxl_verify_access(struct ttm_buffer_object *bo, struct file *filp) -{ - struct qxl_bo *qbo = to_qxl_bo(bo); - - return drm_vma_node_verify_access(&qbo->tbo.base.vma_node, - filp->private_data); -} - static int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) { @@ -269,7 +261,6 @@ static struct ttm_bo_driver qxl_bo_driver = { .eviction_valuable = ttm_bo_eviction_valuable, .evict_flags = &qxl_evict_flags, .move = &qxl_bo_move, - .verify_access = &qxl_verify_access, .io_mem_...
2013 Jul 15
0
[PATCH] drm/nouveau: kill nouveau_ttm_fault_reserve_notify handler to prevent useless buffer moves
...,6 @@ struct ttm_bo_driver nouveau_bo_driver = { > .sync_obj_flush = nouveau_bo_fence_flush, > .sync_obj_unref = nouveau_bo_fence_unref, > .sync_obj_ref = nouveau_bo_fence_ref, > - .fault_reserve_notify = &nouveau_ttm_fault_reserve_notify, > .io_mem_reserve = &nouveau_ttm_io_mem_reserve, > .io_mem_free = &nouveau_ttm_io_mem_free, > }; > > _______________________________________________ > dri-devel mailing list > dri-devel at lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/dri-devel
2018 Dec 19
0
[PATCH 04/10] drm/virtio: move virtio_gpu_object_{attach, detach} calls.
...,11 +272,9 @@ static struct ttm_bo_driver virtio_gpu_bo_driver = { .init_mem_type = &virtio_gpu_init_mem_type, .eviction_valuable = ttm_bo_eviction_valuable, .evict_flags = &virtio_gpu_evict_flags, - .move = &virtio_gpu_bo_move, .verify_access = &virtio_gpu_verify_access, .io_mem_reserve = &virtio_gpu_ttm_io_mem_reserve, .io_mem_free = &virtio_gpu_ttm_io_mem_free, - .move_notify = &virtio_gpu_bo_move_notify, .swap_notify = &virtio_gpu_bo_swap_notify, }; -- 2.9.3
2019 Mar 18
0
[PATCH v3 1/5] drm/virtio: move virtio_gpu_object_{attach, detach} calls.
...,11 +272,9 @@ static struct ttm_bo_driver virtio_gpu_bo_driver = { .init_mem_type = &virtio_gpu_init_mem_type, .eviction_valuable = ttm_bo_eviction_valuable, .evict_flags = &virtio_gpu_evict_flags, - .move = &virtio_gpu_bo_move, .verify_access = &virtio_gpu_verify_access, .io_mem_reserve = &virtio_gpu_ttm_io_mem_reserve, .io_mem_free = &virtio_gpu_ttm_io_mem_free, - .move_notify = &virtio_gpu_bo_move_notify, .swap_notify = &virtio_gpu_bo_swap_notify, }; -- 2.18.1
2024 Oct 15
5
[PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com> This patch series aims to enable Peer-to-Peer (P2P) DMA access in GPU-centric applications that utilize RDMA and private device pages. This enhancement is crucial for minimizing data transfer overhead by allowing the GPU to directly expose device private page data to devices such as NICs, eliminating the need to traverse system RAM, which is the
2019 Apr 09
0
[PATCH 13/15] drm/vboxvideo: Convert vboxvideo driver to Simple TTM
...TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_WC; > - man->default_caching = TTM_PL_FLAG_WC; > - break; > - default: > - DRM_ERROR("Unsupported memory type %u\n", (unsigned int)type); > - return -EINVAL; > - } > - > - return 0; > -} > - > -static int vbox_ttm_io_mem_reserve(struct ttm_bo_device *bdev, > - struct ttm_mem_reg *mem) > -{ > - struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type]; > - struct vbox_private *vbox = vbox_bdev(bdev); > - > - mem->bus.addr = NULL; > - mem->bus.offset = 0; > - mem->bus.size...
2020 Sep 01
4
[PATCH 1/3] drm/ttm: make sure that we always zero init mem.bus v2
We are trying to remove the io_lru handling and depend on zero init base, offset and addr here. v2: init addr as well Signed-off-by: Christian König <christian.koenig at amd.com> --- drivers/gpu/drm/ttm/ttm_bo.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index e3931e515906..772c640a6046 100644 ---
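The hunk is cut off above, so as a sketch only (assumed placement, not the verbatim patch): the idea is to clear the bus fields whenever a ttm_mem_reg is set up, so that "all zero" reliably means "not io-reserved yet" once the io_reserve_lru bookkeeping is gone.

	mem->bus.addr = NULL;
	mem->bus.offset = 0;
	mem->bus.base = 0;
	mem->bus.is_iomem = false;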
2020 Aug 21
5
Moving LRU handling into Nouveau v3
Hi guys, so I got some hardware and tested this, and after hammering out tons of typos it now seems to work fine. Could you give it more testing? Thanks in advance, Christian
2024 Dec 01
5
[RFC 0/5] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com> Based on: Provide a new two step DMA mapping API patchset https://lore.kernel.org/kvm/20241114170247.GA5813 at lst.de/T/#t This patch series aims to enable Peer-to-Peer (P2P) DMA access in GPU-centric applications that utilize RDMA and private device pages. This enhancement reduces data transfer overhead by allowing the GPU to directly expose