search for: ttm_bo_init_reserv

Displaying 20 results from an estimated 46 matches for "ttm_bo_init_reserv".

2018 Jan 31
2
swiotlb buffer is full
...0x9f/0xe1 [ +0.000008] swiotlb_alloc_coherent+0xdf/0x150 [ +0.000010] ttm_dma_pool_get_pages+0x1ec/0x4b0 [ +0.000015] ttm_dma_populate+0x24c/0x340 [ +0.000011] ttm_tt_bind+0x23/0x50 [ +0.000006] ttm_bo_handle_move_mem+0x58c/0x5c0 [ +0.000015] ttm_bo_validate+0x152/0x190 [ +0.000004] ? ttm_bo_init_reserved+0x3d8/0x490 [ +0.000012] ? mutex_trylock+0xcd/0xe0 [ +0.000004] ? ttm_bo_handle_move_mem+0x58/0x5c0 [ +0.000007] ttm_bo_init_reserved+0x3f4/0x490 [ +0.000010] ttm_bo_init+0x2f/0xa0 [ +0.000009] ? nouveau_bo_invalidate_caches+0x10/0x10 [ +0.000005] nouveau_bo_new+0x416/0x590 [ +0.0000...
2018 Feb 01
1
swiotlb buffer is full
...x150 >> [ +0.000010] ttm_dma_pool_get_pages+0x1ec/0x4b0 >> [ +0.000015] ttm_dma_populate+0x24c/0x340 >> [ +0.000011] ttm_tt_bind+0x23/0x50 >> [ +0.000006] ttm_bo_handle_move_mem+0x58c/0x5c0 >> [ +0.000015] ttm_bo_validate+0x152/0x190 >> [ +0.000004] ? ttm_bo_init_reserved+0x3d8/0x490 >> [ +0.000012] ? mutex_trylock+0xcd/0xe0 >> [ +0.000004] ? ttm_bo_handle_move_mem+0x58/0x5c0 >> [ +0.000007] ttm_bo_init_reserved+0x3f4/0x490 >> [ +0.000010] ttm_bo_init+0x2f/0xa0 >> [ +0.000009] ? nouveau_bo_invalidate_caches+0x10/0x10 >>...
2018 Feb 01
0
swiotlb buffer is full
...lb_alloc_coherent+0xdf/0x150 > [ +0.000010] ttm_dma_pool_get_pages+0x1ec/0x4b0 > [ +0.000015] ttm_dma_populate+0x24c/0x340 > [ +0.000011] ttm_tt_bind+0x23/0x50 > [ +0.000006] ttm_bo_handle_move_mem+0x58c/0x5c0 > [ +0.000015] ttm_bo_validate+0x152/0x190 > [ +0.000004] ? ttm_bo_init_reserved+0x3d8/0x490 > [ +0.000012] ? mutex_trylock+0xcd/0xe0 > [ +0.000004] ? ttm_bo_handle_move_mem+0x58/0x5c0 > [ +0.000007] ttm_bo_init_reserved+0x3f4/0x490 > [ +0.000010] ttm_bo_init+0x2f/0xa0 > [ +0.000009] ? nouveau_bo_invalidate_caches+0x10/0x10 > [ +0.000005] nouveau...
2020 Sep 29
2
[PATCH v2 4/4] drm/qxl: use qxl pin function
Otherwise ttm throws a WARN because we try to pin without a reservation. Fixes: 9d36d4320462 ("drm/qxl: switch over to the new pin interface") Signed-off-by: Gerd Hoffmann <kraxel at redhat.com> --- drivers/gpu/drm/qxl/qxl_object.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index
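For context, a minimal sketch of the locking pattern that WARN enforces, assuming the new (late-2020) TTM pin interface; the helper name and include locations below are illustrative and not taken from the qxl patch:

#include <drm/ttm/ttm_bo_api.h>     /* ttm_bo_pin(); header layout varies by kernel version */
#include <drm/ttm/ttm_bo_driver.h>  /* ttm_bo_reserve()/ttm_bo_unreserve() on kernels of that era */

/* Illustrative only: ttm_bo_pin() expects the buffer's reservation
 * (dma_resv) lock to be held, so pin under the reservation. */
static int example_pin_bo(struct ttm_buffer_object *bo)
{
	int ret;

	ret = ttm_bo_reserve(bo, true, false, NULL);	/* take the reservation lock */
	if (ret)
		return ret;

	ttm_bo_pin(bo);			/* no WARN: reservation is held */
	ttm_bo_unreserve(bo);		/* drop the reservation lock */
	return 0;
}

The qxl fix above achieves the same effect by routing callers through qxl's own pin helper, which is expected to take the reservation before pinning.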
2018 May 11
2
kernel spew from nouveau/ swiotlb
On Thu, 2018-05-10 at 12:28 +0200, Mike Galbraith wrote: > On Thu, 2018-05-10 at 11:10 +0200, Mike Galbraith wrote: > > Greetings, > > > > When box is earning its keep, nouveau/swiotlb grumble.. a LOT. The > > below is from master.today. > > > > [12594.640959] nouveau 0000:01:00.0: swiotlb buffer is full (sz: 2097152 bytes) > > [12594.693000] nouveau
2017 Dec 18
3
nouveau. swiotlb: coherent allocation failed for device 0000:01:00.0 size=2097152
...+0x2f/0x60 [ttm] [ 1313.811079] ttm_bo_handle_move_mem+0x51f/0x580 [ttm] [ 1313.811084] ? ttm_bo_handle_move_mem+0x5/0x580 [ttm] [ 1313.811088] ttm_bo_validate+0x10c/0x120 [ttm] [ 1313.811092] ? ttm_bo_validate+0x5/0x120 [ttm] [ 1313.811106] ? drm_mode_setcrtc+0x20e/0x540 [drm] [ 1313.811109] ttm_bo_init_reserved+0x290/0x490 [ttm] [ 1313.811114] ttm_bo_init+0x52/0xb0 [ttm] [ 1313.811141] ? nv10_bo_put_tile_region+0x60/0x60 [nouveau] [ 1313.811163] nouveau_bo_new+0x465/0x5e0 [nouveau] [ 1313.811184] ? nv10_bo_put_tile_region+0x60/0x60 [nouveau] [ 1313.811203] nouveau_gem_new+0x66/0x110 [nouveau] [ 131...
2019 Jun 20
0
[PATCH 5/6] drm/ttm: use gem vma_node
...;bdev->man[bo->mem.mem_type]; - drm_vma_offset_remove(&bdev->vma_manager, &bo->vma_node); + drm_vma_offset_remove(&bdev->vma_manager, &bo->base.vma_node); ttm_mem_io_lock(man, false); ttm_mem_io_free_vm(bo); ttm_mem_io_unlock(man); @@ -1342,9 +1342,9 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, * away once all users are switched over. */ reservation_object_init(&bo->base._resv); + drm_vma_node_reset(&bo->base.vma_node); } atomic_inc(&bo->bdev->glob->bo_count); - drm_vma_node_reset(&bo->vma_node); /* * F...
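As a hedged aside on what this series changes: once the vma_node lives in the GEM object embedded in the buffer object (bo->base), the mmap offset can come from the common DRM helper rather than a TTM-private field. A driver-side sketch with an illustrative helper name, not part of the patch:

#include <linux/types.h>
#include <drm/drm_vma_manager.h>	/* drm_vma_node_offset_addr() */
#include <drm/ttm/ttm_bo_api.h>		/* struct ttm_buffer_object */

/* Illustrative only: report the fake mmap offset using the vma_node
 * embedded in the BO's GEM object. */
static u64 example_bo_mmap_offset(struct ttm_buffer_object *bo)
{
	return drm_vma_node_offset_addr(&bo->base.vma_node);
}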
2019 Aug 05
2
[PATCH v6 08/17] drm/ttm: use gem vma_node
...;bdev->man[bo->mem.mem_type]; - drm_vma_offset_remove(&bdev->vma_manager, &bo->vma_node); + drm_vma_offset_remove(&bdev->vma_manager, &bo->base.vma_node); ttm_mem_io_lock(man, false); ttm_mem_io_free_vm(bo); ttm_mem_io_unlock(man); @@ -1343,9 +1343,9 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, * struct elements we want use regardless. */ reservation_object_init(&bo->base._resv); + drm_vma_node_reset(&bo->base.vma_node); } atomic_inc(&bo->bdev->glob->bo_count); - drm_vma_node_reset(&bo->vma_node); /* *...
2019 Aug 02
0
[PATCH v4 08/17] drm/ttm: use gem vma_node
...;bdev->man[bo->mem.mem_type]; - drm_vma_offset_remove(&bdev->vma_manager, &bo->vma_node); + drm_vma_offset_remove(&bdev->vma_manager, &bo->base.vma_node); ttm_mem_io_lock(man, false); ttm_mem_io_free_vm(bo); ttm_mem_io_unlock(man); @@ -1341,9 +1341,9 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, * struct elements we want use regardless. */ reservation_object_init(&bo->base._resv); + drm_vma_node_reset(&bo->base.vma_node); } atomic_inc(&bo->bdev->glob->bo_count); - drm_vma_node_reset(&bo->vma_node); /* *...
2019 Aug 05
0
[PATCH v5 08/18] drm/ttm: use gem vma_node
...;bdev->man[bo->mem.mem_type]; - drm_vma_offset_remove(&bdev->vma_manager, &bo->vma_node); + drm_vma_offset_remove(&bdev->vma_manager, &bo->base.vma_node); ttm_mem_io_lock(man, false); ttm_mem_io_free_vm(bo); ttm_mem_io_unlock(man); @@ -1343,9 +1343,9 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, * struct elements we want use regardless. */ reservation_object_init(&bo->base._resv); + drm_vma_node_reset(&bo->base.vma_node); } atomic_inc(&bo->bdev->glob->bo_count); - drm_vma_node_reset(&bo->vma_node); /* *...
2019 Jun 21
0
[PATCH v2 08/18] drm/ttm: use gem vma_node
...;bdev->man[bo->mem.mem_type]; - drm_vma_offset_remove(&bdev->vma_manager, &bo->vma_node); + drm_vma_offset_remove(&bdev->vma_manager, &bo->base.vma_node); ttm_mem_io_lock(man, false); ttm_mem_io_free_vm(bo); ttm_mem_io_unlock(man); @@ -1342,9 +1342,9 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, * away once all users are switched over. */ reservation_object_init(&bo->base._resv); + drm_vma_node_reset(&bo->base.vma_node); } atomic_inc(&bo->bdev->glob->bo_count); - drm_vma_node_reset(&bo->vma_node); /* * F...