search for: bdevs

Displaying 20 results from an estimated 617 matches for "bdevs".

2014 May 14
0
[RFC PATCH v1 06/16] drm/ttm: kill fence_lock
No users are left, kill it off! :D Conversion to the reservation API is next on the list; after that the functionality can be restored with RCU. Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com> --- drivers/gpu/drm/nouveau/nouveau_bo.c | 25 +++------- drivers/gpu/drm/nouveau/nouveau_display.c | 6 -- drivers/gpu/drm/nouveau/nouveau_gem.c | 16 +-----
2020 Jul 15
3
[PATCH 1/4] drm: remove optional dummy function from drivers using TTM
Implementing those is completely unnecessary. Signed-off-by: Christian König <christian.koenig at amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 5 ----- drivers/gpu/drm/drm_gem_vram_helper.c | 5 ----- drivers/gpu/drm/qxl/qxl_ttm.c | 6 ------ drivers/gpu/drm/radeon/radeon_ttm.c | 5 ----- drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 11 -----------
2019 Sep 05
1
[PATCH 1/8] drm/ttm: turn ttm_bo_device.vma_manager into a pointer
Rename the embedded struct vma_offset_manager; the new name is _vma_manager. ttm_bo_device.vma_manager changed to a pointer. The ttm_bo_device_init() function gets an additional vma_manager argument which allows initializing ttm with a different vma manager. When passing NULL the embedded _vma_manager is used. All callers are updated to pass NULL, so the behavior doesn't change. Signed-off-by:
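A minimal sketch of the pattern described above, with simplified stand-in types rather than the real TTM structures: the device keeps an embedded manager as a fallback, and the init function accepts an optional pointer, substituting the embedded instance when the caller passes NULL so existing callers keep the old behaviour.

    #include <stddef.h>

    /* Illustrative sketch only -- simplified stand-ins, not the actual TTM code. */
    struct vma_offset_manager { int dummy; };

    struct bo_device {
        struct vma_offset_manager _vma_manager;   /* embedded fallback instance */
        struct vma_offset_manager *vma_manager;   /* what the rest of the code uses */
    };

    /* Callers that pass NULL get the old behaviour: the embedded manager is used. */
    static int bo_device_init(struct bo_device *bdev,
                              struct vma_offset_manager *vma_manager)
    {
        if (!vma_manager)
            vma_manager = &bdev->_vma_manager;
        bdev->vma_manager = vma_manager;
        return 0;
    }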
2018 Feb 27
4
[PATCH 4/5] drm/ttm: add ttm_sg_tt_init
Hi guys, at least on amdgpu and radeon the page array allocated by ttm_dma_tt_init is completely unused in the case of DMA-buf sharing. So I'm trying to get rid of that by only allocating the DMA address array. Now the only other user of DMA-buf together with ttm_dma_tt_init is Nouveau. So my question is: are you guys using the page array anywhere in your kernel driver in case of a
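A rough sketch of the allocation difference being discussed, using hypothetical names rather than the real ttm_tt code: in the DMA-buf sharing case only the per-page DMA addresses are needed, so the CPU page-pointer array can be skipped entirely.

    #include <stdlib.h>

    /* Illustrative only: simplified stand-in for a ttm_dma_tt-like object. */
    struct sg_tt {
        unsigned long num_pages;
        void **pages;                    /* CPU page pointers -- unused for DMA-buf sharing */
        unsigned long long *dma_address; /* per-page DMA addresses */
    };

    /* "sg" flavour of init: allocate only the DMA address array, no page array. */
    static int sg_tt_init(struct sg_tt *tt, unsigned long num_pages)
    {
        tt->num_pages = num_pages;
        tt->pages = NULL;                                  /* deliberately skipped */
        tt->dma_address = calloc(num_pages, sizeof(*tt->dma_address));
        return tt->dma_address ? 0 : -1;
    }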
2020 Sep 01
0
[PATCH 3/3] drm/ttm: remove io_reserve_lru handling v2
From: Christian König <ckoenig.leichtzumerken at gmail.com> That is not used any more. v2: keep the NULL checks in TTM. Signed-off-by: Christian König <christian.koenig at amd.com> Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch> --- drivers/gpu/drm/ttm/ttm_bo.c | 34 +-------- drivers/gpu/drm/ttm/ttm_bo_util.c | 113 +++--------------------------
2019 Nov 20
2
Move io_reserve_lru handling into the driver
Just a gentle ping on this. Already got the Acked-by from Daniel, but I need some of the nouveau guys to test this since I can only compile test it. Regards, Christian.
2020 Sep 01
10
remove revalidate_disk()
Hi Jens, this series removes the revalidate_disk() function, which has been a really odd duck over the last few years. The prime reason most people use it is that it propagates a size change from the gendisk to the block_device structure. But it also calls into the rather ill-defined ->revalidate_disk method, which is rather useless for the callers. So this adds a new helper to just
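A hedged sketch of the size-propagation step described above, with invented struct fields standing in for the real gendisk/block_device layout (the name of the new helper is truncated in the excerpt and is not guessed here):

    /* Illustrative only: propagate the gendisk capacity to the block_device size.
     * Assumes capacity is tracked in 512-byte sectors, which is a sketch-level
     * simplification rather than the exact kernel representation. */
    struct gendisk      { unsigned long long capacity_sectors; };
    struct block_device { unsigned long long nr_bytes; };

    static void propagate_disk_size(const struct gendisk *disk,
                                    struct block_device *bdev)
    {
        bdev->nr_bytes = disk->capacity_sectors << 9;  /* sectors -> bytes */
    }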
2012 Feb 16
2
[PATCH] blkfront: don't put bdev right after getting it
We should hang onto bdev until we're done with it. Signed-off-by: Andrew Jones <drjones at redhat.com> --- drivers/block/xen-blkfront.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 2f22874..5d45688 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -1410,7 +1410,6
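The bug class being fixed, sketched with generic names rather than the actual xen-blkfront code: the reference taken on the block device has to outlive every use of it, so the put belongs after the last access, not immediately after the get.

    /* Illustrative only -- generic get/use/put ordering, not the xen-blkfront code. */
    struct bdev { int refcount; int openers; };

    static struct bdev *bdev_get(struct bdev *b) { b->refcount++; return b; }
    static void bdev_put(struct bdev *b)         { b->refcount--; }

    static void release_example(struct bdev *dev)
    {
        struct bdev *bdev = bdev_get(dev);

        /* Wrong: calling bdev_put(bdev) here would drop our reference before use. */
        bdev->openers--;                 /* ... work that still needs the reference ... */

        bdev_put(bdev);                  /* put only once we're done with it */
    }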
2019 Sep 30
3
[PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver
While working on TTM cleanups I've found that the io_reserve_lru used by Nouveau is actually not working at all. In general we should remove driver-specific handling from the memory management, so this patch moves the io_reserve_lru handling into Nouveau instead. The patch should be functionally correct, but it is only compile-tested! Signed-off-by: Christian König <christian.koenig at
2020 Jan 24
4
TTM/Nouveau cleanups
Hi guys, I've already sent this out in September last year, but only got a response from Daniel. Could you guys please test this and tell me what you think about it? Basically I'm trying to remove all driver-specific features from TTM which don't need to be inside the framework. Thanks, Christian.
2020 Aug 21
5
Moving LRU handling into Nouveau v3
Hi guys, so I got some hardware and tested this and after hammering out tons of typos it now seems to work fine. Could you give it more testing? Thanks in advance, Christian
2020 Sep 01
4
[PATCH 1/3] drm/ttm: make sure that we always zero init mem.bus v2
We are trying to remove the io_lru handling and depend on zero-initialized base, offset and addr here. v2: init addr as well Signed-off-by: Christian König <christian.koenig at amd.com> --- drivers/gpu/drm/ttm/ttm_bo.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index e3931e515906..772c640a6046 100644 ---
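A hedged sketch of what zero-initializing base, offset and addr amounts to; the struct layout here is an assumption for illustration, not the exact TTM definition:

    #include <stddef.h>

    /* Illustrative stand-in for the bus placement fields named in the commit. */
    struct bus_placement {
        void *addr;                 /* kernel mapping, if any */
        unsigned long base;
        unsigned long offset;
    };

    struct mem_reg {
        struct bus_placement bus;
    };

    /* Callers may then rely on these being zero before io_mem_reserve() fills them in. */
    static void mem_bus_zero_init(struct mem_reg *mem)
    {
        mem->bus.addr = NULL;
        mem->bus.base = 0;
        mem->bus.offset = 0;
    }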
2019 Sep 27
5
[PATCH 1/2] drm/qxl: stop abusing TTM to call driver internal functions
The ttm_mem_io_* functions are actually internal to TTM and shouldn't be used in a driver. Instead call the qxl_ttm_io_mem_reserve() function directly. Signed-off-by: Christian König <christian.koenig at amd.com> --- drivers/gpu/drm/qxl/qxl_drv.h | 2 ++ drivers/gpu/drm/qxl/qxl_object.c | 11 +---------- drivers/gpu/drm/qxl/qxl_ttm.c | 4 ++-- 3 files changed, 5 insertions(+),
2018 Mar 05
0
[PATCH 4/5] drm/ttm: add ttm_sg_tt_init
Ping? On 27.02.2018 at 13:07, Christian König wrote: > Hi guys, > > at least on amdgpu and radeon the page array allocated by > ttm_dma_tt_init is completely unused in the case of DMA-buf sharing. > So I'm trying to get rid of that by only allocating the DMA address > array. > > Now the only other user of DMA-buf together with ttm_dma_tt_init is > Nouveau. So
2019 Sep 30
2
[Spice-devel] [PATCH 1/2] drm/qxl: stop abusing TTM to call driver internal functions
On 27.09.19 at 18:31, Frediano Ziglio wrote: >> The ttm_mem_io_* functions are actually internal to TTM and shouldn't be >> used in a driver. >> > As far as I can see from your second patch, QXL is just using exported > (that is, not internal) functions. > Not that the idea of making them internal is bad, but this comment is > a wrong statement. See the history of
2009 Aug 19
1
[PATCH] drm/nouveau: Add a MM for mappable VRAM that isn't usable as scanout.
Dynamically resizing the framebuffer on nv04 was like playing Russian roulette (and it often happened gratuitously) because the hardware seems unable to scan out from buffers above 16MB. This patch splits the mappable VRAM into two chunks when that's the case, and makes the higher one usable as well when applicable. Signed-off-by: Francisco Jerez <currojerez at riseup.net> ---
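A minimal sketch of the split being described, as an illustrative helper rather than the Nouveau code: carve the mappable VRAM into a scanout-capable chunk below the 16MB limit and a second chunk above it that is still used for ordinary buffers.

    /* Illustrative only: split a mappable VRAM range at the 16MB scanout limit. */
    #define SCANOUT_LIMIT (16ul << 20)   /* nv04 reportedly cannot scan out above this */

    struct vram_region { unsigned long start, size; int scanout_ok; };

    /* Returns the number of regions written to out[] (1 or 2). */
    static int split_mappable_vram(unsigned long mappable_size,
                                   struct vram_region out[2])
    {
        if (mappable_size <= SCANOUT_LIMIT) {
            out[0] = (struct vram_region){ 0, mappable_size, 1 };
            return 1;
        }
        out[0] = (struct vram_region){ 0, SCANOUT_LIMIT, 1 };               /* scanout-capable */
        out[1] = (struct vram_region){ SCANOUT_LIMIT,
                                       mappable_size - SCANOUT_LIMIT, 0 };  /* extra, non-scanout */
        return 2;
    }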