Displaying 11 results from an estimated 11 matches for "radeon_gem_domain_cpu".
2019 Oct 28
0
[PATCH v2 07/15] drm/radeon: use mmu_range_notifier_insert
..."(%ld) failed to reserve user bo\n", r);
- continue;
- }
-
- r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
- true, false, MAX_SCHEDULE_TIMEOUT);
- if (r <= 0)
- DRM_ERROR("(%ld) failed to wait for user bo\n", r);
-
- radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
- r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
- if (r)
- DRM_ERROR("(%ld) failed to validate user bo\n", r);
-
- radeon_bo_unreserve(bo);
- }
+ r = radeon_bo_reserve(bo, true);
+ if (r) {
+ DRM_ERROR("(%ld) failed to reserve user bo\n", r)...
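The hunk above shows the invalidation path this patch reworks: reserve the userptr BO, wait for pending work on its reservation object, force its placement to RADEON_GEM_DOMAIN_CPU, validate, then unreserve. A minimal sketch of that flow, assuming the radeon driver context (radeon.h / radeon_object.h); the helper name radeon_mn_evict_bo and the surrounding structure are illustrative, not the verbatim patch:

/* Sketch only: evict a userptr BO to system memory on invalidation. */
static void radeon_mn_evict_bo(struct radeon_bo *bo)
{
	struct ttm_operation_ctx ctx = { false, false };
	long r;

	r = radeon_bo_reserve(bo, true);
	if (r) {
		DRM_ERROR("(%ld) failed to reserve user bo\n", r);
		return;
	}

	/* Wait for any GPU work still referenced by the reservation object. */
	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
				      MAX_SCHEDULE_TIMEOUT);
	if (r <= 0)
		DRM_ERROR("(%ld) failed to wait for user bo\n", r);

	/* Move the BO to the CPU domain so the GPU mapping goes away. */
	radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
	r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
	if (r)
		DRM_ERROR("(%ld) failed to validate user bo\n", r);

	radeon_bo_unreserve(bo);
}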
2019 Oct 29
0
[PATCH v2 07/15] drm/radeon: use mmu_range_notifier_insert
..., r);
> - continue;
> - }
> -
> - r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> - true, false, MAX_SCHEDULE_TIMEOUT);
> - if (r <= 0)
> - DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> -
> - radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
> - r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
> - if (r)
> - DRM_ERROR("(%ld) failed to validate user bo\n", r);
> -
> - radeon_bo_unreserve(bo);
> - }
> + r = radeon_bo_reserve(bo, true);
> + if (r) {
> +		DRM_ERROR("...
2014 May 14
0
[RFC PATCH v1 14/16] drm/radeon: use rcu waits in some ioctls
...rivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index d09650c1d720..7ba883843668 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -107,9 +107,12 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
}
if (domain == RADEON_GEM_DOMAIN_CPU) {
/* Asking for cpu access wait for object idle */
- r = radeon_bo_wait(robj, NULL, false);
- if (r) {
- printk(KERN_ERR "Failed to wait for object !\n");
+ r = reservation_object_wait_timeout_rcu(robj->tbo.resv, true, true, 30 * HZ);
+ if (!r)
+ r = -EBUSY;
+
+ if (r <...
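The replacement above swaps the blocking radeon_bo_wait() for an RCU-protected wait with a 30 second timeout, mapping the 0 (timed out) return value to -EBUSY; the pr_err excerpt later in this listing shows the matching error check. A minimal sketch of that pattern, using the 2014-era reservation_object names (since renamed dma_resv); the wrapper helper is illustrative, not the verbatim patch:

/* Sketch only: wait for the BO to go idle before granting CPU access. */
static int radeon_gem_wait_cpu_idle(struct radeon_bo *robj)
{
	long r;

	/* Wait on all fences, interruptible, 30 second timeout:
	 * returns remaining jiffies, 0 on timeout, or a negative error. */
	r = reservation_object_wait_timeout_rcu(robj->tbo.resv,
						true, true, 30 * HZ);
	if (!r)
		r = -EBUSY;

	if (r < 0 && r != -EINTR) {
		pr_err("Failed to wait for object: %li\n", r);
		return r;
	}
	return 0;
}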
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
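The per-driver pattern this cover letter describes can be sketched briefly: each driver registers a plain mmu_notifier, and in invalidate_range_start walks its own interval tree to see whether the invalidated address range overlaps anything it tracks. A simplified sketch under assumed names (my_mn, my_node) and locking; assumes <linux/mmu_notifier.h>, <linux/interval_tree.h>, <linux/mutex.h>:

struct my_node {
	struct interval_tree_node it;	/* [start, last] of the tracked range */
	/* driver payload ... */
};

struct my_mn {
	struct mmu_notifier mn;
	struct mutex lock;
	struct rb_root_cached objects;	/* interval tree of my_node */
};

static int my_mn_invalidate_range_start(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	struct my_mn *mmn = container_of(mn, struct my_mn, mn);
	struct interval_tree_node *it;

	/* Cannot take a sleeping lock if the caller forbids blocking. */
	if (!mmu_notifier_range_blockable(range))
		return -EAGAIN;

	mutex_lock(&mmn->lock);
	it = interval_tree_iter_first(&mmn->objects, range->start,
				      range->end - 1);
	while (it) {
		/* Driver-specific teardown of the overlapping object would
		 * go here (unmap from the GPU, wait for fences, ...). */
		it = interval_tree_iter_next(it, range->start, range->end - 1);
	}
	mutex_unlock(&mmn->lock);
	return 0;
}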
2017 Feb 28
2
[PATCH 0/2] gpu: drm: Use pr_cont and neaten logging
Joe Perches (2):
drm: Use pr_cont where appropriate
gpu: drm: Convert printk(KERN_<LEVEL> to pr_<level>
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +-
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
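By contrast, the consolidated interface this series adds moves the interval tree into the core, so a driver only supplies a per-range invalidate callback and registers each userptr range. A hedged sketch of roughly how that looks; the names below follow the interval-notifier API as eventually merged (the v2 posting above still says mmu_range_notifier), and the my_* names are hypothetical. Assumes <linux/mmu_notifier.h> and <linux/sched.h>:

struct my_range {
	struct mmu_interval_notifier notifier;
	struct mutex lock;
	/* driver payload (GPU mapping state, ...) */
};

static bool my_range_invalidate(struct mmu_interval_notifier *mni,
				const struct mmu_notifier_range *range,
				unsigned long cur_seq)
{
	struct my_range *r = container_of(mni, struct my_range, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;	/* core will retry from a blockable context */

	mutex_lock(&r->lock);
	mmu_interval_set_seq(mni, cur_seq);	/* mark cached pages stale */
	/* driver-specific teardown of r's GPU mapping goes here */
	mutex_unlock(&r->lock);
	return true;
}

static const struct mmu_interval_notifier_ops my_range_ops = {
	.invalidate = my_range_invalidate,
};

/* Registration: one notifier per tracked userptr range. */
static int my_range_register(struct my_range *r, unsigned long start,
			     unsigned long length)
{
	return mmu_interval_notifier_insert(&r->notifier, current->mm,
					    start, length, &my_range_ops);
}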
2017 Feb 28
0
[PATCH 2/2] gpu: drm: Convert printk(KERN_<LEVEL> to pr_<level>
.../radeon/radeon_gem.c
@@ -106,7 +106,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
}
if (!domain) {
/* Do nothings */
- printk(KERN_WARNING "Set domain without domain !\n");
+ pr_warn("Set domain without domain !\n");
return 0;
}
if (domain == RADEON_GEM_DOMAIN_CPU) {
@@ -116,7 +116,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
r = -EBUSY;
if (r < 0 && r != -EINTR) {
- printk(KERN_ERR "Failed to wait for object: %li\n", r);
+ pr_err("Failed to wait for object: %li\n", r);
return r;
}
}...
2014 May 14
17
[RFC PATCH v1 00/16] Convert all ttm drivers to use the new reservation interface
This series depends on the previously posted reservation api patches.
2 of them are not yet in for-next-fences branch of
git://git.linaro.org/people/sumit.semwal/linux-3.x.git
The missing patches are still in my vmwgfx_wip branch at
git://people.freedesktop.org/~mlankhorst/linux
All ttm drivers are converted to the fence api, fence_lock is removed
and rcu is used in its place.
qxl is the first
2014 Jul 31
19
[PATCH 01/19] fence: add debugging lines to fence_is_signaled for the callback
fence_is_signaled callback should support being run in
atomic context, but not in irq context.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com>
---
include/linux/fence.h | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/include/linux/fence.h b/include/linux/fence.h
index d174585b874b..c1a4519ba2f5 100644
---
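The constraint stated above (callable in atomic context, but not from irq context, and never sleeping) shapes how drivers write this callback. A hypothetical sketch of a .signaled implementation that satisfies it, using the 2014-era struct fence naming (later renamed dma_fence); the my_fence names and sequence-number scheme are made up. Assumes <linux/fence.h> and <linux/io.h>:

struct my_fence {
	struct fence base;
	u32 seqno;		/* value the ring must reach to signal */
	u32 __iomem *hw_seq;	/* last sequence number written back by the HW */
};

static bool my_fence_signaled(struct fence *f)
{
	struct my_fence *mf = container_of(f, struct my_fence, base);

	/* A single MMIO read: no locks, no sleeping, safe in atomic context. */
	return (s32)(readl(mf->hw_seq) - mf->seqno) >= 0;
}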
2014 Jul 09
22
[PATCH 00/17] Convert TTM to the new fence interface.
This series applies on top of the driver-core-next branch of
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
Before converting ttm to the new fence interface I had to fix some
drivers to require a reservation before poking with fence_obj.
After flipping the switch RCU becomes available instead, and
the extra reservations can be dropped again. :-)
I've done at least basic
2017 Feb 28
8
[PATCH 2/2] gpu: drm: Convert printk(KERN_<LEVEL> to pr_<level>
...static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> }
> if (!domain) {
> /* Do nothings */
> - printk(KERN_WARNING "Set domain without domain !\n");
> + pr_warn("Set domain without domain !\n");
> return 0;
> }
> if (domain == RADEON_GEM_DOMAIN_CPU) {
> @@ -116,7 +116,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
> r = -EBUSY;
>
> if (r < 0 && r != -EINTR) {
> - printk(KERN_ERR "Failed to wait for object: %li\n", r);
> + pr_err("Failed to wait for object: %li\n"...