search for: amdgpu_ttm_tt_affect_userptr

Displaying 10 results from an estimated 10 matches for "amdgpu_ttm_tt_affect_userptr".

2019 Oct 28
2
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...r *mrn,
+				     const struct mmu_notifier_range *range)
 {
-	struct amdgpu_bo *bo;
+	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	long r;
 
-	list_for_each_entry(bo, &node->bos, mn_list) {
-
-		if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start, end))
-			continue;
-
-		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
-			true, false, MAX_SCHEDULE_TIMEOUT);
-		if (r <= 0)
-			DRM_ERROR("(%ld) failed to wait for user bo\n", r);
-	}
+	/* FIXME: Is this necessary? */
+	if (!amdgpu_ttm_tt_affect_userptr(b...
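Reading the hunk as a whole: the callback stops walking a per-node list of buffer objects and filtering each one with amdgpu_ttm_tt_affect_userptr(); the notifier is now embedded in the amdgpu_bo itself, so container_of() hands back the single affected BO directly, and the reservation-object wait applies to just that BO. The sketch below is a reconstruction under those assumptions only: the function name, the bool return convention, and everything the snippet elides (including the adev lookup, which is used outside the excerpt and omitted here) are guesses, not the literal patch.

static bool amdgpu_mn_invalidate_gfx(struct mmu_range_notifier *mrn,
				     const struct mmu_notifier_range *range)
{
	/* One notifier per BO: no interval-tree node and no bo list walk. */
	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
	long r;

	/* FIXME in the patch: the core notifier tree already range-filters us. */
	if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, range->start,
					  range->end))
		return true;

	/* Wait for any GPU work that still references the user pages. */
	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
				      MAX_SCHEDULE_TIMEOUT);
	if (r <= 0)
		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
	return true;
}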
2019 Oct 29
0
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...e *range)
> {
> -	struct amdgpu_bo *bo;
> +	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
> +	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> 	long r;
> 
> -	list_for_each_entry(bo, &node->bos, mn_list) {
> -
> -		if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start, end))
> -			continue;
> -
> -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> -			true, false, MAX_SCHEDULE_TIMEOUT);
> -		if (r <= 0)
> -			DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> -	}
> +	/* FIXME: Is this necessary?...
2019 Oct 29
0
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...e *range)
> {
> -	struct amdgpu_bo *bo;
> +	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
> +	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> 	long r;
> 
> -	list_for_each_entry(bo, &node->bos, mn_list) {
> -
> -		if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start, end))
> -			continue;
> -
> -		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
> -			true, false, MAX_SCHEDULE_TIMEOUT);
> -		if (r <= 0)
> -			DRM_ERROR("(%ld) failed to wait for user bo\n", r);
> -	}
> +	/* FIXME: Is this necessary?...
2019 Nov 01
2
[PATCH v2 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
...->tbo.bdev);
> 	long r;
> 
> -	/*
> -	 * FIXME: Must hold some lock shared with
> -	 * amdgpu_ttm_tt_get_user_pages_done()
> -	 */
> -	mmu_range_set_seq(mrn, cur_seq);
> +	mutex_lock(&adev->notifier_lock);
> 
> -	/* FIXME: Is this necessary? */
> -	if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, range->start,
> -					  range->end))
> -		return true;
> +	mmu_range_set_seq(mrn, cur_seq);
> 
> -	if (!mmu_notifier_range_blockable(range))
> +	if (!mmu_notifier_range_blockable(range)) {
> +		mutex_unlock(&adev->notifier_lock);
>  		return fal...
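Reassembled, the change being reviewed here is mostly about ordering: the earlier "must hold some lock shared with amdgpu_ttm_tt_get_user_pages_done()" FIXME is resolved by taking adev->notifier_lock around mmu_range_set_seq(), and the lock then has to be dropped again on the early return for non-blockable invalidations. A minimal sketch of the callback shape this converges on is below; the non-blockable bail-out is placed before the (sleeping) mutex_lock(), whereas the quoted hunk takes the lock first, so treat this as one possible resolution rather than the series' final word. The function name and the elided tail are assumptions.

static bool amdgpu_mn_invalidate_gfx(struct mmu_range_notifier *mrn,
				     const struct mmu_notifier_range *range,
				     unsigned long cur_seq)
{
	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
	long r;

	/* mutex_lock() can sleep, so bail out first if we may not block. */
	if (!mmu_notifier_range_blockable(range))
		return false;

	/* Serializes against amdgpu_ttm_tt_get_user_pages_done(). */
	mutex_lock(&adev->notifier_lock);

	/* Publish the new invalidation sequence while the lock is held. */
	mmu_range_set_seq(mrn, cur_seq);

	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
				      MAX_SCHEDULE_TIMEOUT);
	mutex_unlock(&adev->notifier_lock);
	if (r <= 0)
		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
	return true;
}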
2019 Oct 29
0
[PATCH v2 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
...uct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> 	long r;
> 
> +	/*
> +	 * FIXME: Must hold some lock shared with
> +	 * amdgpu_ttm_tt_get_user_pages_done()
> +	 */
> +	mmu_range_set_seq(mrn, cur_seq);
> +
>  	/* FIXME: Is this necessary? */
>  	if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, range->start,
>  					  range->end))
> @@ -119,11 +104,18 @@ static const struct mmu_range_notifier_ops amdgpu_mn_gfx_ops = {
>   * evicting all user-mode queues of the process.
>   */
>  static bool amdgpu_mn_invalidate_hsa(struct mmu_range_notifier *mrn, ...
2019 Oct 28
1
[PATCH v2 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
...= container_of(mrn, struct amdgpu_bo, notifier);
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	long r;
 
+	/*
+	 * FIXME: Must hold some lock shared with
+	 * amdgpu_ttm_tt_get_user_pages_done()
+	 */
+	mmu_range_set_seq(mrn, cur_seq);
+
 	/* FIXME: Is this necessary? */
 	if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, range->start,
 					  range->end))
@@ -119,11 +104,18 @@ static const struct mmu_range_notifier_ops amdgpu_mn_gfx_ops = {
  * evicting all user-mode queues of the process.
  */
 static bool amdgpu_mn_invalidate_hsa(struct mmu_range_notifier *mrn,
-				     const struct mmu_not...
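The tail of this excerpt moves on to amdgpu_mn_invalidate_hsa(), the HSA/KFD variant of the same conversion. Instead of waiting on the BO's reservation object, that path responds to an invalidation by evicting the process's user-mode queues, as the quoted comment says. The sketch below shows that variant with the notifier_lock fix from the later discussion folded in; the eviction entry point and its arguments (amdgpu_amdkfd_evict_userptr(), bo->kfd_bo, bo->notifier.mm) are assumed from today's mainline driver and do not appear in the snippet itself.

static bool amdgpu_mn_invalidate_hsa(struct mmu_range_notifier *mrn,
				     const struct mmu_notifier_range *range,
				     unsigned long cur_seq)
{
	struct amdgpu_bo *bo = container_of(mrn, struct amdgpu_bo, notifier);
	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);

	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&adev->notifier_lock);

	mmu_range_set_seq(mrn, cur_seq);

	/* Kick KFD to evict the process's user-mode queues (assumed call). */
	amdgpu_amdkfd_evict_userptr(bo->kfd_bo, bo->notifier.mm);
	mutex_unlock(&adev->notifier_lock);

	return true;
}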
2019 Nov 01
1
[PATCH v2 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
...
>> -	/*
>> -	 * FIXME: Must hold some lock shared with
>> -	 * amdgpu_ttm_tt_get_user_pages_done()
>> -	 */
>> -	mmu_range_set_seq(mrn, cur_seq);
>> +	mutex_lock(&adev->notifier_lock);
>> 
>> -	/* FIXME: Is this necessary? */
>> -	if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, range->start,
>> -					  range->end))
>> -		return true;
>> +	mmu_range_set_seq(mrn, cur_seq);
>> 
>> -	if (!mmu_notifier_range_blockable(range))
>> +	if (!mmu_notifier_range_blockable(range)) {
>> +		mutex_unlock(&adev->notif...
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>

8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) follow a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree, the others ...
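Concretely, the pattern the cover letter describes looks something like the sketch below in each of those drivers: an invalidate_range_start() callback that immediately intersects the invalidated range with a driver-private interval tree (or list) of mirrored ranges to decide whether the driver cares at all. The driver_* names are made up for illustration; only the shape corresponds to the drivers listed.

static int driver_invalidate_range_start(struct mmu_notifier *mn,
					 const struct mmu_notifier_range *range)
{
	struct driver_mirror *m = container_of(mn, struct driver_mirror, mn);
	struct interval_tree_node *it;

	/* Most invalidations touch nothing we mirror; filter them out here. */
	for (it = interval_tree_iter_first(&m->ranges, range->start,
					   range->end - 1);
	     it;
	     it = interval_tree_iter_next(it, range->start, range->end - 1)) {
		struct driver_obj *obj =
			container_of(it, struct driver_obj, it_node);

		/* Driver-specific teardown of the mirrored mapping. */
		driver_obj_invalidate(obj, range->start, range->end);
	}
	return 0;
}

The series pulls exactly this range filtering (plus the associated sequence and locking logic) into the core as mmu_range_notifier, so a driver's invalidate callback is only called for ranges it registered.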
2019 Oct 29
4
[PATCH v2 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
On Tue, Oct 29, 2019 at 07:22:37PM +0000, Yang, Philip wrote:
> Hi Jason,
>
> I did a quick test after merging amd-staging-drm-next with the
> mmu_notifier branch, which includes this set of changes. The test results
> show various failures: apps stuck intermittently, no GUI display, etc. I
> am going through the changes and will try to figure out the cause.

Thanks! I'm not ...
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>

8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) follow a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree, the others ...
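The other half of the consolidation sits on the driver's page-table-population side, which pairs each per-range notifier with a sequence-count retry loop: sample the sequence, fault the pages, then recheck the sequence under the same lock the invalidate callback takes before committing the mapping. A hypothetical sketch of that pattern follows; it uses the helper spellings that exist in mainline today (mmu_interval_read_begin()/mmu_interval_read_retry(); the mmu_range_* names in the quoted patches were renamed along the way), and every driver_* function is a placeholder.

static int driver_populate_range(struct driver_obj *obj)
{
	unsigned long seq;
	int ret;

retry:
	/* Snapshot the invalidation sequence before faulting the pages. */
	seq = mmu_interval_read_begin(&obj->notifier);

	ret = driver_get_user_pages(obj);	/* e.g. via hmm_range_fault() */
	if (ret)
		return ret;

	/* Commit under the same lock the invalidate callback uses. */
	mutex_lock(&obj->notifier_lock);
	if (mmu_interval_read_retry(&obj->notifier, seq)) {
		/* An invalidation raced with us: throw the work away and retry. */
		mutex_unlock(&obj->notifier_lock);
		driver_put_user_pages(obj);
		goto retry;
	}
	driver_program_gpu_page_tables(obj);
	mutex_unlock(&obj->notifier_lock);
	return 0;
}

In the amdgpu patches above, adev->notifier_lock plays the role of obj->notifier_lock and amdgpu_ttm_tt_get_user_pages_done() sits on the recheck side of this loop.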