Displaying 16 results from an estimated 16 matches for "mmu_interval_read_retry".
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...seq - Save the invalidation sequence
+ * @mni - The mni passed to the invalidate() callback
+ * @cur_seq - The cur_seq passed to the invalidate() callback
+ *
+ * This must be called unconditionally from the invalidate callback of a
+ * struct mmu_interval_notifier_ops under the same lock that is used to call
+ * mmu_interval_read_retry(). It updates the sequence number for later use by
+ * mmu_interval_read_retry(). The provided cur_seq will always be odd.
+ *
+ * If the caller does not call mmu_interval_read_begin() or
+ * mmu_interval_read_retry() then this call is not required.
+ */
+static inline void mmu_interval_set_seq(str...
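As a minimal sketch of the rule this comment describes (not taken from the patch; the drv_notifier structure and its lock are hypothetical), an invalidate() callback records the sequence under the same lock the fault path later holds for mmu_interval_read_retry():

static bool drv_invalidate(struct mmu_interval_notifier *mni,
			   const struct mmu_notifier_range *range,
			   unsigned long cur_seq)
{
	struct drv_notifier *dn = container_of(mni, struct drv_notifier, mni);

	/* Take the lock the fault path also holds around mmu_interval_read_retry() */
	if (mmu_notifier_range_blockable(range))
		mutex_lock(&dn->lock);
	else if (!mutex_trylock(&dn->lock))
		return false;

	mmu_interval_set_seq(mni, cur_seq);
	/* ... tear down the device mappings covered by range ... */
	mutex_unlock(&dn->lock);
	return true;
}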
2020 Apr 22
1
[PATCH hmm 2/5] mm/hmm: make hmm_range_fault return 0 or -1
...truct amdgpu_bo *bo, struct page **pages)
> down_read(&mm->mmap_sem);
> r = hmm_range_fault(range);
> up_read(&mm->mmap_sem);
> - if (unlikely(r <= 0)) {
> + if (unlikely(r)) {
> /*
> * FIXME: This timeout should encompass the retry from
> * mmu_interval_read_retry() as well.
> */
> - if ((r == 0 || r == -EBUSY) && !time_after(jiffies, timeout))
> + if ((r == -EBUSY) && !time_after(jiffies, timeout))
Please also kill the superfluous inner parentheses here.
> + * Return: 0 or -ERRNO with one of the following status codes:
Maybe s...
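For reference, the v2 posting quoted further down in these results resolves this by dropping the extra parentheses:

	if (r == -EBUSY && !time_after(jiffies, timeout))
		goto retry;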
2020 Jan 14
2
[PATCH v6 4/6] mm/mmu_notifier: add mmu_interval_notifier_find()
On Mon, Jan 13, 2020 at 02:47:01PM -0800, Ralph Campbell wrote:
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 47ad9cc89aab..4efecc0f13cb 100644
> +++ b/mm/mmu_notifier.c
> @@ -1171,6 +1171,39 @@ void mmu_interval_notifier_update(struct mmu_interval_notifier *mni,
> }
> EXPORT_SYMBOL_GPL(mmu_interval_notifier_update);
>
> +struct mmu_interval_notifier
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2020 Jan 15
0
[PATCH v6 4/6] mm/mmu_notifier: add mmu_interval_notifier_find()
...ter.
>
> .. and this poses all sorts of questions about consistency with items
> on the deferred list. Should find return an item undergoing deletion?
I don't think so. The deferred operations are all complete by the time
mmu_interval_read_begin() returns, and the sequence number check with
mmu_interval_read_retry() guarantees there have been no changes during the
window when hmm_range_fault() was called without the driver page table lock held.
> Should find return items using the old interval tree values or their
> new updated values?
>
> Jason
I think it should only look at the old interval tree and not the...
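A rough sketch of the read_begin()/read_retry() protocol being described in this exchange, with hypothetical notifier/driver_lock names and return-code handling elided (see the hmm_range_fault() return-code discussion in the later results):

again:
	range.notifier_seq = mmu_interval_read_begin(&notifier);
	down_read(&mm->mmap_sem);
	ret = hmm_range_fault(&range);
	up_read(&mm->mmap_sem);
	if (ret)
		return ret;	/* -EBUSY retry handling omitted for brevity */

	mutex_lock(&driver_lock);
	if (mmu_interval_read_retry(&notifier, range.notifier_seq)) {
		/* An invalidation ran after read_begin(); the pfns may be stale */
		mutex_unlock(&driver_lock);
		goto again;
	}
	/* No invalidation raced with us: safe to program the device page table */
	mutex_unlock(&driver_lock);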
2019 Nov 12
0
[PATCH v3 12/14] drm/amdgpu: Use mmu_interval_notifier instead of hmm_mirror
...nge->notifier_seq = mmu_interval_read_begin(&bo->notifier);
+
+ down_read(&mm->mmap_sem);
r = hmm_range_fault(range, 0);
up_read(&mm->mmap_sem);
-
- if (unlikely(r < 0))
+ if (unlikely(r <= 0)) {
+ /*
+ * FIXME: This timeout should encompass the retry from
+ * mmu_interval_read_retry() as well.
+ */
+ if ((r == 0 || r == -EBUSY) && !time_after(jiffies, timeout))
+ goto retry;
goto out_free_pfns;
+ }
for (i = 0; i < ttm->num_pages; i++) {
- pages[i] = hmm_device_entry_to_page(range, pfns[i]);
+ /* FIXME: The pages cannot be touched outside the notifie...
2025 Jan 24
3
[PATCH v1 0/2] nouveau/svm: fix + cleanup for nouveau_atomic_range_fault()
One fix and a minor cleanup.
Only compile-tested due to lack of HW, so I'd be happy if someone with
access to HW could test, but I'm not sure how easy this is to trigger. A
concurrent MADV_DONTNEED on the PTE we just converted might be able
to trigger it.
Cc: Karol Herbst <kherbst at redhat.com>
Cc: Lyude Paul <lyude at redhat.com>
Cc: Danilo Krummrich <dakr at
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and
discussions about simplifying it have come up a number of times.
This small series removes the customizable pfn format and simplifies the
return code of hmm_range_fault()
All the drivers are adjusted to process in the simplified format.
I would appreciate tested-by's for the two
2020 Apr 22
0
[PATCH hmm 2/5] mm/hmm: make hmm_range_fault return 0 or -1
...12 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
down_read(&mm->mmap_sem);
r = hmm_range_fault(range);
up_read(&mm->mmap_sem);
- if (unlikely(r <= 0)) {
+ if (unlikely(r)) {
/*
* FIXME: This timeout should encompass the retry from
* mmu_interval_read_retry() as well.
*/
- if ((r == 0 || r == -EBUSY) && !time_after(jiffies, timeout))
+ if ((r == -EBUSY) && !time_after(jiffies, timeout))
goto retry;
goto out_free_pfns;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 645fe...
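Spelled out as a caller-side sketch (the timeout bookkeeping follows the amdgpu hunk above, other details are elided): with this patch hmm_range_fault() returns 0 on success or a negative errno, and only -EBUSY means "collided with an invalidation, try again":

	timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
retry:
	range.notifier_seq = mmu_interval_read_begin(range.notifier);
	down_read(&mm->mmap_sem);
	r = hmm_range_fault(&range);
	up_read(&mm->mmap_sem);
	if (r) {
		/* Retry -EBUSY collisions until the timeout expires */
		if (r == -EBUSY && !time_after(jiffies, timeout))
			goto retry;
		return r;
	}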
2020 May 01
0
[PATCH hmm v2 2/5] mm/hmm: make hmm_range_fault return 0 or -1
...12 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
down_read(&mm->mmap_sem);
r = hmm_range_fault(range);
up_read(&mm->mmap_sem);
- if (unlikely(r <= 0)) {
+ if (unlikely(r)) {
/*
* FIXME: This timeout should encompass the retry from
* mmu_interval_read_retry() as well.
*/
- if ((r == 0 || r == -EBUSY) && !time_after(jiffies, timeout))
+ if (r == -EBUSY && !time_after(jiffies, timeout))
goto retry;
goto out_free_pfns;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 645fedd...
2019 Nov 12
0
[PATCH v3 13/14] mm/hmm: remove hmm_mirror and related
...goto again;
- }
- hmm_range_unregister(&range);
+ if (ret == -EBUSY)
+ goto again;
return ret;
}
+ up_read(&mm->mmap_sem);
+
take_lock(driver->update);
- if (!hmm_range_valid(&range)) {
+ if (mmu_interval_read_retry(&ni, range.notifier_seq)) {
release_lock(driver->update);
- up_read(&mm->mmap_sem);
goto again;
}
- // Use pfns array content to update device page table
+ /* Use pfns array content to update device page table,
+ * under the updat...
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...m_pfns = kvmalloc_array(ttm->num_pages,
+ sizeof(*range->hmm_pfns), GFP_KERNEL);
+ if (unlikely(!range->hmm_pfns)) {
r = -ENOMEM;
goto out_free_ranges;
}
@@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
* the notifier_lock, and mmu_interval_read_retry() must be done first.
*/
for (i = 0; i < ttm->num_pages; i++)
- pages[i] = hmm_device_entry_to_page(range, range->pfns[i]);
+ pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);
gtt->range = range;
mmput(mm);
@@ -877,7 +863,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu...
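A sketch of consuming the fixed-format array this patch introduces: each hmm_pfns[] entry is an unsigned long carrying HMM_PFN_* flag bits, and it is only converted to a struct page after mmu_interval_read_retry() has passed under the notifier lock (the loop body is illustrative, the error path hypothetical):

	/* Called with the notifier lock held, after mmu_interval_read_retry() == false */
	for (i = 0; i < npages; i++) {
		if (!(range->hmm_pfns[i] & HMM_PFN_VALID))
			return -EFAULT;		/* hypothetical error handling */
		pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);
	}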
2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...m_pfns = kvmalloc_array(ttm->num_pages,
+ sizeof(*range->hmm_pfns), GFP_KERNEL);
+ if (unlikely(!range->hmm_pfns)) {
r = -ENOMEM;
goto out_free_ranges;
}
@@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
* the notifier_lock, and mmu_interval_read_retry() must be done first.
*/
for (i = 0; i < ttm->num_pages; i++)
- pages[i] = hmm_device_entry_to_page(range, range->pfns[i]);
+ pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);
gtt->range = range;
mmput(mm);
@@ -877,7 +863,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu...
2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and
discussions about simplifying it have come up a number of times.
This small series removes the customizable pfn format and simplifies the
return code of hmm_range_fault()
All the drivers are adjusted to process in the simplified format.
I would appreciate tested-by's for the two
2020 Apr 22
1
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
..._pages,
> + sizeof(*range->hmm_pfns), GFP_KERNEL);
> + if (unlikely(!range->hmm_pfns)) {
> r = -ENOMEM;
> goto out_free_ranges;
> }
> @@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
> * the notifier_lock, and mmu_interval_read_retry() must be done first.
> */
> for (i = 0; i < ttm->num_pages; i++)
> - pages[i] = hmm_device_entry_to_page(range, range->pfns[i]);
> + pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);
>
> gtt->range = range;
> mmput(mm);
> @@ -877,7 +863,7 @@ int am...
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for