Displaying 18 results from an estimated 18 matches for "invalidate_seq".
2019 Nov 07
5
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...which already has, for example:
struct mmu_notifier_range
...and you're adding:
struct mmu_range_notifier
...so I'll try to help sort that out.
2. I'm also seeing a couple of things that are really hard for the reader
to verify are correct (abuse and battery of the low bit in .invalidate_seq,
for example, haha), so I have some recommendations there.
3. Documentation improvements, which are easy to apply, with perhaps one exception.
(Here, because this is a complicated area, documentation does make a difference,
so it's worth a little extra fuss.)
4. Other nits that don't matter too...
2019 Nov 07
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...r_range *range,
> > + unsigned long cur_seq);
> > +};
> > +
> > +struct mmu_range_notifier {
> > + struct interval_tree_node interval_tree;
> > + const struct mmu_range_notifier_ops *ops;
> > + struct hlist_node deferred_item;
> > + unsigned long invalidate_seq;
> > + struct mm_struct *mm;
> > +};
> > +
>
> Again, now we have the new struct mmu_range_notifier, and the old
> struct mmu_notifier_range, and it's not good.
>
> Ideas:
>
> a) Live with it.
>
> b) (Discarded, too many callers): rename old one...
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...r_range *range,
> > + unsigned long cur_seq);
> > +};
> > +
> > +struct mmu_range_notifier {
> > + struct interval_tree_node interval_tree;
> > + const struct mmu_range_notifier_ops *ops;
> > + struct hlist_node deferred_item;
> > + unsigned long invalidate_seq;
> > + struct mm_struct *mm;
> > +};
> > +
>
> Again, now we have the new struct mmu_range_notifier, and the old
> struct mmu_notifier_range, and it's not good.
>
> Ideas:
>
> a) Live with it.
>
> b) (Discarded, too many callers): rename old one...
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...+
>>> + spin_lock(&mmn_mm->lock);
>>> + if (--mmn_mm->active_invalidate_ranges ||
>>> + !mn_itree_is_invalidating(mmn_mm)) {
>>> + spin_unlock(&mmn_mm->lock);
>>> + return;
>>> + }
>>> +
>>> + mmn_mm->invalidate_seq++;
>>
>> Is this the right place for an assertion that this is now an even value?
>
> Yes, but I'm reluctant to add such a runtime check on this fast-ish path.
> How about a comment?
Sure.
>
>>> + need_wake = true;
>>> +
>>> + /*
>>>...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...releases them when all
invalidates are done, via active_invalidate_ranges count.
This approach avoids having to intersect the interval tree twice (as
umem_odp does) at the potential cost of a longer device page fault.
- kvm/umem_odp use a sequence counter to drive the collision retry,
via invalidate_seq
- a deferred work todo list on unlock scheme like RTNL, via deferred_list.
This makes adding/removing interval tree members more deterministic
- seqlock, except this version makes the seqlock idea multi-holder on the
write side by protecting it with active_invalidate_ranges and a spinlock
To...
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...releases them when all
invalidates are done, via active_invalidate_ranges count.
This approach avoids having to intersect the interval tree twice (as
umem_odp does) at the potential cost of a longer device page fault.
- kvm/umem_odp use a sequence counter to drive the collision retry,
via invalidate_seq
- a deferred work todo list on unlock scheme like RTNL, via deferred_list.
This makes adding/removing interval tree members more deterministic
- seqlock, except this version makes the seqlock idea multi-holder on the
write side by protecting it with active_invalidate_ranges and a spinlock
To...
2019 Nov 07
2
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...r_seq);
> No, it is always odd; you must call mmu_range_set_seq() only from the
> op->invalidate_range() callback, at which point the seq is odd. As well,
> when the mrn is added and its seq is first set, it is always set to an odd
> value. Maybe the comment should read:
>
> * mrn->invalidate_seq is always, yes always, set to an odd value. This ensures
>
> To stress that it is not an error.
I went with this:
/*
* mrn->invalidate_seq must always be set to an odd value via
* mmu_range_set_seq() using the provided cur_seq from
* mn_itree_inv_start_range(). This ensures that...
2019 Nov 23
1
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...s drop the comment, I'm not sure wake_up_q is even a function this
> layer should be calling.
Actually, I think you can remove the "need_wake" variable since it is
unconditionally set to "true".
Also, the comment in__mmu_interval_notifier_insert() says
"mni->mr_invalidate_seq" and I think that should be
"mni->invalidate_seq".
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...ways odd, you must call mmu_range_set_seq() only from the
> > op->invalidate_range() callback, at which point the seq is odd. As well,
> > when the mrn is added and its seq is first set, it is always set to an odd
> > value. Maybe the comment should read:
> >
> > * mrn->invalidate_seq is always, yes always, set to an odd value. This ensures
> >
> > To stress that it is not an error.
>
> I went with this:
>
> /*
> * mrn->invalidate_seq must always be set to an odd value via
> * mmu_range_set_seq() using the provided cur_seq from
> * m...
2020 Jan 13
0
[PATCH v6 3/6] mm/notifier: add mmu_interval_notifier_update()
...t a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 6dcaa632eef7..0ce59b4f22c2 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -251,6 +251,8 @@ struct mmu_interval_notifier {
struct mm_struct *mm;
struct hlist_node deferred_item;
unsigned long invalidate_seq;
+ unsigned long updated_start;
+ unsigned long updated_last;
};
#ifdef CONFIG_MMU_NOTIFIER
@@ -310,6 +312,8 @@ int mmu_interval_notifier_insert_safe(
const struct mmu_interval_notifier_ops *ops);
void mmu_interval_notifier_remove(struct mmu_interval_notifier *mni);
void mmu_interval_notifi...
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2020 Jan 13
0
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
..._range *range,
+ unsigned long cur_seq)
{
- struct svm_notifier *sn =
- container_of(mni, struct svm_notifier, notifier);
+ struct svmm_interval *smi =
+ container_of(mni, struct svmm_interval, notifier);
+ struct nouveau_svmm *svmm = smi->svmm;
/*
- * serializes the update to mni->invalidate_seq done by caller and
+ * Serializes the update to mni->invalidate_seq done by the caller and
* prevents invalidation of the PTE from progressing while HW is being
- * programmed. This is very hacky and only works because the normal
- * notifier that does invalidation is always called after t...
2020 Jan 13
0
[PATCH v6 2/6] mm/mmu_notifier: add mmu_interval_notifier_put()
..._put(mni);
+
/*
* The possible sleep on progress in the invalidation requires the
* caller not hold any locks held by invalidation callbacks.
@@ -1053,11 +1080,34 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *mni)
wait_event(mmn_mm->wq,
READ_ONCE(mmn_mm->invalidate_seq) != seq);
- /* pairs with mmgrab in mmu_interval_notifier_insert() */
- mmdrop(mm);
+ /* pairs with mmgrab() in __mmu_interval_notifier_insert() */
+ if (!mni->ops->release)
+ mmdrop(mm);
}
EXPORT_SYMBOL_GPL(mmu_interval_notifier_remove);
+/**
+ * mmu_interval_notifier_put - Unregister...
2019 Oct 29
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...nvalidates are done, via active_invalidate_ranges count.
> This approach avoids having to intersect the interval tree twice (as
> umem_odp does) at the potential cost of a longer device page fault.
>
> - kvm/umem_odp use a sequence counter to drive the collision retry,
> via invalidate_seq
>
> - a deferred work todo list on unlock scheme like RTNL, via deferred_list.
> This makes adding/removing interval tree members more deterministic
>
> - seqlock, except this version makes the seqlock idea multi-holder on the
> write side by protecting it with active_invali...
2019 Nov 13
2
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
> +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
> + struct mm_struct *mm, unsigned long start,
> + unsigned long length,
> + const struct mmu_interval_notifier_ops *ops);
> +int mmu_interval_notifier_insert_locked(
> + struct mmu_interval_notifier *mni, struct mm_struct *mm,
> + unsigned long start, unsigned long length,
> + const struct
2019 Oct 15
0
[PATCH hmm 11/15] nouveau: use mmu_range_notifier instead of hmm_mirror
...nvalidate(struct mmu_range_notifier *mrn,
+ const struct mmu_notifier_range *range)
{
- bool ret = hmm_range_valid(range);
+ struct svm_notifier *sn =
+ container_of(mrn, struct svm_notifier, notifier);
- hmm_range_unregister(range);
- return ret;
+ /*
+ * serializes the update to mrn->invalidate_seq done by caller and
+ * prevents invalidation of the PTE from progressing while HW is being
+ * programmed. This is very hacky and only works because the normal
+ * notifier that does invalidation is always called after the range
+ * notifier.
+ */
+ if (mmu_notifier_range_blockable(range))
+...
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for