Displaying 8 results from an estimated 8 matches for "mr_invalidate_seq".
2019 Nov 23
1
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...Let's drop the comment, I'm not sure wake_up_q is even a function this
> layer should be calling.
Actually, I think you can remove the "need_wake" variable since it is
unconditionally set to "true".
Also, the comment in __mmu_interval_notifier_insert() says
"mni->mr_invalidate_seq" and I think that should be
"mni->invalidate_seq".
2019 Nov 13
2
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
> +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
> + struct mm_struct *mm, unsigned long start,
> + unsigned long length,
> + const struct mmu_interval_notifier_ops *ops);
> +int mmu_interval_notifier_insert_locked(
> + struct mmu_interval_notifier *mni, struct mm_struct *mm,
> + unsigned long start, unsigned long length,
> + const struct
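The prototypes quoted above register a notifier over [start, start + length) together with an ops table whose callback is invoked when an invalidation touches that range. A minimal userspace model of that shape, with an overlap check standing in for the interval tree (all names and types here are illustrative stand-ins, not the kernel definitions):

```c
#include <assert.h>
#include <stdbool.h>

struct interval_notifier;

struct interval_notifier_ops {
    /* called when an invalidation overlaps the registered range */
    bool (*invalidate)(struct interval_notifier *in,
                       unsigned long start, unsigned long end);
};

struct interval_notifier {
    unsigned long start;
    unsigned long end;      /* start + length, half-open */
    const struct interval_notifier_ops *ops;
    int hits;               /* bookkeeping for this example only */
};

static int interval_notifier_insert(struct interval_notifier *in,
                                    unsigned long start,
                                    unsigned long length,
                                    const struct interval_notifier_ops *ops)
{
    in->start = start;
    in->end = start + length;
    in->ops = ops;
    in->hits = 0;
    return 0;
}

static void invalidate_range(struct interval_notifier *in,
                             unsigned long start, unsigned long end)
{
    /* fire the callback only when [start, end) overlaps the range */
    if (in->start < end && start < in->end)
        in->ops->invalidate(in, start, end);
}

static bool count_invalidate(struct interval_notifier *in,
                             unsigned long start, unsigned long end)
{
    (void)start;
    (void)end;
    in->hits++;
    return true;
}
```

In the real API the `_locked` variant exists for callers that already hold mmap_sem; this sketch ignores locking entirely.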
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...are not allowed to change
> > + * it. Retrying until invalidation is done is tricky due to the
> > + * possibility for live lock, instead defer the add to the unlock so
> > + * this algorithm is deterministic.
> > + *
> > + * In all cases the value for the mrn->mr_invalidate_seq should be
> > + * odd, see mmu_range_read_begin()
> > + */
> > + spin_lock(&mmn_mm->lock);
> > + if (mmn_mm->active_invalidate_ranges) {
> > + if (mn_itree_is_invalidating(mmn_mm))
> > + hlist_add_head(&mrn->deferred_item,
> > +...
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...d.
+ *
+ * If the itree is invalidating then we are not allowed to change
+ * it. Retrying until invalidation is done is tricky due to the
+ * possibility for live lock, instead defer the add to the unlock so
+ * this algorithm is deterministic.
+ *
+ * In all cases the value for the mrn->mr_invalidate_seq should be
+ * odd, see mmu_range_read_begin()
+ */
+ spin_lock(&mmn_mm->lock);
+ if (mmn_mm->active_invalidate_ranges) {
+ if (mn_itree_is_invalidating(mmn_mm))
+ hlist_add_head(&mrn->deferred_item,
+ &mmn_mm->deferred_list);
+ else {
+ mmn_mm->invalidat...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...+ * If the itree is invalidating then we are not allowed to change
+ * it. Retrying until invalidation is done is tricky due to the
+ * possibility for live lock, instead defer the add to
+ * mn_itree_inv_end() so this algorithm is deterministic.
+ *
+ * In all cases the value for the mni->mr_invalidate_seq should be
+ * odd, see mmu_interval_read_begin()
+ */
+ spin_lock(&mmn_mm->lock);
+ if (mmn_mm->active_invalidate_ranges) {
+ if (mn_itree_is_invalidating(mmn_mm))
+ hlist_add_head(&mni->deferred_item,
+ &mmn_mm->deferred_list);
+ else {
+ mmn_mm->invali...
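The deferral logic quoted above can be modelled in standalone C: the sequence counter is odd while any invalidation is in flight, inserts during that window are queued on a deferred list, and mn_itree_inv_end() completes them once the last invalidation finishes. This is a simplified single-threaded sketch with illustrative names, not the kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct notifier {
    struct notifier *next;          /* deferred-list linkage */
    bool in_tree;                   /* stand-in for itree membership */
};

struct notifier_map {
    unsigned long invalidate_seq;   /* odd while invalidating */
    int active_invalidate_ranges;
    struct notifier *deferred;      /* adds queued during invalidation */
};

static bool map_is_invalidating(const struct notifier_map *map)
{
    return map->invalidate_seq & 1;
}

static void map_insert(struct notifier_map *map, struct notifier *n)
{
    if (map->active_invalidate_ranges && map_is_invalidating(map)) {
        /* Cannot modify the tree mid-invalidation: defer the add
         * rather than retrying, so the algorithm is deterministic. */
        n->next = map->deferred;
        map->deferred = n;
    } else {
        n->in_tree = true;
    }
}

static void map_inv_start(struct notifier_map *map)
{
    map->active_invalidate_ranges++;
    map->invalidate_seq++;          /* even -> odd */
}

static void map_inv_end(struct notifier_map *map)
{
    if (--map->active_invalidate_ranges == 0) {
        map->invalidate_seq++;      /* odd -> even */
        while (map->deferred) {     /* complete the deferred adds */
            struct notifier *n = map->deferred;
            map->deferred = n->next;
            n->in_tree = true;
        }
    }
}
```

The real code does all of this under mmn_mm->lock and pairs the sequence with mmu_interval_read_begin()'s retry loop; the model only shows the defer-then-complete ordering.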
2019 Nov 07
5
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...is invalidating then we are not allowed to change
> + * it. Retrying until invalidation is done is tricky due to the
> + * possibility for live lock, instead defer the add to the unlock so
> + * this algorithm is deterministic.
> + *
> + * In all cases the value for the mrn->mr_invalidate_seq should be
> + * odd, see mmu_range_read_begin()
> + */
> + spin_lock(&mmn_mm->lock);
> + if (mmn_mm->active_invalidate_ranges) {
> + if (mn_itree_is_invalidating(mmn_mm))
> + hlist_add_head(&mrn->deferred_item,
> + &mmn_mm->deferred_list);...
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others