Displaying 13 results from an estimated 13 matches for "inv_end".
2019 Nov 13
2
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...nt mmu_interval_notifier_insert_locked(
> + struct mmu_interval_notifier *mni, struct mm_struct *mm,
> + unsigned long start, unsigned long length,
> + const struct mmu_interval_notifier_ops *ops);
Very inconsistent indentation between these two related functions.
> + /*
> + * The inv_end incorporates a deferred mechanism like
> + * rtnl_unlock(). Adds and removes are queued until the final inv_end
> + * happens, then they are progressed. This arrangement for tree updates
> + * is used to avoid using a blocking lock during
> + * invalidate_range_start.
Nitpick: That...
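For context on the mechanism the quoted comment describes, here is a rough
userspace C model of the deferred scheme (all names are illustrative, not the
kernel's): adds and removes that arrive while any invalidation is in flight
are queued, and only the final inv_end applies them, so
invalidate_range_start never needs a blocking lock to protect the tree.

#include <pthread.h>
#include <stdio.h>

/* Userspace model of the deferred add/remove scheme; names are made up. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int active_invalidate_ranges;
static int deferred_adds;	/* stand-in for the queued hlist */

static void inv_start(void)
{
	pthread_mutex_lock(&lock);
	active_invalidate_ranges++;
	pthread_mutex_unlock(&lock);
}

static void try_add(void)
{
	pthread_mutex_lock(&lock);
	if (active_invalidate_ranges)
		deferred_adds++;	/* tree busy: queue, like rtnl_unlock() */
	else
		printf("add applied immediately\n");
	pthread_mutex_unlock(&lock);
}

static void inv_end(void)
{
	pthread_mutex_lock(&lock);
	if (--active_invalidate_ranges == 0) {
		/* the final inv_end progresses everything queued meanwhile */
		for (; deferred_adds > 0; deferred_adds--)
			printf("deferred add applied\n");
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	inv_start();
	try_add();	/* queued */
	inv_end();	/* applies it */
	return 0;
}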
2019 Nov 23
1
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...ry inconsistent indentation between these two related functions.
>
> clang-format.. The kernel config is set to prefer lining the arguments up
> under the '(' if they all fit within 80 columns; otherwise it does a
> 1-tab continuation indent.
>
>>> + /*
>>> + * The inv_end incorporates a deferred mechanism like
>>> + * rtnl_unlock(). Adds and removes are queued until the final inv_end
>>> + * happens, then they are progressed. This arrangement for tree updates
>>> + * is used to avoid using a blocking lock during
>>> + * invalid...
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...s not like you're short of space in that struct.
>
> Splitting it makes a lot of stuff more complex and unnatural.
>
OK, agreed.
> The ops above could be put in inline wrappers, but they occur only
> in functions already called mn_itree_inv_start_range() and
> mn_itree_inv_end() and mn_itree_is_invalidating().
>
> There is the one 'take the lock' outlier in
> __mmu_range_notifier_insert() though
>
>>> +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
>>> +{
>>> + struct mmu_range_notifier *mrn;
>>> + s...
2019 Nov 07
5
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...s implies mnn->invalidate_seq & 1 == False in this case? If so,
let's say so. I'm probably getting that wrong, too.
> + * - some range on the mm_struct is being invalidated
> + * - the itree is allowed to change
> + *
> + * The latter state avoids some expensive work on inv_end in the common case of
> + * no mrn monitoring the VA.
> + */
> +static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm)
> +{
> + lockdep_assert_held(&mmn_mm->lock);
> + return mmn_mm->invalidate_seq & 1;
> +}
> +
> +static struct mmu_range_notif...
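A minimal userspace sketch of the parity test quoted in this entry
(illustrative names; the real invalidate_seq is protected by the
mmu_notifier_mm spinlock): the low bit of the sequence doubles as a "writer
active" flag, so a single load distinguishes invalidating from quiescent.

#include <stdbool.h>
#include <stdio.h>

static unsigned long invalidate_seq;

/* Odd seq <=> an invalidate_range_start has not yet seen its inv_end. */
static bool is_invalidating(void)
{
	return invalidate_seq & 1;
}

int main(void)
{
	printf("%d\n", is_invalidating());	/* 0: quiescent */
	invalidate_seq |= 1;			/* invalidation begins */
	printf("%d\n", is_invalidating());	/* 1: invalidating */
	invalidate_seq++;			/* final inv_end */
	printf("%d\n", is_invalidating());	/* 0: quiescent again */
	return 0;
}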
2019 Nov 07
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...s case? If so,
> let's say so. I'm probably getting that wrong, too.
Yes that is right, done
>
> > + * - some range on the mm_struct is being invalidated
> > + * - the itree is allowed to change
> > + *
> > + * The latter state avoids some expensive work on inv_end in the common case of
> > + * no mrn monitoring the VA.
> > + */
> > +static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm)
> > +{
> > + lockdep_assert_held(&mmn_mm->lock);
> > + return mmn_mm->invalidate_seq & 1;
> > +}
> >...
2019 Nov 13
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...tifier_ops *ops);
>
> Very inconsistent indentation between these two related functions.
clang-format.. The kernel config is set to prefer lining the arguments up
under the '(' if they all fit within 80 columns; otherwise it does a
1-tab continuation indent.
> > + /*
> > + * The inv_end incorporates a deferred mechanism like
> > + * rtnl_unlock(). Adds and removes are queued until the final inv_end
> > + * happens, then they are progressed. This arrangement for tree updates
> > + * is used to avoid using a blocking lock during
> > + * invalidate_range_st...
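Concretely, the two outcomes look like this (short_insert is a made-up
prototype shown only because it fits in 80 columns; the second mirrors the
declaration quoted earlier in these results):

struct mmu_interval_notifier;
struct mmu_interval_notifier_ops;
struct mm_struct;

/* Fits within 80 columns: arguments line up under the '(' */
int short_insert(struct mmu_interval_notifier *mni,
		 struct mm_struct *mm);

/* Does not fit: falls back to a single tab of continuation indent */
int mmu_interval_notifier_insert_locked(
	struct mmu_interval_notifier *mni, struct mm_struct *mm,
	unsigned long start, unsigned long length,
	const struct mmu_interval_notifier_ops *ops);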
2020 Jan 13
0
[PATCH v6 2/6] mm/mmu_notifier: add mmu_interval_notifier_put()
...erval_notifier_put(struct mmu_interval_notifier *mni);
/**
* mmu_interval_set_seq - Save the invalidation sequence
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index a5ff19cd1bc5..40c837ae8d90 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -129,6 +129,7 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
 {
 	struct mmu_interval_notifier *mni;
 	struct hlist_node *next;
+	struct hlist_head removed_list;
 	spin_lock(&mmn_mm->lock);
 	if (--mmn_mm->active_invalidate_ranges ||
@@ -144,20 +145,35 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)...
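The full hunk is cut off here, but the added removed_list suggests the usual
move-then-process pattern: unlink entries onto a private list while holding
the spinlock, then finish the teardown with the lock dropped. A userspace
sketch of that pattern, with made-up names:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int id;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *deferred;	/* entries queued for removal */

static void process_removals(void)
{
	struct node *removed, *next;

	/* Splice the queued entries onto a private list under the lock... */
	pthread_mutex_lock(&lock);
	removed = deferred;
	deferred = NULL;
	pthread_mutex_unlock(&lock);

	/* ...then run the potentially-sleeping teardown with it dropped. */
	for (; removed; removed = next) {
		next = removed->next;
		printf("releasing notifier %d\n", removed->id);
		free(removed);
	}
}

int main(void)
{
	struct node *n = malloc(sizeof(*n));

	n->id = 1;
	n->next = NULL;
	deferred = n;
	process_removals();
	return 0;
}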
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...; let's say so. I'm probably getting that wrong, too.
Yes (mnn->invalidate_seq & 1) == 0
>
> > + * - some range on the mm_struct is being invalidated
> > + * - the itree is allowed to change
> > + *
> > + * The latter state avoids some expensive work on inv_end in the common case of
> > + * no mrn monitoring the VA.
> > + */
> > +static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm)
> > +{
> > + lockdep_assert_held(&mmn_mm->lock);
> > + return mmn_mm->invalidate_seq & 1;
> > +}
> >...
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...he mm_struct is being invalidated
+ * - the itree is not allowed to change
+ *
+ * And partially excluded:
+ * - mm->active_invalidate_ranges != 0
+ * - some range on the mm_struct is being invalidated
+ * - the itree is allowed to change
+ *
+ * The latter state avoids some expensive work on inv_end in the common case of
+ * no mrn monitoring the VA.
+ */
+static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm)
+{
+ lockdep_assert_held(&mmn_mm->lock);
+ return mmn_mm->invalidate_seq & 1;
+}
+
+static struct mmu_range_notifier *
+mn_itree_inv_start_range(struct mmu_no...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...lidated
+ * - the itree is allowed to change
+ *
+ * Operations on mmu_notifier_mm->invalidate_seq (under spinlock):
+ * seq |= 1 # Begin writing
+ * seq++ # Release the writing state
+ * seq & 1 # True if a writer exists
+ *
+ * The latter state avoids some expensive work on inv_end in the common case of
+ * no mni monitoring the VA.
+ */
+static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm)
+{
+ lockdep_assert_held(&mmn_mm->lock);
+ return mmn_mm->invalidate_seq & 1;
+}
+
+static struct mmu_interval_notifier *
+mn_itree_inv_start_range(struct mmu...
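Worked through once from an even starting value (a trace, not part of the
patch), the three operations quoted in this entry keep the counter even
exactly when no writer exists, and strictly increasing across invalidations:

seq == 4         # even: quiescent
seq |= 1  -> 5   # begin writing; the low bit is now set
seq & 1   -> 1   # readers under the lock see a writer
seq++     -> 6   # release the writing state; even again, and > 4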
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
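A compressed userspace model of that common pattern (illustrative, not taken
from any one of the listed drivers): the invalidate_range_start path checks
the invalidating range against the driver's tracked range and only does work
on overlap.

#include <stdio.h>

struct tracked_range {
	unsigned long start, end;	/* [start, end) */
};

/* Stand-in for the driver-private data structure; one range here. */
static struct tracked_range mirror = { 0x1000, 0x2000 };
static unsigned long notifier_seq;

/* Model of an invalidate_range_start callback: react only on overlap. */
static void range_start(unsigned long start, unsigned long end)
{
	if (start < mirror.end && end > mirror.start) {
		notifier_seq++;	/* mark the mirrored pages stale */
		printf("interested: [%#lx, %#lx) overlaps\n", start, end);
	}
}

int main(void)
{
	range_start(0x0000, 0x0800);	/* disjoint: driver not interested */
	range_start(0x1800, 0x3000);	/* overlaps the mirrored range */
	return 0;
}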
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand-alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for