Displaying 16 results from an estimated 16 matches for "itree".
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
.../* all mmu notifiers registered in this mm are queued in this list */
struct hlist_head list;
+ bool has_interval;
/* to serialize the list modifications and hlist_unhashed */
spinlock_t lock;
+ unsigned long invalidate_seq;
+ unsigned long active_invalidate_ranges;
+ struct rb_root_cached itree;
+ wait_queue_head_t wq;
+ struct hlist_head deferred_list;
};
+/*
+ * This is a collision-retry read-side/write-side 'lock', a lot like a
+ * seqcount, however this allows multiple write-sides to hold it at
+ * once. Conceptually the write side is protecting the values of the PTEs in
+...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
.../srcu.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
@@ -36,10 +37,253 @@ struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
struct mmu_notifier_mm {
/* all mmu notifiers registered in this mm are queued in this list */
struct hlist_head list;
+ bool has_itree;
/* to serialize the list modifications and hlist_unhashed */
spinlock_t lock;
+ unsigned long invalidate_seq;
+ unsigned long active_invalidate_ranges;
+ struct rb_root_cached itree;
+ wait_queue_head_t wq;
+ struct hlist_head deferred_list;
};
+/*
+ * This is a collision-retry read-side/wr...
2020 Jan 13
0
[PATCH v6 3/6] mm/notifier: add mmu_interval_notifier_update()
...fier *mni,
+ unsigned long start, unsigned long last);
/**
* mmu_interval_set_seq - Save the invalidation sequence
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 40c837ae8d90..47ad9cc89aab 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -157,7 +157,14 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
else {
interval_tree_remove(&mni->interval_tree,
&mmn_mm->itree);
- if (mni->ops->release)
+ if (mni->updated_last) {
+ mni->interval_tree.start = mni->updated_start;
+ mni->interval_tree.last = mni-...
2019 Nov 07
5
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...in this mm are queued in this list */
> struct hlist_head list;
> + bool has_interval;
> /* to serialize the list modifications and hlist_unhashed */
> spinlock_t lock;
> + unsigned long invalidate_seq;
> + unsigned long active_invalidate_ranges;
> + struct rb_root_cached itree;
> + wait_queue_head_t wq;
> + struct hlist_head deferred_list;
> };
>
> +/*
> + * This is a collision-retry read-side/write-side 'lock', a lot like a
> + * seqcount, however this allows multiple write-sides to hold it at
> + * once. Conceptually the write side is...
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...invalidate_range_start()/end() in parallel
> > + * on multiple CPUs. This is designed to not reduce concurrency or block
> > + * progress on the mm side.
> > + *
> > + * As a secondary function, holding the full write side also serves to prevent
> > + * writers for the itree, this is an optimization to avoid extra locking
> > + * during invalidate_range_start/end notifiers.
> > + *
> > + * The write side has two states, fully excluded:
> > + * - mm->active_invalidate_ranges != 0
> > + * - mnn->invalidate_seq & 1 == True
> > ...
2019 Nov 07
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...just this once? :)
Haha, sure, why not
> > + * The write side has two states, fully excluded:
> > + * - mm->active_invalidate_ranges != 0
> > + * - mnn->invalidate_seq & 1 == True
> > + * - some range on the mm_struct is being invalidated
> > + * - the itree is not allowed to change
> > + *
> > + * And partially excluded:
> > + * - mm->active_invalidate_ranges != 0
>
> I assume this implies mnn->invalidate_seq & 1 == False in this case? If so,
> let's say so. I'm probably getting that wrong, too.
Yes that...
2020 Jan 14
2
[PATCH v6 4/6] mm/mmu_notifier: add mmu_interval_notifier_find()
...+{
> + struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
> + struct interval_tree_node *node;
> + struct mmu_interval_notifier *mni;
> + struct mmu_interval_notifier *res = NULL;
> +
> + spin_lock(&mmn_mm->lock);
> + node = interval_tree_iter_first(&mmn_mm->itree, start, last);
> + if (node) {
> + mni = container_of(node, struct mmu_interval_notifier,
> + interval_tree);
> + while (true) {
> + if (mni->ops == ops) {
> + res = mni;
> + break;
> + }
> + node = interval_tree_iter_next(&mni->interval_tree...
2020 Jan 13
0
[PATCH v6 2/6] mm/mmu_notifier: add mmu_interval_notifier_put()
...mu_interval_notifier_put(struct mmu_interval_notifier *mni);
/**
* mmu_interval_set_seq - Save the invalidation sequence
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index a5ff19cd1bc5..40c837ae8d90 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -129,6 +129,7 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
{
struct mmu_interval_notifier *mni;
struct hlist_node *next;
+ struct hlist_head removed_list;
spin_lock(&mmn_mm->lock);
if (--mmn_mm->active_invalidate_ranges ||
@@ -144,20 +145,35 @@ static void mn_itree_inv_end(struct mmu_notifier_mm...
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...lidating
>> member variable? It's not like you're short of space in that struct.
>
> Splitting it makes a lot of stuff more complex and unnatural.
>
OK, agreed.
> The ops above could be put in inline wrappers, but they only occur
> only in functions already called mn_itree_inv_start_range() and
> mn_itree_inv_end() and mn_itree_is_invalidating().
>
> There is the one 'take the lock' outlier in
> __mmu_range_notifier_insert() though
>
>>> +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
>>> +{
>>> + stru...
2020 Jan 13
0
[PATCH v6 4/6] mm/mmu_notifier: add mmu_interval_notifier_find()
...ed long start, unsigned long last)
+{
+ struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
+ struct interval_tree_node *node;
+ struct mmu_interval_notifier *mni;
+ struct mmu_interval_notifier *res = NULL;
+
+ spin_lock(&mmn_mm->lock);
+ node = interval_tree_iter_first(&mmn_mm->itree, start, last);
+ if (node) {
+ mni = container_of(node, struct mmu_interval_notifier,
+ interval_tree);
+ while (true) {
+ if (mni->ops == ops) {
+ res = mni;
+ break;
+ }
+ node = interval_tree_iter_next(&mni->interval_tree,
+ start, last);
+ if (!node)...
2020 Jan 15
0
[PATCH v6 4/6] mm/mmu_notifier: add mmu_interval_notifier_find()
...ifier_mm *mmn_mm = mm->mmu_notifier_mm;
>> + struct interval_tree_node *node;
>> + struct mmu_interval_notifier *mni;
>> + struct mmu_interval_notifier *res = NULL;
>> +
>> + spin_lock(&mmn_mm->lock);
>> + node = interval_tree_iter_first(&mmn_mm->itree, start, last);
>> + if (node) {
>> + mni = container_of(node, struct mmu_interval_notifier,
>> + interval_tree);
>> + while (true) {
>> + if (mni->ops == ops) {
>> + res = mni;
>> + break;
>> + }
>> + node = interval_tree_...
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
...ilable on my github:
https://github.com/jgunthorpe/linux/commits/mmu_notifier
v2 changes:
- Add mmu_range_set_seq() to set the mrn sequence number under the driver
lock and make the locking more understandable
- Add some additional comments around locking/READ_ONCE
- Make the WARN_ON flow in mn_itree_invalidate a bit easier to follow
- Fix wrong WARN_ON
Jason Gunthorpe (15):
mm/mmu_notifier: define the header pre-processor parts even if
disabled
mm/mmu_notifier: add an interval tree notifier
mm/hmm: allow hmm_range to be used with a mmu_range_notifier or
hmm_mirror
mm/hmm: defi...
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2019 Nov 07
2
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...read:
>
> * mrn->invalidate_seq is always, yes always, set to an odd value. This ensures
>
> To stress that it is not an error.
I went with this:
/*
* mrn->invalidate_seq must always be set to an odd value via
* mmu_range_set_seq() using the provided cur_seq from
* mn_itree_inv_start_range(). This ensures that if seq does wrap we
* will always clear the below sleep in some reasonable time as
* mmn_mm->invalidate_seq is even in the idle state.
*/
> > > + spin_lock(&mmn_mm->lock);
> > > + if (mmn_mm->active_invalidate_ranges) {
>...
2023 Jun 21
3
[PATCH 00/79] fs: new accessors for inode->i_ctime
...| 2 +-
fs/kernfs/inode.c | 4 +-
fs/libfs.c | 32 +++++------
fs/minix/bitmap.c | 2 +-
fs/minix/dir.c | 6 +--
fs/minix/inode.c | 11 ++--
fs/minix/itree_common.c | 4 +-
fs/minix/namei.c | 6 +--
fs/nfs/callback_proc.c | 2 +-
fs/nfs/fscache.h | 4 +-
fs/nfs/inode.c | 21 ++++----
fs/nfsd/nfsctl.c | 2 +-
f...