Displaying 11 results from an estimated 11 matches for "need_wak".
2019 Nov 23 · 1 · [PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...be condensed into one less line:
>
> The rtnl_unlock can move up a line too. My editor is failing me on
> this.
>
>>> +	/*
>>> +	 * TODO: Since we already have a spinlock above, this would be faster
>>> +	 * as wake_up_q
>>> +	 */
>>> +	if (need_wake)
>>> +		wake_up_all(&mmn_mm->wq);
>>
>> So why is this important enough for a TODO comment, but not important
>> enough to do right away?
>
> Let's drop the comment, I'm not sure wake_up_q is even a function this
> layer should be calling.
Actually,...
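For orientation while reading this thread: the wake_up_q the TODO alludes to is the kernel's wake_q mechanism from include/linux/sched/wake_q.h, which queues tasks while a lock is held and performs the actual wakeups only after the lock is dropped. Below is a minimal sketch of that general pattern; example_obj and example_waiter are hypothetical stand-ins for whatever layer would own the waiters. This is not the mmu_notifier code itself, which, as the reply above notes, has no obvious way to reach the task_structs behind its waitqueue.

#include <linux/list.h>
#include <linux/sched/wake_q.h>
#include <linux/spinlock.h>

struct example_waiter {
	struct task_struct *task;	/* sleeping task to wake */
	struct list_head list;
};

struct example_obj {
	spinlock_t lock;
	struct list_head waiters;	/* example_waiter.list entries */
};

static void example_signal_waiters(struct example_obj *obj)
{
	DEFINE_WAKE_Q(wake_q);
	struct example_waiter *w;

	spin_lock(&obj->lock);
	list_for_each_entry(w, &obj->waiters, list)
		wake_q_add(&wake_q, w->task);	/* queue, don't wake yet */
	spin_unlock(&obj->lock);

	/* Actual wakeups happen with obj->lock released. */
	wake_up_q(&wake_q);
}

The point of the TODO is that waiter collection could piggyback on the spinlock the function already holds, instead of wake_up_all() taking the waitqueue's own lock.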
2019 Nov 13 · 2 · [PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...e final inv_end happens then
 * they are progressed. This arrangement for tree updates is used to
 * avoid using a blocking lock during invalidate_range_start.
 */
> +	/*
> +	 * TODO: Since we already have a spinlock above, this would be faster
> +	 * as wake_up_q
> +	 */
> +	if (need_wake)
> +		wake_up_all(&mmn_mm->wq);
So why is this important enough for a TODO comment, but not important
enough to do right away?
> + * release semantics on the initialization of the mmu_notifier_mm's
> + * contents are provided for unlocked readers. acquire can only b...
2019 Nov 07 · 0 · [PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...; There is the one 'take the lock' outlier in
> __mmu_range_notifier_insert() though
>
>>> +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
>>> +{
>>> +	struct mmu_range_notifier *mrn;
>>> +	struct hlist_node *next;
>>> +	bool need_wake = false;
>>> +
>>> +	spin_lock(&mmn_mm->lock);
>>> +	if (--mmn_mm->active_invalidate_ranges ||
>>> +	    !mn_itree_is_invalidating(mmn_mm)) {
>>> +		spin_unlock(&mmn_mm->lock);
>>> +		return;
>>> +	}
>>> +
>...
2019 Nov 07 · 5 · [PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...>end - 1);
> +	if (!node)
> +		return NULL;
> +	return container_of(node, struct mmu_range_notifier, interval_tree);
> +}
> +
> +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
> +{
> +	struct mmu_range_notifier *mrn;
> +	struct hlist_node *next;
> +	bool need_wake = false;
> +
> +	spin_lock(&mmn_mm->lock);
> +	if (--mmn_mm->active_invalidate_ranges ||
> +	    !mn_itree_is_invalidating(mmn_mm)) {
> +		spin_unlock(&mmn_mm->lock);
> +		return;
> +	}
> +
> +	mmn_mm->invalidate_seq++;
Is this the right place for an...
2019 Nov 07 · 1 · [PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
..._itree_is_invalidating().
There is the one 'take the lock' outlier in
__mmu_range_notifier_insert() though
> > +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
> > +{
> > +	struct mmu_range_notifier *mrn;
> > +	struct hlist_node *next;
> > +	bool need_wake = false;
> > +
> > +	spin_lock(&mmn_mm->lock);
> > +	if (--mmn_mm->active_invalidate_ranges ||
> > +	    !mn_itree_is_invalidating(mmn_mm)) {
> > +		spin_unlock(&mmn_mm->lock);
> > +		return;
> > +	}
> > +
> > +	mmn_mm->inva...
2019 Nov 13 · 0 · [PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...;
> Nitpick: That comment can be condensed into one less line:
The rtnl_unlock can move up a line too. My editor is failing me on
this.
> > +	/*
> > +	 * TODO: Since we already have a spinlock above, this would be faster
> > +	 * as wake_up_q
> > +	 */
> > +	if (need_wake)
> > +		wake_up_all(&mmn_mm->wq);
>
> So why is this important enough for a TODO comment, but not important
> enough to do right away?
Let's drop the comment, I'm not sure wake_up_q is even a function this
layer should be calling.
> > + * release semantics on t...
2019 Nov 07 · 0 · [PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...turn NULL;
> > +	return container_of(node, struct mmu_range_notifier, interval_tree);
> > +}
> > +
> > +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
> > +{
> > +	struct mmu_range_notifier *mrn;
> > +	struct hlist_node *next;
> > +	bool need_wake = false;
> > +
> > +	spin_lock(&mmn_mm->lock);
> > +	if (--mmn_mm->active_invalidate_ranges ||
> > +	    !mn_itree_is_invalidating(mmn_mm)) {
> > +		spin_unlock(&mmn_mm->lock);
> > +		return;
> > +	}
> > +
> > +	mmn_mm->inva...
2019 Oct 28 · 0 · [PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...terval_tree, range->start,
+				       range->end - 1);
+	if (!node)
+		return NULL;
+	return container_of(node, struct mmu_range_notifier, interval_tree);
+}
+
+static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
+{
+	struct mmu_range_notifier *mrn;
+	struct hlist_node *next;
+	bool need_wake = false;
+
+	spin_lock(&mmn_mm->lock);
+	if (--mmn_mm->active_invalidate_ranges ||
+	    !mn_itree_is_invalidating(mmn_mm)) {
+		spin_unlock(&mmn_mm->lock);
+		return;
+	}
+
+	mmn_mm->invalidate_seq++;
+	need_wake = true;
+
+	/*
+	 * The inv_end incorporates a deferred mechanis...
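The excerpt is cut off mid-comment, but the fragments quoted across the other results here (the comment tail in the Nov 13 result, the need_wake/wake_up_all lines in several replies) outline how the function continues: once the last active invalidation makes invalidate_seq even, the deferred tree adds/removes are applied and any waiters are woken. A reconstruction of the remainder is sketched below; the deferred_list/deferred_item names follow the merged version of this code, and the v2 spelling may differ slightly, so treat it as a sketch rather than the literal patch text.

+	 * The inv_end incorporates a deferred mechanism like rtnl_unlock().
+	 * Adds and removes are queued until the final inv_end happens then
+	 * they are progressed. This arrangement for tree updates is used to
+	 * avoid using a blocking lock during invalidate_range_start.
+	 */
+	hlist_for_each_entry_safe(mrn, next, &mmn_mm->deferred_list,
+				  deferred_item) {
+		if (RB_EMPTY_NODE(&mrn->interval_tree.rb))
+			interval_tree_insert(&mrn->interval_tree,
+					     &mmn_mm->itree);
+		else
+			interval_tree_remove(&mrn->interval_tree,
+					     &mmn_mm->itree);
+		hlist_del(&mrn->deferred_item);
+	}
+	spin_unlock(&mmn_mm->lock);
+
+	if (need_wake)
+		wake_up_all(&mmn_mm->wq);
+}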
2019 Nov 12 · 0 · [PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
..._tree, range->start,
+				       range->end - 1);
+	if (!node)
+		return NULL;
+	return container_of(node, struct mmu_interval_notifier, interval_tree);
+}
+
+static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
+{
+	struct mmu_interval_notifier *mni;
+	struct hlist_node *next;
+	bool need_wake = false;
+
+	spin_lock(&mmn_mm->lock);
+	if (--mmn_mm->active_invalidate_ranges ||
+	    !mn_itree_is_invalidating(mmn_mm)) {
+		spin_unlock(&mmn_mm->lock);
+		return;
+	}
+
+	/* Make invalidate_seq even */
+	mmn_mm->invalidate_seq++;
+	need_wake = true;
+
+	/*
+	 * The inv_end...
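The "Make invalidate_seq even" comment is the heart of the scheme: invalidate_seq is odd while any invalidation is in flight and even otherwise, giving readers a seqlock-style collision-retry protocol instead of a blocking lock in invalidate_range_start. In the merged mainline API this surfaces as mmu_interval_read_begin()/mmu_interval_read_retry(); a sketch of the read side follows, with my_ctx and its page_table_lock as hypothetical driver state.

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

/* Hypothetical driver context embedding the notifier. */
struct my_ctx {
	struct mmu_interval_notifier notifier;
	struct mutex page_table_lock;
};

static int my_fault_range(struct my_ctx *ctx)
{
	unsigned long seq;

again:
	seq = mmu_interval_read_begin(&ctx->notifier);

	/* ... fault/snapshot the pages outside any driver lock ... */

	mutex_lock(&ctx->page_table_lock);
	if (mmu_interval_read_retry(&ctx->notifier, seq)) {
		/* An invalidation raced with us; discard and retry. */
		mutex_unlock(&ctx->page_table_lock);
		goto again;
	}
	/* ... program device mappings; known valid under the lock ... */
	mutex_unlock(&ctx->page_table_lock);
	return 0;
}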
2019 Nov 12 · 20 · [PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg@mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where they only use
invalidate_range_start/end and immediately check the invalidating range
against some driver data structure to tell if the driver is interested.
Half of them use an interval_tree, the others
2019 Oct 28 · 32 · [PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg@mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) use a common pattern where they only use
invalidate_range_start/end and immediately check the invalidating range
against some driver data structure to tell if the driver is interested.
Half of them use an interval_tree, the others
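Concretely, the consolidation the cover letter describes means a driver no longer hand-rolls its own interval tree and start/end bookkeeping: it embeds a notifier, registers it over the range it cares about, and supplies one invalidate callback. The sketch below is written against the merged mainline API (mmu_interval_notifier_insert() and struct mmu_interval_notifier_ops); my_ctx and its page_table_lock are the same hypothetical driver state as in the read-side sketch above.

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_ctx *ctx = container_of(mni, struct my_ctx, notifier);

	/* Non-blockable (e.g. OOM) contexts may not take sleeping locks. */
	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&ctx->page_table_lock);
	mmu_interval_set_seq(mni, cur_seq);	/* makes readers retry */
	/* ... tear down device mappings covering range->start..range->end ... */
	mutex_unlock(&ctx->page_table_lock);
	return true;
}

static const struct mmu_interval_notifier_ops my_ops = {
	.invalidate = my_invalidate,
};

static int my_register(struct my_ctx *ctx, unsigned long start,
		       unsigned long length)
{
	/* Only invalidations overlapping [start, start + length) are delivered. */
	return mmu_interval_notifier_insert(&ctx->notifier, current->mm,
					    start, length, &my_ops);
}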