search for: mmu_notifier_unregister

Displaying 18 results from an estimated 59 matches for "mmu_notifier_unregister".

2019 Jul 23
1
[PATCH 2/6] vhost: validate MMU notifier registration
...if we get a signal. Userspace could retry in theory but it does not: this is userspace abi breakage since it used to only fail on invalid input.

> @@ -960,7 +962,11 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
>  	}
>  	if (dev->mm) {
>  #if VHOST_ARCH_CAN_ACCEL_UACCESS
> -		mmu_notifier_unregister(&dev->mmu_notifier, dev->mm);
> +		if (dev->has_notifier) {
> +			mmu_notifier_unregister(&dev->mmu_notifier,
> +						dev->mm);
> +			dev->has_notifier = false;
> +		}
>  #endif
>  		mmput(dev->mm);
>  	}
> @@ -2065,8 +2071,10 @@ static long...
2019 Jul 23
0
[PATCH 2/6] vhost: validate MMU notifier registration
...read_list);
@@ -731,6 +732,7 @@ long vhost_dev_set_owner(struct vhost_dev *dev)
 	if (err)
 		goto err_mmu_notifier;
 #endif
+	dev->has_notifier = true;

 	return 0;

@@ -960,7 +962,11 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 	}
 	if (dev->mm) {
 #if VHOST_ARCH_CAN_ACCEL_UACCESS
-		mmu_notifier_unregister(&dev->mmu_notifier, dev->mm);
+		if (dev->has_notifier) {
+			mmu_notifier_unregister(&dev->mmu_notifier,
+						dev->mm);
+			dev->has_notifier = false;
+		}
 #endif
 		mmput(dev->mm);
 	}
@@ -2065,8 +2071,10 @@ static long vhost_vring_set_num_addr(struct vhost_dev *d,...
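Pieced together, the cleanup path after this patch looks roughly like the sketch below, reconstructed from the quoted diff (unrelated teardown elided):

void vhost_dev_cleanup(struct vhost_dev *dev)
{
	/* ... queue and worker teardown elided ... */
	if (dev->mm) {
#if VHOST_ARCH_CAN_ACCEL_UACCESS
		/* Only unregister if vhost_dev_set_owner() actually
		 * registered the notifier; dev->has_notifier is the
		 * flag this patch introduces. */
		if (dev->has_notifier) {
			mmu_notifier_unregister(&dev->mmu_notifier,
						dev->mm);
			dev->has_notifier = false;
		}
#endif
		mmput(dev->mm);
	}
	dev->mm = NULL;
}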
2019 Jul 23
10
[PATCH 0/6] Fixes for meta data acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review.

Jason Wang (6):
  vhost: don't set uaddr for invalid address
  vhost: validate MMU notifier registration
  vhost: fix vhost map leak
  vhost: reset invalidate_count in vhost_set_vring_num_addr()
  vhost: mark dirty pages during map uninit
  vhost: don't do synchronize_rcu() in...
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...e it can
> do things like setting dirty pages and unmapping page. It looks to me
> seqlock doesn't provide things like this.

The seqlock is usually used to prevent a 2nd thread from accessing the
VA while it is being changed by the mm, ie you use something seqlocky
instead of the ugly mmu_notifier_unregister/register cycle.

You are supposed to use something simple like a spinlock or mutex
inside the invalidate_range_start to serialize teardown of the SPTEs
with their accessors.

> write_seqcount_begin()
>
> map = vq->map[X]
>
> write or read through map->addr directly
> ...
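A minimal sketch of that suggestion, assuming a dev->map_lock that the memory accessors also take (illustrative names, not the actual vhost code):

static int vhost_invalidate_range_start(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	struct vhost_dev *dev = container_of(mn, struct vhost_dev,
					     mmu_notifier);

	/* Serializes teardown with the accessors: anyone reading or
	 * writing through the cached maps holds the same lock. */
	spin_lock(&dev->map_lock);
	/* invalidate cached maps overlapping
	 * [range->start, range->end) here */
	spin_unlock(&dev->map_lock);

	return 0;
}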
2019 Aug 01
3
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...dirty pages and unmapping page. It looks to me
> > > seqlock doesn't provide things like this.
> > The seqlock is usually used to prevent a 2nd thread from accessing the
> > VA while it is being changed by the mm. ie you use something seqlocky
> > instead of the ugly mmu_notifier_unregister/register cycle.
>
> Yes, so we have two mappings:
>
> [1] vring address to VA
> [2] VA to PA
>
> And have several readers and writers
>
> 1) set_vring_num_addr(): writer of both [1] and [2]
> 2) MMU notifier: reader of [1] writer of [2]
> 3) GUP: reader of...
2019 Jul 23
1
[PATCH 4/6] vhost: reset invalidate_count in vhost_set_vring_num_addr()
On Tue, Jul 23, 2019 at 03:57:16AM -0400, Jason Wang wrote:
> vhost_set_vring_num_addr() could be called in the middle of
> invalidate_range_start() and invalidate_range_end(). If we don't reset
> invalidate_count after unregistering the MMU notifier, invalidate_count
> will run out of sync (e.g. never reach zero). This will in fact disable
> the fast accessor path.
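Roughly, the fix amounts to something like the sketch below; the per-virtqueue invalidate_count field follows the series, the helper name is made up:

static void vhost_reset_invalidate_counts(struct vhost_dev *d)
{
	int i;

	/* After mmu_notifier_unregister(), a pending
	 * invalidate_range_start() will never be paired with an
	 * invalidate_range_end(), so the count must be cleared
	 * before the notifier is registered again. */
	for (i = 0; i < d->nvqs; i++)
		d->vqs[i]->invalidate_count = 0;
}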
2019 Aug 02
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...and unmapping page. It looks to me
>>>> seqlock doesn't provide things like this.
>>> The seqlock is usually used to prevent a 2nd thread from accessing the
>>> VA while it is being changed by the mm. ie you use something seqlocky
>>> instead of the ugly mmu_notifier_unregister/register cycle.
>>
>> Yes, so we have two mappings:
>>
>> [1] vring address to VA
>> [2] VA to PA
>>
>> And have several readers and writers
>>
>> 1) set_vring_num_addr(): writer of both [1] and [2]
>> 2) MMU notifier: reader of [1] writer...
2019 Oct 29
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...list))
> +		return;

This seems to duplicate the conditions in __mmu_notifier_release. See my comments below, I think one of them is wrong. I suspect this one, because __mmu_notifier_release follows the same pattern as the other notifiers.

> +
>  	/*
>  	 * SRCU here will block mmu_notifier_unregister until
>  	 * ->release returns.
>  	 */
>  	id = srcu_read_lock(&srcu);
> -	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
> +	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist)
>  		/*
>  		 * If ->release runs before mmu_notifi...
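For reference, a simplified sketch of the SRCU pattern the quoted comment describes (mainline __mmu_notifier_release also unhooks the list entries under a spinlock, elided here; srcu is the file-scope SRCU domain of mm/mmu_notifier.c):

static void notifier_release_all(struct mm_struct *mm)
{
	struct mmu_notifier *mn;
	int id;

	/* Readers hold srcu_read_lock(); mmu_notifier_unregister()
	 * runs synchronize_srcu(), so it cannot return while a
	 * ->release callback is still executing. */
	id = srcu_read_lock(&srcu);
	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
		if (mn->ops->release)
			mn->ops->release(mn, mm);
	srcu_read_unlock(&srcu, id);
}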
2019 Aug 01
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...things like setting dirty pages and unmapping page. It looks to me
>> seqlock doesn't provide things like this.
> The seqlock is usually used to prevent a 2nd thread from accessing the
> VA while it is being changed by the mm. ie you use something seqlocky
> instead of the ugly mmu_notifier_unregister/register cycle.

Yes, so we have two mappings:

[1] vring address to VA
[2] VA to PA

And have several readers and writers

1) set_vring_num_addr(): writer of both [1] and [2]
2) MMU notifier: reader of [1] writer of [2]
3) GUP: reader of [1] writer of [2]
4) memory accessors: reader of [1] and [2...
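From the side of accessor 4), a seqcount-based read could look like the sketch below; vq->seq and the map layout are assumptions for illustration:

static void *vhost_map_get_addr(struct vhost_virtqueue *vq, int x)
{
	unsigned int seq;
	void *addr;

	/* Retry if a writer of [1] or [2] (set_vring_num_addr(),
	 * the MMU notifier, or GUP) raced with us. */
	do {
		seq = read_seqcount_begin(&vq->seq);
		addr = vq->map[x] ? vq->map[x]->addr : NULL;
	} while (read_seqcount_retry(&vq->seq, seq));

	return addr;
}

Note this only covers reading the mapping itself; as the thread discusses, accessors that set dirty pages or write through map->addr need more than a retry loop.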
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...range_notifier_ops *ops;
+	struct hlist_node deferred_item;
+	unsigned long invalidate_seq;
+	struct mm_struct *mm;
+};
+
 #ifdef CONFIG_MMU_NOTIFIER

 #ifdef CONFIG_LOCKDEP
@@ -263,6 +289,78 @@ extern int __mmu_notifier_register(struct mmu_notifier *mn,
 				   struct mm_struct *mm);
 extern void mmu_notifier_unregister(struct mmu_notifier *mn,
 				    struct mm_struct *mm);
+
+unsigned long mmu_range_read_begin(struct mmu_range_notifier *mrn);
+int mmu_range_notifier_insert(struct mmu_range_notifier *mrn,
+			      unsigned long start, unsigned long length,
+			      struct mm_struct *mm);
+int mmu_range_notifie...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...erval_notifier_ops *ops;
+	struct mm_struct *mm;
+	struct hlist_node deferred_item;
+	unsigned long invalidate_seq;
+};
+
 #ifdef CONFIG_MMU_NOTIFIER

 #ifdef CONFIG_LOCKDEP
@@ -263,6 +289,81 @@ extern int __mmu_notifier_register(struct mmu_notifier *mn,
 				   struct mm_struct *mm);
 extern void mmu_notifier_unregister(struct mmu_notifier *mn,
 				    struct mm_struct *mm);
+
+unsigned long mmu_interval_read_begin(struct mmu_interval_notifier *mni);
+int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
+				 struct mm_struct *mm, unsigned long start,
+				 unsigned long length,
+				 const struct...
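A usage sketch against the v3 API declared above; struct my_range and the attach helper are assumptions, not part of the patch:

struct my_range {
	struct mmu_interval_notifier mni;
	unsigned long seq;
};

static int my_range_attach(struct my_range *r, struct mm_struct *mm,
			   unsigned long start, unsigned long length,
			   const struct mmu_interval_notifier_ops *ops)
{
	int ret;

	ret = mmu_interval_notifier_insert(&r->mni, mm, start, length, ops);
	if (ret)
		return ret;

	/* Collision-retry scheme: sample the invalidation sequence
	 * before using the range, then recheck it under the driver
	 * lock to detect a concurrent invalidation. */
	r->seq = mmu_interval_read_begin(&r->mni);
	return 0;
}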
2019 Jan 04
1
[RFC PATCH V3 5/5] vhost: access vq metadata through kernel virtual address
...ue);
> +}
> +
>  void vhost_dev_cleanup(struct vhost_dev *dev)
>  {
>  	int i;
> @@ -661,8 +804,12 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
>  		kthread_stop(dev->worker);
>  		dev->worker = NULL;
>  	}
> -	if (dev->mm)
> +	if (dev->mm) {
> +		mmu_notifier_unregister(&dev->mmu_notifier, dev->mm);
>  		mmput(dev->mm);
> +	}
> +	for (i = 0; i < dev->nvqs; i++)
> +		vhost_clean_vmaps(dev->vqs[i]);
>  	dev->mm = NULL;
>  }
>  EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
> @@ -891,6 +1038,16 @@ static inline void __user *__v...
2019 Nov 07
2
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...are visible before
>  * any checks we do on mmn_mm below as otherwise CPU might re-order write done
>  * by another CPU core to mm->mmu_notifier_mm structure fields after the read
>  * below.
>  */

This comment made it, just at the store side:

	/*
	 * Serialize the update against mmu_notifier_unregister. A
	 * side note: mmu_notifier_release can't run concurrently with
	 * us because we hold the mm_users pin (either implicitly as
	 * current->mm or explicitly with get_task_mm() or similar).
	 * We can't race against any other mmu notifier method either
	 * thanks to mm_take_all_locks()....
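Illustratively, the publish/consume pairing being discussed is (a sketch, not the exact kernel code):

	/* registration side, under mm_take_all_locks(): publish only
	 * after all fields of mmn_mm are initialized */
	smp_store_release(&mm->mmu_notifier_mm, mmn_mm);

	/* reader side: order all field reads after the pointer read */
	mmn_mm = smp_load_acquire(&mm->mmu_notifier_mm);
	if (!mmn_mm)
		return;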
2019 Sep 06
1
[PATCH 1/2] Revert "vhost: access vq metadata through kernel virtual address"
...;
> -
>  void vhost_dev_cleanup(struct vhost_dev *dev)
>  {
>  	int i;
> @@ -957,16 +684,8 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
>  		kthread_stop(dev->worker);
>  		dev->worker = NULL;
>  	}
> -	if (dev->mm) {
> -#if VHOST_ARCH_CAN_ACCEL_UACCESS
> -		mmu_notifier_unregister(&dev->mmu_notifier, dev->mm);
> -#endif
> +	if (dev->mm)
>  		mmput(dev->mm);
> -	}
> -#if VHOST_ARCH_CAN_ACCEL_UACCESS
> -	for (i = 0; i < dev->nvqs; i++)
> -		vhost_uninit_vq_maps(dev->vqs[i]);
> -#endif
>  	dev->mm = NULL;
>  }
>  EXPO...
2018 Dec 29
0
[RFC PATCH V3 5/5] vhost: access vq metadata through kernel virtual address
...ng, used,
+			       vhost_get_used_size(vq, vq->num), true);
+}
+
 void vhost_dev_cleanup(struct vhost_dev *dev)
 {
 	int i;
@@ -661,8 +804,12 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 		kthread_stop(dev->worker);
 		dev->worker = NULL;
 	}
-	if (dev->mm)
+	if (dev->mm) {
+		mmu_notifier_unregister(&dev->mmu_notifier, dev->mm);
 		mmput(dev->mm);
+	}
+	for (i = 0; i < dev->nvqs; i++)
+		vhost_clean_vmaps(dev->vqs[i]);
 	dev->mm = NULL;
 }
 EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
@@ -891,6 +1038,16 @@ static inline void __user *__vhost_get_user(struct vhost_virtqueue *vq...
2019 Oct 28
0
[PATCH v2 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled
...
-};
-
 #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)

-struct mmu_notifier_range {
-	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long start;
-	unsigned long end;
-	unsigned flags;
-	enum mmu_notifier_event event;
-};
-
 struct mmu_notifier_ops {
 	/*
 	 * Called either by mmu_notifier_unregister or when the mm is
@@ -249,6 +222,21 @@ struct mmu_notifier {
 	unsigned int users;
 };

+#ifdef CONFIG_MMU_NOTIFIER
+
+#ifdef CONFIG_LOCKDEP
+extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
+#endif
+
+struct mmu_notifier_range {
+	struct vm_area_struct *vma;
+	struct mm_struct...
2019 Aug 07
11
[PATCH V3 00/10] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review.

Changes from V2:
- use the seqlock helper to synchronize MMU notifier with vhost worker

Changes from V1:
- try not to use RCU to synchronize MMU notifier with vhost worker
- set dirty pages after no readers
- return -EAGAIN only when we find the range is overlapped with metadata

Jason Wang (9):
2019 Nov 12
0
[PATCH v3 01/14] mm/mmu_notifier: define the header pre-processor parts even if disabled
...
-};
-
 #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)

-struct mmu_notifier_range {
-	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long start;
-	unsigned long end;
-	unsigned flags;
-	enum mmu_notifier_event event;
-};
-
 struct mmu_notifier_ops {
 	/*
 	 * Called either by mmu_notifier_unregister or when the mm is
@@ -249,6 +222,21 @@ struct mmu_notifier {
 	unsigned int users;
 };

+#ifdef CONFIG_MMU_NOTIFIER
+
+#ifdef CONFIG_LOCKDEP
+extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
+#endif
+
+struct mmu_notifier_range {
+	struct vm_area_struct *vma;
+	struct mm_struct...