search for: sptes

Displaying 20 results from an estimated 36 matches for "sptes".

2019 Nov 07
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...28,26 @@ struct mmu_notifier { > > unsigned int users; > > }; > > > > That should also be moved down, next to the new structs. Which this? > > +/** > > + * struct mmu_range_notifier_ops > > + * @invalidate: Upon return the caller must stop using any SPTEs within this > > + * range, this function can sleep. Return false if blocking was > > + * required but range is non-blocking > > + */ > > How about this (I'm not sure I fully understand the return value, though): > > /** > * struct mm...
2019 Nov 07
5
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...n about classic MMNs. It would be nice if it were clearer that that documentation is not relevant to MNRs. Actually, this is another reason that a separate header file would be nice. > +/** > + * struct mmu_range_notifier_ops > + * @invalidate: Upon return the caller must stop using any SPTEs within this > + * range, this function can sleep. Return false if blocking was > + * required but range is non-blocking > + */ How about this (I'm not sure I fully understand the return value, though): /** * struct mmu_range_notifier_ops * @invalidate: Upo...
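For reference, the wording the series eventually settled on (v3, quoted in a later result on this page) resolves the return-value question raised here. Roughly, with the struct layout matching the truncated snippets above (the final parameter name is cut off in the snippets; mainline calls it cur_seq):

    /**
     * struct mmu_interval_notifier_ops
     * @invalidate: Upon return the caller must stop using any SPTEs within this
     *              range. This function can sleep. Return false only if sleeping
     *              was required but mmu_notifier_range_blockable(range) is false.
     */
    struct mmu_interval_notifier_ops {
            bool (*invalidate)(struct mmu_interval_notifier *mni,
                               const struct mmu_notifier_range *range,
                               unsigned long cur_seq);
    };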
2019 Nov 08
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
On Thu, Nov 07, 2019 at 12:53:56PM -0800, John Hubbard wrote: > > > > +/** > > > > + * struct mmu_range_notifier_ops > > > > + * @invalidate: Upon return the caller must stop using any SPTEs within this > > > > + * range, this function can sleep. Return false if blocking was > > > > + * required but range is non-blocking > > > > + */ > > > > > > How about this (I'm not sure I fully understand the retur...
2019 Nov 07
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...FIER_RANGE_BLOCKABLE, above. Trying to put all the new range notifier stuff in one place. But maybe not, if these are really not as separate as I thought. > >>> +/** >>> + * struct mmu_range_notifier_ops >>> + * @invalidate: Upon return the caller must stop using any SPTEs within this >>> + * range, this function can sleep. Return false if blocking was >>> + * required but range is non-blocking >>> + */ >> >> How about this (I'm not sure I fully understand the return value, though): >> >>...
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...event a 2nd thread from accessing the VA while it is being changed by the mm. ie you use something seqlocky instead of the ugly mmu_notifier_unregister/register cycle. You are supposed to use something simple like a spinlock or mutex inside the invalidate_range_start to serialize tear down of the SPTEs with their accessors. > write_seqcount_begin() > > map = vq->map[X] > > write or read through map->addr directly > > write_seqcount_end() > > > There's no rmb() in write_seqcount_begin(), so map could be read before > write_seqcount_begin(), but it l...
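The scheme under discussion is the standard seqcount retry pattern. A minimal sketch follows; only vq->map comes from the thread, while the lock, seqcount, and helper names are invented for illustration, and it assumes the map memory itself stays valid until the invalidate callback has finished (the lifetime question being debated):

    #include <linux/seqlock.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct vq_map { void *addr; };
    struct vq {
            spinlock_t     map_lock;   /* serializes teardown */
            seqcount_t     map_seq;    /* lets the fast path detect teardown */
            struct vq_map *map[1];
    };

    /* Teardown, run from invalidate_range_start(): */
    static void vq_unmap(struct vq *vq, int x)
    {
            spin_lock(&vq->map_lock);
            write_seqcount_begin(&vq->map_seq);
            vq->map[x] = NULL;         /* readers fall back to the uaccess path */
            write_seqcount_end(&vq->map_seq);
            spin_unlock(&vq->map_lock);
    }

    /* Fast-path read: retry if a teardown raced with us. */
    static bool vq_read_u32(struct vq *vq, int x, size_t off, u32 *val)
    {
            unsigned int seq;
            struct vq_map *map;

            do {
                    seq = read_seqcount_begin(&vq->map_seq);
                    map = vq->map[x];
                    if (!map)
                            return false;  /* caller takes the slow path */
                    *val = *(u32 *)((char *)map->addr + off);
            } while (read_seqcount_retry(&vq->map_seq, seq));
            return true;
    }

The read_seqcount_begin()/read_seqcount_retry() pair supplies the memory barriers whose absence in write_seqcount_begin() alone is being questioned above.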
2020 Jan 13
0
[PATCH v6 1/6] mm/mmu_notifier: add mmu_interval_notifier_insert_safe()
...present in the interval tree yet. - * The caller must use the normal interval notifier read flow via + * Upon return, the mmu_interval_notifier may not be present in the interval + * tree yet. The caller must use the normal interval notifier read flow via * mmu_interval_read_begin() to establish SPTEs for this range. */ int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni, @@ -969,6 +970,42 @@ int mmu_interval_notifier_insert_locked( } EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert_locked); +/** + * mmu_interval_notifier_insert_safe - Insert an interval notifier + * @mni: In...
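The "normal interval notifier read flow" the comment refers to is a begin/retry loop. A minimal sketch, where mmu_interval_read_begin() and mmu_interval_read_retry() are the real helpers but my_data, d->lock, and establish_sptes() are hypothetical driver details:

    static void map_range(struct my_data *d, struct mmu_interval_notifier *mni)
    {
            unsigned long seq;

    again:
            seq = mmu_interval_read_begin(mni);

            /* fault/pin the pages here, e.g. via hmm_range_fault() or
             * get_user_pages(), outside any driver lock */

            mutex_lock(&d->lock);
            if (mmu_interval_read_retry(mni, seq)) {
                    mutex_unlock(&d->lock);  /* an invalidate ran; start over */
                    goto again;
            }
            establish_sptes(d);      /* hypothetical: program the device PTEs */
            mutex_unlock(&d->lock);
    }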
2019 Aug 01
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...since the critical section was increased, the worst case is to wait for guest memory to be swapped in, which could be even slower than synchronize_rcu(). > > You are supposed to use something simple like a spinlock or mutex > inside the invalidate_range_start to serialize tear down of the SPTEs > with their accessors. Technically yes, but we probably can't afford that for the vhost fast path; the atomics eliminate almost all the performance improvement brought by this patch on a machine without SMAP. > >> write_seqcount_begin() >> >> map = vq->map[X] >>...
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42 PM, Michael S. Tsirkin wrote: >>>>> Which means after we fix vhost to add the flush_dcache_page after >>>>> kunmap, Parisc will get a double hit (but it also means Parisc >>>>> was the only one of those archs that needed explicit cache flushes, >>>>> where vhost worked correctly so far.. so it kind of proves your
2006 Jul 01
3
Page fault is 4 times faster with XI shadow mechanism
Hello Han, I am pleased you approve of the design and implementation of the XI shadow mechanism. And I appreciate the time and care you've taken in reviewing this substantial body of new code. You asked about performance statistics. With the current XI patch, we are seeing the following: - page fault times for XI are about 4 times faster than non-XI: 10.56 (non-XI) vs 2.43
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...NOTIFY_PROTECTION_PAGE, MMU_NOTIFY_SOFT_DIRTY, + MMU_NOTIFY_RELEASE, }; #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0) @@ -222,6 +228,26 @@ struct mmu_notifier { unsigned int users; }; +/** + * struct mmu_range_notifier_ops + * @invalidate: Upon return the caller must stop using any SPTEs within this + * range, this function can sleep. Return false if blocking was + * required but range is non-blocking + */ +struct mmu_range_notifier_ops { + bool (*invalidate)(struct mmu_range_notifier *mrn, + const struct mmu_notifier_range *range, + unsigned lon...
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...of what happens if two threads call an mmu notifier invalidate simultaneously. The first mmu notifier could call set_page_dirty and then proceed in try_to_free_buffers or page_mkclean, and the concurrent mmu notifier that arrives second must not call set_page_dirty a second time. With KVM spte mappings and vhost mappings you would call set_page_dirty (if you invoked gup with FOLL_WRITE) only when effectively tearing down any secondary mapping (you've got pointers in both cases for the mapping). So there's no way to risk a double set_page_dirty from concurrent mmu notifier invalid...
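In code, the invariant being described is that the page is dirtied exactly once, at teardown of the secondary mapping, never from racing invalidate callbacks. A hedged sketch, with all struct and field names invented for illustration:

    #include <linux/mm.h>

    struct secondary_map {
            struct page *page;
            bool         wrote;       /* gup was done with FOLL_WRITE */
    };

    static void teardown_secondary_mapping(struct secondary_map *map)
    {
            if (!map->page)
                    return;           /* a second invalidate finds nothing */
            if (map->wrote)
                    set_page_dirty(map->page);
            put_page(map->page);
            map->page = NULL;
    }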
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...IFY_PROTECTION_PAGE, MMU_NOTIFY_SOFT_DIRTY, + MMU_NOTIFY_RELEASE, }; #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0) @@ -222,6 +228,26 @@ struct mmu_notifier { unsigned int users; }; +/** + * struct mmu_interval_notifier_ops + * @invalidate: Upon return the caller must stop using any SPTEs within this + * range. This function can sleep. Return false only if sleeping + * was required but mmu_notifier_range_blockable(range) is false. + */ +struct mmu_interval_notifier_ops { + bool (*invalidate)(struct mmu_interval_notifier *mni, + const struct mmu_notifie...
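A driver-side implementation of this contract typically looks like the following sketch. mmu_notifier_range_blockable() and mmu_interval_set_seq() are the real helpers; my_data, its mutex, and zap_my_sptes() are hypothetical:

    struct my_data {
            struct mmu_interval_notifier notifier;
            struct mutex lock;
    };

    static bool my_invalidate(struct mmu_interval_notifier *mni,
                              const struct mmu_notifier_range *range,
                              unsigned long cur_seq)
    {
            struct my_data *d = container_of(mni, struct my_data, notifier);

            if (mmu_notifier_range_blockable(range))
                    mutex_lock(&d->lock);
            else if (!mutex_trylock(&d->lock))
                    return false;  /* sleeping needed, range is non-blockable */

            mmu_interval_set_seq(mni, cur_seq);        /* readers must retry  */
            zap_my_sptes(d, range->start, range->end); /* hypothetical zap    */
            mutex_unlock(&d->lock);
            return true;
    }

    static const struct mmu_interval_notifier_ops my_ops = {
            .invalidate = my_invalidate,
    };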
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...ids the > > _start callback basically, but to avoid the _start callback safely, it > > has to be called in between the ptep_clear_flush and the set_pte_at > > whenever the pfn changes like during a COW. So it cannot be coalesced > > in a single TLB flush that invalidates all sptes in a range like we > > prefer for performance reasons for example in KVM. It also cannot > > sleep. > > > > In short ->invalidate_range must be really fast (it shouldn't require > > to send IPI to all other CPUs like KVM may require during an > > invalida...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
.../_end, is that ->invalidate_range avoids the _start callback basically, but to avoid the _start callback safely, it has to be called in between the ptep_clear_flush and the set_pte_at whenever the pfn changes like during a COW. So it cannot be coalesced in a single TLB flush that invalidates all sptes in a range like we prefer for performance reasons for example in KVM. It also cannot sleep. In short ->invalidate_range must be really fast (it shouldn't require to send IPI to all other CPUs like KVM may require during an invalidate_range_start) and it must not sleep, in order to prefer it...
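The ordering constraint described here, written out for a COW-style pte update (mainline wraps the first two steps as ptep_clear_flush_notify()); none of this may sleep:

    old = ptep_clear_flush(vma, address, ptep);         /* old pfn unmapped  */
    mmu_notifier_invalidate_range(mm, address,          /* secondary TLBs    */
                                  address + PAGE_SIZE); /* drop old pfn now  */
    set_pte_at(mm, address, ptep, new_pte);             /* new pfn installed */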
2020 Jan 13
0
[PATCH v6 3/6] mm/notifier: add mmu_interval_notifier_update()
...ates the range being monitored and is safe to call from + * the invalidate() callback function. + * Upon return, the mmu_interval_notifier range may not be updated in the + * interval tree yet. The caller must use the normal interval notifier read + * flow via mmu_interval_read_begin() to establish SPTEs for this range. + */ +void mmu_interval_notifier_update(struct mmu_interval_notifier *mni, + unsigned long start, unsigned long last) +{ + struct mm_struct *mm = mni->mm; + struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm; + unsigned long seq = 0; + + if (WARN_ON(start >= last)) +...
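A hypothetical caller of this proposed helper (the v6 series was not merged in this form) would pair the update with the usual begin/retry read flow, since the interval tree may not reflect the new bounds on return:

    mmu_interval_notifier_update(mni, new_start, new_last);

    seq = mmu_interval_read_begin(mni);
    /* ... fault pages and program SPTEs for [new_start, new_last] ... */
    if (mmu_interval_read_retry(mni, seq)) {
            /* retry, as in the read-flow sketch earlier on this page */
    }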
2020 Jul 22
34
[RFC PATCH v1 00/34] VM introspection - EPT Views and Virtualization Exceptions
...trospection: extend the access rights database with EPT view info KVM: introspection: extend KVMI_VM_SET_PAGE_ACCESS with EPT view info KVM: introspection: clean non-default EPTs on unhook KVM: x86: mmu: fix: update present_mask in spte_read_protect() KVM: vmx: trigger vm-exits for mmio sptes by default when #VE is enabled KVM: x86: svm: set .clear_page() KVM: x86: add .set_ve_info() KVM: x86: add .disable_ve() KVM: x86: page_track: add support for suppress #VE bit KVM: vmx: make use of EPTP_INDEX in vmx_handle_exit() KVM: vmx: make use of EPTP_INDEX in vmx_set_ept_view(...