search for: gigapag

Displaying 10 results from an estimated 15 matches for "gigapag".

2019 Mar 08
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...g through huge TLB, do > you mean we can do that in the future (e.g by mapping more userspace > pages to kernel) or it can be done by this series (only about three 4K > pages were vmapped per virtqueue)? When I answered about the advantages of mmu notifier and I mentioned guaranteed 2m/gigapages where available, I overlooked the detail you were using vmap instead of kmap. So with vmap you're actually doing the opposite, it slows down the access because it will always use a 4k TLB even if QEMU runs on THP or gigapages hugetlbfs. If there's just one page (or a few pages) in each v...
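The excerpt above is the key tradeoff in this thread: a vmap'ed alias of userspace pages is always backed by 4k kernel PTEs, so every access through it uses 4k TLB entries even when the guest memory sits on THP or hugetlbfs. A minimal sketch of what "about three 4K pages vmapped per virtqueue" could look like, assuming the current get_user_pages_fast()/vmap() interfaces and hypothetical names (vq_meta_map, VQ_META_PAGES); this is not code from the series:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    #define VQ_META_PAGES 3

    /* Pin the userspace metadata pages and alias them in vmalloc space.
     * Every later dereference of the returned pointer goes through 4k
     * kernel TLB entries, however the guest memory itself is backed. */
    static void *vq_meta_map(unsigned long uaddr, struct page **pages)
    {
            int got = get_user_pages_fast(uaddr & PAGE_MASK, VQ_META_PAGES,
                                          FOLL_WRITE, pages);

            if (got != VQ_META_PAGES) {
                    while (got > 0)
                            put_page(pages[--got]);
                    return NULL;
            }
            return vmap(pages, VQ_META_PAGES, VM_MAP, PAGE_KERNEL);
    }

A kmap()-style access, by contrast, reuses the kernel's direct mapping of the pinned pages, which is where the 2M/1G TLB entries Andrea mentions come from.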
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...we can do that in the future (e.g by mapping more userspace > > > pages to kernel) or it can be done by this series (only about three 4K > > > pages were vmapped per virtqueue)? > > When I answered about the advantages of mmu notifier and I mentioned > > guaranteed 2m/gigapages where available, I overlooked the detail you > > were using vmap instead of kmap. So with vmap you're actually doing > > the opposite, it slows down the access because it will always use a 4k > > TLB even if QEMU runs on THP or gigapages hugetlbfs. > > > > If th...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
..., do >> you mean we can do that in the future (e.g by mapping more userspace >> pages to kernel) or it can be done by this series (only about three 4K >> pages were vmapped per virtqueue)? > When I answered about the advantages of mmu notifier and I mentioned > guaranteed 2m/gigapages where available, I overlooked the detail you > were using vmap instead of kmap. So with vmap you're actually doing > the opposite, it slows down the access because it will always use a 4k > TLB even if QEMU runs on THP or gigapages hugetlbfs. > > If there's just one page (o...
2019 Mar 12
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...that in the future (e.g by mapping more userspace >>>> pages to kernel) or it can be done by this series (only about three 4K >>>> pages were vmapped per virtqueue)? >>> When I answered about the advantages of mmu notifier and I mentioned >>> guaranteed 2m/gigapages where available, I overlooked the detail you >>> were using vmap instead of kmap. So with vmap you're actually doing >>> the opposite, it slows down the access because it will always use a 4k >>> TLB even if QEMU runs on THP or gigapages hugetlbfs. >>> >...
2019 Mar 12
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...pping more userspace > > > > > pages to kernel) or it can be done by this series (only about three 4K > > > > > pages were vmapped per virtqueue)? > > > > When I answered about the advantages of mmu notifier and I mentioned > > > > guaranteed 2m/gigapages where available, I overlooked the detail you > > > > were using vmap instead of kmap. So with vmap you're actually doing > > > > the opposite, it slows down the access because it will always use a 4k > > > > TLB even if QEMU runs on THP or gigapages hugetlb...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...about runtime stuff that can change the moment copy-user has completed even before returning to userland, so there's no easy way to do it just once. On top of skipping the __uaccess_begin_nospec(), the mmu notifier soft vhost design will further boost the performance by guaranteeing the use of gigapages TLBs when available (or 2M TLBs worst case) even if QEMU runs on smaller pages. Thanks, Andrea
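The performance argument here has two parts: the uaccess path pays a per-access cost (the __uaccess_begin_nospec() speculation barrier, plus STAC/CLAC on x86) inside get_user()/copy_from_user(), while an access through a kernel mapping kept valid by the mmu notifier is a plain load whose TLB entry can be 2M or 1G. A hedged sketch with illustrative names, not taken from the series:

    #include <linux/types.h>
    #include <linux/uaccess.h>
    #include <linux/compiler.h>

    /* Userspace-pointer path: every call goes through the uaccess
     * machinery, including the per-access speculation barrier. */
    static inline int read_avail_idx_uaccess(u16 __user *uidx, u16 *out)
    {
            return get_user(*out, uidx);
    }

    /* Kernel-mapping path: a plain load; if the mapping is huge, a single
     * 2M/1G TLB entry covers it. */
    static inline u16 read_avail_idx_mapped(const u16 *kidx)
    {
            return READ_ONCE(*kidx);
    }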
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote: >>>>> Which means after we fix vhost to add the flush_dcache_page after >>>>> kunmap, Parisc will get a double hit (but it also means Parisc >>>>> was the only one of those archs needed explicit cache flushes, >>>>> where vhost worked correctly so far.. so it kind of proves your
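The PARISC point is the standard cache-aliasing rule: when the kernel writes guest-visible memory through its own mapping, flush_dcache_page() is needed afterwards so a userspace mapping of the same page observes the write; on x86 it is a no-op, which is why the missing call never hurt there. A small sketch of the pattern being discussed (the helper name is illustrative):

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Write into a pinned guest page through a temporary kernel mapping,
     * then flush the data cache: a no-op on x86, required on archs with
     * virtually-indexed caches such as PARISC. */
    static void write_guest_page(struct page *page, unsigned int offset,
                                 const void *src, size_t len)
    {
            void *kaddr = kmap(page);

            memcpy(kaddr + offset, src, len);
            kunmap(page);
            flush_dcache_page(page);
    }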
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...> change the moment copy-user has completed even before returning to > userland, so there's no easy way to do it just once. > > On top of skipping the __uaccess_begin_nospec(), the mmu notifier soft > vhost design will further boost the performance by guaranteeing the > use of gigapages TLBs when available (or 2M TLBs worst case) even if > QEMU runs on smaller pages. Just to make sure I understand here. For boosting through huge TLB, do you mean we can do that in the future (e.g by mapping more userspace pages to kernel) or it can be done by this series (only about three...
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...shadow MMU). If you instead had to invalidate a secondary MMU mapping that isn't tracked by the driver (again: not vhost nor KVM case), you could have used the dirty bit of the kernel pagetable to call set_page_dirty and disambiguate but that's really messy, and it would prevent the use of gigapages in the direct mapping too and it'd require vmap for 4k tracking. To make sure set_page_dirty is run a single time no matter if the invalidate knows when a mapping is torn down, I suggested the below model: access = FOLL_WRITE repeat: page = gup_fast(access) put_page(page) /* need a w...
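Andrea's exact model is truncated in the snippet, but the underlying rule it builds on is ordering: the page must be marked dirty while a reference is still held, otherwise writeback can lose the modification. A generic sketch of that baseline pattern (hypothetical helper, not the model proposed in the mail):

    #include <linux/mm.h>
    #include <linux/errno.h>

    /* Pin a user page for writing, modify it, and mark it dirty before
     * dropping the last reference so writeback cannot miss the update. */
    static int touch_user_page(unsigned long uaddr)
    {
            struct page *page;
            int got = get_user_pages_fast(uaddr & PAGE_MASK, 1,
                                          FOLL_WRITE, &page);

            if (got != 1)
                    return got < 0 ? got : -EFAULT;

            /* ... write to the page via kmap_local_page() or the direct map ... */

            set_page_dirty_lock(page);
            put_page(page);
            return 0;
    }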
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote: > +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = { > + .invalidate_range = vhost_invalidate_range, > +}; > + > void vhost_dev_init(struct vhost_dev *dev, > struct vhost_virtqueue **vqs, int nvqs, int iov_limit) > { I also wonder here: when page is write protected then it does not look like
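The hunk above only registers the notifier ops; for orientation, an .invalidate_range callback (using the mmu notifier API of that era) generally takes the shape below. The struct, field names, and overlap check are hypothetical, and locking plus page unpinning are omitted; this is not Jason's implementation:

    #include <linux/kernel.h>
    #include <linux/mmu_notifier.h>
    #include <linux/vmalloc.h>

    /* Hypothetical per-virtqueue state for a vmap'ed metadata alias. */
    struct vq_meta {
            struct mmu_notifier     mn;
            unsigned long           uaddr;  /* start of the mapped user range */
            unsigned long           size;
            void                    *kaddr; /* vmap'ed alias, or NULL */
    };

    /* Drop the kernel alias whenever the invalidated range overlaps it,
     * so the next access re-pins and re-maps up-to-date pages. */
    static void vq_meta_invalidate_range(struct mmu_notifier *mn,
                                         struct mm_struct *mm,
                                         unsigned long start, unsigned long end)
    {
            struct vq_meta *m = container_of(mn, struct vq_meta, mn);

            if (end <= m->uaddr || start >= m->uaddr + m->size)
                    return;
            if (m->kaddr) {
                    vunmap(m->kaddr);
                    m->kaddr = NULL;
            }
    }

    static const struct mmu_notifier_ops vq_meta_notifier_ops = {
            .invalidate_range = vq_meta_invalidate_range,
    };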