Michael S. Tsirkin
2019-Mar-11 12:48 UTC
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote:
>
> On 2019/3/9 3:48 AM, Andrea Arcangeli wrote:
> > Hello Jason,
> >
> > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> > > Just to make sure I understand here. For boosting through huge TLB, do
> > > you mean we can do that in the future (e.g. by mapping more userspace
> > > pages to kernel) or can it be done by this series (only about three 4K
> > > pages were vmapped per virtqueue)?
> > When I answered about the advantages of mmu notifier and mentioned
> > guaranteed 2m/gigapages where available, I overlooked the detail that you
> > were using vmap instead of kmap. So with vmap you're actually doing
> > the opposite: it slows down the access because it will always use a 4k
> > TLB even if QEMU runs on THP or gigapages hugetlbfs.
> >
> > If there's just one page (or a few pages) in each vmap, there's no need
> > for vmap; the linearity vmap provides doesn't pay off in such a
> > case.
> >
> > So likely there's further room for improvement here that you can
> > achieve in the current series by just dropping vmap/vunmap.
> >
> > You can just use kmap (or kmap_atomic if you're in a preemptible
> > section; it should work from bh/irq).
> >
> > In short, the mmu notifier invalidate only sets a "struct page *
> > userringpage" pointer to NULL, without calls to vunmap.
> >
> > In all cases, immediately after gup_fast returns you can always call
> > put_page right away (which explains why I'd like an option to drop
> > FOLL_GET from gup_fast to speed it up).
> >
> > Then you can check the sequence counter and the inc/dec counter increased
> > by _start/_end. That will tell you whether the page you got (even though
> > you called put_page to immediately unpin it, or even to free it) is
> > guaranteed not to go away from under you until the invalidate is called.
> >
> > If the sequence counter and counter tell you that gup_fast raced with any
> > mmu notifier invalidate, you can just repeat gup_fast. Otherwise you're
> > done: the page cannot go away from under you, and the host virtual to host
> > physical mapping cannot change either. And the page is not pinned
> > either. So you can just set "struct page *userringpage = page",
> > where "page" was the one set up by gup_fast.
> >
> > When the invalidate later runs, you can just call set_page_dirty if
> > gup_fast was called with "write = 1" and then clear the pointer:
> > "userringpage = NULL".
> >
> > When you need to read/write the memory,
> > kmap/kmap_atomic(userringpage) should work.
>
> Yes, I've considered kmap() from the start. The reason I don't do that is
> that a large virtqueue may need more than one page, so the VA might not be
> contiguous. But this is probably not a big issue; it just needs more tricks
> in the vhost memory accessors.
>
>
> > In short, because there's no hardware involvement here, the established
> > mapping is just the pointer to the page; there is no need to set up
> > any pagetables or to do any TLB flushes (except on 32bit archs if
> > the page is above the direct mapping, but that never happens on 64bit
> > archs).
>
> I see. I believe we don't care much about the performance of 32bit archs (or
> we can just fall back to the copy_to_user() friends).

Using copyXuser is better I guess.

> Using direct mapping (I
> guess the kernel will always try hugepages for that?) should be better, and
> we can even use it for the data transfer, not only for the metadata.
>
> Thanks

We can't really. The big issue is get_user_pages. Doing that on the data
path will be slower than copyXuser.
Or maybe it won't, with the amount of mitigations spread around. Go ahead
and try.

> >
> > Thanks,
> > Andrea
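To make the scheme Andrea describes concrete, here is a minimal sketch
assuming a per-ring "userringpage" pointer, an MMU notifier sequence
counter, and an active-invalidate counter bumped by _start/_end. All
struct fields, helper names, and the surrounding locking are
hypothetical, not taken from the actual series:

	/* Illustrative only: layout and helpers are made up. */
	struct ring_page {
		struct page *userringpage;	/* NULL after an invalidate */
		unsigned long uaddr;		/* userspace address of the page */
	};

	/* MMU notifier invalidate path: just drop the cached pointer. */
	static void ring_page_invalidate(struct ring_page *rp)
	{
		if (rp->userringpage) {
			set_page_dirty(rp->userringpage); /* gup used write = 1 */
			rp->userringpage = NULL;
		}
	}

	/* (Re)establish the mapping; retry if gup raced with an invalidate. */
	static int ring_page_map(struct vhost_dev *dev, struct ring_page *rp)
	{
		struct page *page;
		unsigned int seq;

		do {
			seq = read_seqcount_begin(&dev->mn_seq);
			if (get_user_pages_fast(rp->uaddr, 1, 1 /* write */,
						&page) != 1)
				return -EFAULT;
			/* Unpin right away: the notifier, not the pin,
			 * keeps the mapping stable from here on. */
			put_page(page);
		} while (read_seqcount_retry(&dev->mn_seq, seq) ||
			 atomic_read(&dev->mn_active_invalidates));

		rp->userringpage = page;
		return 0;
	}

Reads and writes then go through kmap/kmap_atomic(rp->userringpage);
the sketch omits the serialization the invalidate path needs against
concurrent accessors.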
Andrea Arcangeli
2019-Mar-11 13:43 UTC
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 08:48:37AM -0400, Michael S. Tsirkin wrote:
> Using copyXuser is better I guess.

It certainly would be faster there, but I don't think it's needed if
that would be the only use case left that justifies supporting two
different models. On small 32bit systems with little RAM, kmap won't
perform measurably differently than on 64bit systems. If the 32bit
host has a lot of RAM, accessing RAM above the direct mapping gets slow
anyway compared to 64bit host kernels; it's not just an issue for
vhost + mmu notifier + kmap, and the best way to optimize things is to
run 64bit host kernels.

Like Christoph pointed out, the main use case for retaining the
copy-user model would be CPUs with virtually indexed, not physically
tagged, data caches (they'll still suffer from the spectre-v1 fix,
although I doubt they have to suffer the SMAP slowdown/feature). Those
may require some additional flushing beyond what the current copy-user
model requires.

As a rule of thumb, any arch where copy_user_page isn't defined as
copy_page will require some additional cache flushing after the
kmap. Supposedly with vmap, the vmap layer should have taken care of
that (I didn't verify that yet).

There are some accessories like copy_to_user_page()/
copy_from_user_page() that could work and obviously are defined as raw
memcpy on x86 (the main con is that they don't provide word-granular
access), and at least on sparc they're tailored to ptrace assumptions,
so we'd need to evaluate what happens if this is used outside of a
ptrace context. kmap has generally been used to access whole pages
(i.e. copy_user_page), so ptrace may actually be the only use case
with subpage-granular access.

	#define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
	do {								\
		flush_cache_page(vma, vaddr, page_to_pfn(page));	\
		memcpy(dst, src, len);					\
		flush_ptrace_access(vma, page, vaddr, src, len, 0);	\
	} while (0)

So I wouldn't rule out the need for a dual model, until we solve how
to run this stably on non-x86 arches with non-physically-tagged
caches.

Thanks,
Andrea
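For illustration, the extra flushing Andrea describes for the kmap
side might look roughly like this (a sketch only; the helper name is
invented, and it assumes flush_dcache_page() is the right call for
virtually indexed data caches, per Documentation/cachetlb.txt):

	/* Hypothetical write into a guest-visible ring page at 'off'.
	 * On x86 the flush compiles away; on virtually indexed,
	 * non-physically-tagged data caches it makes the kernel-side
	 * store visible to the userspace mapping of the same page. */
	static void ring_page_write(struct page *page, unsigned int off,
				    const void *src, size_t len)
	{
		void *kaddr = kmap_atomic(page);

		memcpy(kaddr + off, src, len);
		kunmap_atomic(kaddr);
		flush_dcache_page(page);
	}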
Jason Wang
2019-Mar-12 02:52 UTC
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/11 8:48 PM, Michael S. Tsirkin wrote:
> On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote:
>> On 2019/3/9 3:48 AM, Andrea Arcangeli wrote:
>>> Hello Jason,
>>>
>>> On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
>>>> Just to make sure I understand here. For boosting through huge TLB, do
>>>> you mean we can do that in the future (e.g. by mapping more userspace
>>>> pages to kernel) or can it be done by this series (only about three 4K
>>>> pages were vmapped per virtqueue)?
>>> When I answered about the advantages of mmu notifier and mentioned
>>> guaranteed 2m/gigapages where available, I overlooked the detail that you
>>> were using vmap instead of kmap. So with vmap you're actually doing
>>> the opposite: it slows down the access because it will always use a 4k
>>> TLB even if QEMU runs on THP or gigapages hugetlbfs.
>>>
>>> If there's just one page (or a few pages) in each vmap, there's no need
>>> for vmap; the linearity vmap provides doesn't pay off in such a
>>> case.
>>>
>>> So likely there's further room for improvement here that you can
>>> achieve in the current series by just dropping vmap/vunmap.
>>>
>>> You can just use kmap (or kmap_atomic if you're in a preemptible
>>> section; it should work from bh/irq).
>>>
>>> In short, the mmu notifier invalidate only sets a "struct page *
>>> userringpage" pointer to NULL, without calls to vunmap.
>>>
>>> In all cases, immediately after gup_fast returns you can always call
>>> put_page right away (which explains why I'd like an option to drop
>>> FOLL_GET from gup_fast to speed it up).
>>>
>>> Then you can check the sequence counter and the inc/dec counter increased
>>> by _start/_end. That will tell you whether the page you got (even though
>>> you called put_page to immediately unpin it, or even to free it) is
>>> guaranteed not to go away from under you until the invalidate is called.
>>>
>>> If the sequence counter and counter tell you that gup_fast raced with any
>>> mmu notifier invalidate, you can just repeat gup_fast. Otherwise you're
>>> done: the page cannot go away from under you, and the host virtual to host
>>> physical mapping cannot change either. And the page is not pinned
>>> either. So you can just set "struct page *userringpage = page",
>>> where "page" was the one set up by gup_fast.
>>>
>>> When the invalidate later runs, you can just call set_page_dirty if
>>> gup_fast was called with "write = 1" and then clear the pointer:
>>> "userringpage = NULL".
>>>
>>> When you need to read/write the memory,
>>> kmap/kmap_atomic(userringpage) should work.
>> Yes, I've considered kmap() from the start. The reason I don't do that is
>> that a large virtqueue may need more than one page, so the VA might not be
>> contiguous. But this is probably not a big issue; it just needs more tricks
>> in the vhost memory accessors.
>>
>>
>>> In short, because there's no hardware involvement here, the established
>>> mapping is just the pointer to the page; there is no need to set up
>>> any pagetables or to do any TLB flushes (except on 32bit archs if
>>> the page is above the direct mapping, but that never happens on 64bit
>>> archs).
>> I see. I believe we don't care much about the performance of 32bit archs (or
>> we can just fall back to the copy_to_user() friends).
> Using copyXuser is better I guess.

Ok.

>
>> Using direct mapping (I
>> guess the kernel will always try hugepages for that?) should be better, and
>> we can even use it for the data transfer, not only for the metadata.
>>
>> Thanks
> We can't really. The big issue is get_user_pages. Doing that on the data
> path will be slower than copyXuser.

I meant if we can find a way to avoid doing gup in the datapath. E.g. vhost
maintains a range tree and adds or removes ranges through the MMU notifier.
Then in the datapath, if we find the range, we use the direct mapping;
otherwise copy_to_user(). See the sketch below.

Thanks

> Or maybe it won't, with the
> amount of mitigations spread around. Go ahead and try.
>
>
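A rough sketch of that datapath fallback (the lookup helper and the
vhost_map struct are invented here for illustration, and the
synchronization between the lookup and the notifier teardown, e.g.
RCU, is omitted):

	/* Hypothetical: use the direct mapping when the MMU notifier
	 * has a validated range cached, otherwise fall back to
	 * copy_to_user(). */
	static int vhost_copy_to_guest(struct vhost_virtqueue *vq,
				       void __user *uaddr,
				       const void *src, size_t len)
	{
		struct vhost_map *map;

		map = vhost_map_range_lookup(vq, (unsigned long)uaddr, len);
		if (map) {
			/* Fast path: range is covered and stable until
			 * the next invalidate removes it from the tree. */
			memcpy(map->kaddr +
			       ((unsigned long)uaddr - map->ustart),
			       src, len);
			return 0;
		}
		/* Slow path: no cached mapping. */
		return copy_to_user(uaddr, src, len) ? -EFAULT : 0;
	}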
Jason Wang
2019-Mar-12 02:56 UTC
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/11 9:43 PM, Andrea Arcangeli wrote:
> On Mon, Mar 11, 2019 at 08:48:37AM -0400, Michael S. Tsirkin wrote:
>> Using copyXuser is better I guess.
> It certainly would be faster there, but I don't think it's needed if
> that would be the only use case left that justifies supporting two
> different models. On small 32bit systems with little RAM, kmap won't
> perform measurably differently than on 64bit systems. If the 32bit
> host has a lot of RAM, accessing RAM above the direct mapping gets slow
> anyway compared to 64bit host kernels; it's not just an issue for
> vhost + mmu notifier + kmap, and the best way to optimize things is to
> run 64bit host kernels.
>
> Like Christoph pointed out, the main use case for retaining the
> copy-user model would be CPUs with virtually indexed, not physically
> tagged, data caches (they'll still suffer from the spectre-v1 fix,
> although I doubt they have to suffer the SMAP slowdown/feature). Those
> may require some additional flushing beyond what the current copy-user
> model requires.
>
> As a rule of thumb, any arch where copy_user_page isn't defined as
> copy_page will require some additional cache flushing after the
> kmap. Supposedly with vmap, the vmap layer should have taken care of
> that (I didn't verify that yet).

vmap_page_range()/free_unmap_vmap_area() will call
flush_cache_vmap()/flush_cache_vunmap(). So the vmap layer should be ok.

Thanks

>
> There are some accessories like copy_to_user_page()/
> copy_from_user_page() that could work and obviously are defined as raw
> memcpy on x86 (the main con is that they don't provide word-granular
> access), and at least on sparc they're tailored to ptrace assumptions,
> so we'd need to evaluate what happens if this is used outside of a
> ptrace context. kmap has generally been used to access whole pages
> (i.e. copy_user_page), so ptrace may actually be the only use case
> with subpage-granular access.
>
> #define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
> do {								\
>	flush_cache_page(vma, vaddr, page_to_pfn(page));	\
>	memcpy(dst, src, len);					\
>	flush_ptrace_access(vma, page, vaddr, src, len, 0);	\
> } while (0)
>
> So I wouldn't rule out the need for a dual model, until we solve how
> to run this stably on non-x86 arches with non-physically-tagged
> caches.
>
> Thanks,
> Andrea
Michael S. Tsirkin
2019-Mar-12 03:50 UTC
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Tue, Mar 12, 2019 at 10:52:15AM +0800, Jason Wang wrote:
>
> On 2019/3/11 8:48 PM, Michael S. Tsirkin wrote:
> > On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote:
> > > On 2019/3/9 3:48 AM, Andrea Arcangeli wrote:
> > > > Hello Jason,
> > > >
> > > > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> > > > > Just to make sure I understand here. For boosting through huge TLB, do
> > > > > you mean we can do that in the future (e.g. by mapping more userspace
> > > > > pages to kernel) or can it be done by this series (only about three 4K
> > > > > pages were vmapped per virtqueue)?
> > > > When I answered about the advantages of mmu notifier and mentioned
> > > > guaranteed 2m/gigapages where available, I overlooked the detail that you
> > > > were using vmap instead of kmap. So with vmap you're actually doing
> > > > the opposite: it slows down the access because it will always use a 4k
> > > > TLB even if QEMU runs on THP or gigapages hugetlbfs.
> > > >
> > > > If there's just one page (or a few pages) in each vmap, there's no need
> > > > for vmap; the linearity vmap provides doesn't pay off in such a
> > > > case.
> > > >
> > > > So likely there's further room for improvement here that you can
> > > > achieve in the current series by just dropping vmap/vunmap.
> > > >
> > > > You can just use kmap (or kmap_atomic if you're in a preemptible
> > > > section; it should work from bh/irq).
> > > >
> > > > In short, the mmu notifier invalidate only sets a "struct page *
> > > > userringpage" pointer to NULL, without calls to vunmap.
> > > >
> > > > In all cases, immediately after gup_fast returns you can always call
> > > > put_page right away (which explains why I'd like an option to drop
> > > > FOLL_GET from gup_fast to speed it up).
> > > >
> > > > Then you can check the sequence counter and the inc/dec counter increased
> > > > by _start/_end. That will tell you whether the page you got (even though
> > > > you called put_page to immediately unpin it, or even to free it) is
> > > > guaranteed not to go away from under you until the invalidate is called.
> > > >
> > > > If the sequence counter and counter tell you that gup_fast raced with any
> > > > mmu notifier invalidate, you can just repeat gup_fast. Otherwise you're
> > > > done: the page cannot go away from under you, and the host virtual to host
> > > > physical mapping cannot change either. And the page is not pinned
> > > > either. So you can just set "struct page *userringpage = page",
> > > > where "page" was the one set up by gup_fast.
> > > >
> > > > When the invalidate later runs, you can just call set_page_dirty if
> > > > gup_fast was called with "write = 1" and then clear the pointer:
> > > > "userringpage = NULL".
> > > >
> > > > When you need to read/write the memory,
> > > > kmap/kmap_atomic(userringpage) should work.
> > > Yes, I've considered kmap() from the start. The reason I don't do that is
> > > that a large virtqueue may need more than one page, so the VA might not be
> > > contiguous. But this is probably not a big issue; it just needs more tricks
> > > in the vhost memory accessors.
> > >
> > >
> > > > In short, because there's no hardware involvement here, the established
> > > > mapping is just the pointer to the page; there is no need to set up
> > > > any pagetables or to do any TLB flushes (except on 32bit archs if
> > > > the page is above the direct mapping, but that never happens on 64bit
> > > > archs).
> > > I see. I believe we don't care much about the performance of 32bit archs (or
> > > we can just fall back to the copy_to_user() friends).
> > Using copyXuser is better I guess.
>
> Ok.
>
> >
> > > Using direct mapping (I
> > > guess the kernel will always try hugepages for that?) should be better, and
> > > we can even use it for the data transfer, not only for the metadata.
> > >
> > > Thanks
> > We can't really. The big issue is get_user_pages. Doing that on the data
> > path will be slower than copyXuser.
>
> I meant if we can find a way to avoid doing gup in the datapath. E.g. vhost
> maintains a range tree and adds or removes ranges through the MMU notifier.
> Then in the datapath, if we find the range, we use the direct mapping;
> otherwise copy_to_user().
>
> Thanks

We can try. But I'm not sure there's any reason to think there's any
locality there.

> > Or maybe it won't, with the
> > amount of mitigations spread around. Go ahead and try.
> >
> >