David Miller
2019-Mar-11 18:14 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
From: "Michael S. Tsirkin" <mst at redhat.com>
Date: Mon, 11 Mar 2019 09:59:28 -0400

> On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
>>
>> On 2019/3/8 10:12, Christoph Hellwig wrote:
>> > On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>> > > This series tries to access virtqueue metadata through kernel virtual
>> > > address instead of copy_user() friends since they had too much
>> > > overheads like checks, spec barriers or even hardware feature
>> > > toggling. This is done through setup kernel address through vmap() and
>> > > register MMU notifier for invalidation.
>> > >
>> > > Test shows about 24% improvement on TX PPS. TCP_STREAM doesn't see
>> > > obvious improvement.
>> > How is this going to work for CPUs with virtually tagged caches?
>>
>> Anything different that you worry?
>
> If caches have virtual tags then kernel and userspace view of memory
> might not be automatically in sync if they access memory
> through different virtual addresses. You need to do things like
> flush_cache_page, probably multiple times.

"flush_dcache_page()"
Jason Wang
2019-Mar-12 02:59 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/12 2:14, David Miller wrote:
> From: "Michael S. Tsirkin" <mst at redhat.com>
> Date: Mon, 11 Mar 2019 09:59:28 -0400
>
>> On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
>>> On 2019/3/8 10:12, Christoph Hellwig wrote:
>>>> On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>>>>> This series tries to access virtqueue metadata through kernel virtual
>>>>> address instead of copy_user() friends since they had too much
>>>>> overheads like checks, spec barriers or even hardware feature
>>>>> toggling. This is done through setup kernel address through vmap() and
>>>>> register MMU notifier for invalidation.
>>>>>
>>>>> Test shows about 24% improvement on TX PPS. TCP_STREAM doesn't see
>>>>> obvious improvement.
>>>> How is this going to work for CPUs with virtually tagged caches?
>>>
>>> Anything different that you worry?
>> If caches have virtual tags then kernel and userspace view of memory
>> might not be automatically in sync if they access memory
>> through different virtual addresses. You need to do things like
>> flush_cache_page, probably multiple times.
> "flush_dcache_page()"

I get this. Then I think the current set_bit_to_user() is suspicious, we
probably miss a flush_dcache_page() there:

static int set_bit_to_user(int nr, void __user *addr)
{
        unsigned long log = (unsigned long)addr;
        struct page *page;
        void *base;
        int bit = nr + (log % PAGE_SIZE) * 8;
        int r;

        r = get_user_pages_fast(log, 1, 1, &page);
        if (r < 0)
                return r;
        BUG_ON(r != 1);
        base = kmap_atomic(page);
        set_bit(bit, base);
        kunmap_atomic(base);
        set_page_dirty_lock(page);
        put_page(page);
        return 0;
}

Thanks
Michael S. Tsirkin
2019-Mar-12 03:52 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On Tue, Mar 12, 2019 at 10:59:09AM +0800, Jason Wang wrote:
>
> On 2019/3/12 2:14, David Miller wrote:
> > From: "Michael S. Tsirkin" <mst at redhat.com>
> > Date: Mon, 11 Mar 2019 09:59:28 -0400
> >
> > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> > > > On 2019/3/8 10:12, Christoph Hellwig wrote:
> > > > > On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
> > > > > > This series tries to access virtqueue metadata through kernel virtual
> > > > > > address instead of copy_user() friends since they had too much
> > > > > > overheads like checks, spec barriers or even hardware feature
> > > > > > toggling. This is done through setup kernel address through vmap() and
> > > > > > register MMU notifier for invalidation.
> > > > > >
> > > > > > Test shows about 24% improvement on TX PPS. TCP_STREAM doesn't see
> > > > > > obvious improvement.
> > > > > How is this going to work for CPUs with virtually tagged caches?
> > > >
> > > > Anything different that you worry?
> > > If caches have virtual tags then kernel and userspace view of memory
> > > might not be automatically in sync if they access memory
> > > through different virtual addresses. You need to do things like
> > > flush_cache_page, probably multiple times.
> > "flush_dcache_page()"
>
> I get this. Then I think the current set_bit_to_user() is suspicious, we
> probably miss a flush_dcache_page() there:
>
> static int set_bit_to_user(int nr, void __user *addr)
> {
>         unsigned long log = (unsigned long)addr;
>         struct page *page;
>         void *base;
>         int bit = nr + (log % PAGE_SIZE) * 8;
>         int r;
>
>         r = get_user_pages_fast(log, 1, 1, &page);
>         if (r < 0)
>                 return r;
>         BUG_ON(r != 1);
>         base = kmap_atomic(page);
>         set_bit(bit, base);
>         kunmap_atomic(base);
>         set_page_dirty_lock(page);
>         put_page(page);
>         return 0;
> }
>
> Thanks

I think you are right.

The correct fix though is to re-implement it using asm and handling
pagefault, not gup. Three atomic ops per bit is way too expensive.

-- 
MST
Jason Wang
2019-Mar-12 07:51 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/12 1:14, James Bottomley wrote:
> On Tue, 2019-03-12 at 10:59 +0800, Jason Wang wrote:
>> On 2019/3/12 2:14, David Miller wrote:
>>> From: "Michael S. Tsirkin" <mst at redhat.com>
>>> Date: Mon, 11 Mar 2019 09:59:28 -0400
>>>
>>>> On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
>>>>> On 2019/3/8 10:12, Christoph Hellwig wrote:
>>>>>> On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>>>>>>> This series tries to access virtqueue metadata through
>>>>>>> kernel virtual address instead of copy_user() friends since
>>>>>>> they had too much overheads like checks, spec barriers or
>>>>>>> even hardware feature toggling. This is done through setup
>>>>>>> kernel address through vmap() and register MMU notifier for
>>>>>>> invalidation.
>>>>>>>
>>>>>>> Test shows about 24% improvement on TX PPS. TCP_STREAM
>>>>>>> doesn't see obvious improvement.
>>>>>> How is this going to work for CPUs with virtually tagged
>>>>>> caches?
>>>>> Anything different that you worry?
>>>> If caches have virtual tags then kernel and userspace view of
>>>> memory might not be automatically in sync if they access memory
>>>> through different virtual addresses. You need to do things like
>>>> flush_cache_page, probably multiple times.
>>> "flush_dcache_page()"
>>
>> I get this. Then I think the current set_bit_to_user() is suspicious,
>> we probably miss a flush_dcache_page() there:
>>
>> static int set_bit_to_user(int nr, void __user *addr)
>> {
>>         unsigned long log = (unsigned long)addr;
>>         struct page *page;
>>         void *base;
>>         int bit = nr + (log % PAGE_SIZE) * 8;
>>         int r;
>>
>>         r = get_user_pages_fast(log, 1, 1, &page);
>>         if (r < 0)
>>                 return r;
>>         BUG_ON(r != 1);
>>         base = kmap_atomic(page);
>>         set_bit(bit, base);
>>         kunmap_atomic(base);
> This sequence should be OK.  get_user_pages() contains a flush which
> clears the cache above the user virtual address, so on kmap, the page
> is coherent at the new alias.  On parisc at least, kunmap embodies a
> flush_dcache_page() which pushes any changes in the cache above the
> kernel virtual address back to main memory and makes it coherent again
> for the user alias to pick it up.

It would be good if kmap()/kunmap() can do this but looks like we can
not assume this? For example, sparc's flush_dcache_page() doesn't do
flush_dcache_page(). And bio_copy_data_iter() does flush_dcache_page()
after kunmap_atomic().

Thanks

> James
Jason Wang
2019-Mar-12 07:53 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/12 3:51, Jason Wang wrote:
>
> On 2019/3/12 1:14, James Bottomley wrote:
>> On Tue, 2019-03-12 at 10:59 +0800, Jason Wang wrote:
>>> On 2019/3/12 2:14, David Miller wrote:
>>>> From: "Michael S. Tsirkin" <mst at redhat.com>
>>>> Date: Mon, 11 Mar 2019 09:59:28 -0400
>>>>
>>>>> On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
>>>>>> On 2019/3/8 10:12, Christoph Hellwig wrote:
>>>>>>> On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>>>>>>>> This series tries to access virtqueue metadata through
>>>>>>>> kernel virtual address instead of copy_user() friends since
>>>>>>>> they had too much overheads like checks, spec barriers or
>>>>>>>> even hardware feature toggling. This is done through setup
>>>>>>>> kernel address through vmap() and register MMU notifier for
>>>>>>>> invalidation.
>>>>>>>>
>>>>>>>> Test shows about 24% improvement on TX PPS. TCP_STREAM
>>>>>>>> doesn't see obvious improvement.
>>>>>>> How is this going to work for CPUs with virtually tagged
>>>>>>> caches?
>>>>>> Anything different that you worry?
>>>>> If caches have virtual tags then kernel and userspace view of
>>>>> memory might not be automatically in sync if they access memory
>>>>> through different virtual addresses. You need to do things like
>>>>> flush_cache_page, probably multiple times.
>>>> "flush_dcache_page()"
>>>
>>> I get this. Then I think the current set_bit_to_user() is
>>> suspicious, we probably miss a flush_dcache_page() there:
>>>
>>> static int set_bit_to_user(int nr, void __user *addr)
>>> {
>>>         unsigned long log = (unsigned long)addr;
>>>         struct page *page;
>>>         void *base;
>>>         int bit = nr + (log % PAGE_SIZE) * 8;
>>>         int r;
>>>
>>>         r = get_user_pages_fast(log, 1, 1, &page);
>>>         if (r < 0)
>>>                 return r;
>>>         BUG_ON(r != 1);
>>>         base = kmap_atomic(page);
>>>         set_bit(bit, base);
>>>         kunmap_atomic(base);
>> This sequence should be OK.  get_user_pages() contains a flush which
>> clears the cache above the user virtual address, so on kmap, the page
>> is coherent at the new alias.  On parisc at least, kunmap embodies a
>> flush_dcache_page() which pushes any changes in the cache above the
>> kernel virtual address back to main memory and makes it coherent again
>> for the user alias to pick it up.
>
> It would be good if kmap()/kunmap() can do this but looks like we can
> not assume this? For example, sparc's flush_dcache_page()

Sorry, I meant kunmap_atomic().

Thanks

> doesn't do flush_dcache_page(). And bio_copy_data_iter() does
> flush_dcache_page() after kunmap_atomic().
>
> Thanks
>
>> James