Jason Wang
2019-Mar-11 07:13 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/8 10:12 PM, Christoph Hellwig wrote:
> On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>> This series tries to access virtqueue metadata through kernel virtual
>> addresses instead of the copy_user() friends, since those have too much
>> overhead from checks, speculation barriers, or even hardware feature
>> toggling. This is done by setting up a kernel address through vmap() and
>> registering an MMU notifier for invalidation.
>>
>> Tests show about a 24% improvement in TX PPS. TCP_STREAM doesn't see an
>> obvious improvement.
>
> How is this going to work for CPUs with virtually tagged caches?

Is there anything specific that you are worried about? I can run a test, but do you know of any archs that use virtually tagged caches?

Thanks
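For context, the approach described in the cover letter (pin the userspace pages backing the virtqueue metadata, map them through vmap(), and register an MMU notifier so the mapping can be invalidated) might look roughly like the following kernel-code sketch. This is illustrative only, not code from the actual series; the helper name vhost_vmap_user_pages() and its error handling are hypothetical:

```c
/* Illustrative sketch only; not from the actual patch series.
 * The helper name here is hypothetical. */
static void *vhost_vmap_user_pages(unsigned long uaddr, size_t size)
{
	int npages = DIV_ROUND_UP(size + offset_in_page(uaddr), PAGE_SIZE);
	struct page **pages;
	void *vaddr = NULL;

	pages = kmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	/* Pin the userspace pages backing the virtqueue metadata. */
	if (get_user_pages_fast(uaddr & PAGE_MASK, npages, FOLL_WRITE,
				pages) != npages)
		goto out;

	/* Map them contiguously into kernel virtual address space so the
	 * vhost worker can access the rings without copy_to/from_user().
	 * An MMU notifier (not shown) must tear this mapping down when
	 * the userspace mapping is invalidated. */
	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
out:
	kfree(pages);
	return vaddr;
}
```

The point of the thread below is that the resulting kernel alias and the original userspace mapping are two different virtual addresses for the same physical pages, which is exactly the case virtually tagged caches make hard.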
Michael S. Tsirkin
2019-Mar-11 13:59 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> On 2019/3/8 10:12 PM, Christoph Hellwig wrote:
>> On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>>> This series tries to access virtqueue metadata through kernel virtual
>>> addresses instead of the copy_user() friends, since those have too much
>>> overhead from checks, speculation barriers, or even hardware feature
>>> toggling. This is done by setting up a kernel address through vmap() and
>>> registering an MMU notifier for invalidation.
>>>
>>> Tests show about a 24% improvement in TX PPS. TCP_STREAM doesn't see an
>>> obvious improvement.
>>
>> How is this going to work for CPUs with virtually tagged caches?
>
> Is there anything specific that you are worried about?

If caches have virtual tags, then the kernel and userspace views of memory might not automatically be in sync when they access that memory through different virtual addresses. You need to do things like flush_cache_page(), probably multiple times.

> I can run a test, but do you know of any archs that use virtually tagged caches?

SPARC, I believe.

> Thanks
David Miller
2019-Mar-11 18:14 UTC
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
From: "Michael S. Tsirkin" <mst at redhat.com>
Date: Mon, 11 Mar 2019 09:59:28 -0400

> On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
>> On 2019/3/8 10:12 PM, Christoph Hellwig wrote:
>>> On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>>>> This series tries to access virtqueue metadata through kernel virtual
>>>> addresses instead of the copy_user() friends, since those have too much
>>>> overhead from checks, speculation barriers, or even hardware feature
>>>> toggling. This is done by setting up a kernel address through vmap() and
>>>> registering an MMU notifier for invalidation.
>>>>
>>>> Tests show about a 24% improvement in TX PPS. TCP_STREAM doesn't see an
>>>> obvious improvement.
>>>
>>> How is this going to work for CPUs with virtually tagged caches?
>>
>> Is there anything specific that you are worried about?
>
> If caches have virtual tags, then the kernel and userspace views of memory
> might not automatically be in sync when they access that memory through
> different virtual addresses. You need to do things like flush_cache_page(),
> probably multiple times.

"flush_dcache_page()"
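As a rough illustration of the correction above (an illustrative kernel-code sketch, not code from the series; the helper name vhost_write_used_idx() is hypothetical): on architectures with virtually tagged or aliasing data caches, each publish through the kernel-side vmap() alias would need a flush_dcache_page() so that userspace, reading the same page through its own mapping, observes the update:

```c
#include <linux/highmem.h>	/* flush_dcache_page() */
#include <uapi/linux/virtio_ring.h>

/* Illustrative sketch only. Publish a new used index through the
 * kernel alias of a page that userspace also has mapped. */
static void vhost_write_used_idx(struct page *page, void *kaddr,
				 u16 new_idx)
{
	u16 *idx = kaddr + offsetof(struct vring_used, idx);

	smp_wmb();		/* order ring entries before the index */
	WRITE_ONCE(*idx, new_idx);

	/* The kernel alias and the userspace mapping are different
	 * virtual addresses, so on virtually tagged caches they may
	 * hit different cache lines; flush to keep them coherent. */
	flush_dcache_page(page);
}
```

On physically tagged caches (x86, modern arm64) flush_dcache_page() is a no-op, which is presumably why the original benchmarks did not surface this cost.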