Displaying 13 results from an estimated 13 matches for "resigter".
Did you mean: register
2019 Mar 11
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...is series tries to access virtqueue metadata through a kernel virtual
>> address instead of the copy_user() family, since those have too much
>> overhead from checks, spec barriers, or even hardware feature
>> toggling. This is done by setting up a kernel address through vmap() and
>> registering an MMU notifier for invalidation.
>>
>> Testing shows about a 24% improvement on TX PPS. TCP_STREAM doesn't see
>> an obvious improvement.
> How is this going to work for CPUs with virtually tagged caches?
Is there anything specific you're worried about? I can run a test, but do
you know of any arc...
2019 Mar 11
4
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...data through a kernel virtual
>> > > address instead of the copy_user() family, since those have too much
>> > > overhead from checks, spec barriers, or even hardware feature
>> > > toggling. This is done by setting up a kernel address through vmap() and
>> > > registering an MMU notifier for invalidation.
>> > >
>> > > Testing shows about a 24% improvement on TX PPS. TCP_STREAM doesn't see
>> > > an obvious improvement.
>> > How is this going to work for CPUs with virtually tagged caches?
>>
>>
>> Anything d...
2019 Mar 08
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...g wrote:
> This series tries to access virtqueue metadata through a kernel virtual
> address instead of the copy_user() family, since those have too much
> overhead from checks, spec barriers, or even hardware feature
> toggling. This is done by setting up a kernel address through vmap() and
> registering an MMU notifier for invalidation.
>
> Testing shows about a 24% improvement on TX PPS. TCP_STREAM doesn't see
> an obvious improvement.
How is this going to work for CPUs with virtually tagged caches?
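As a rough, non-authoritative sketch of the mechanism under discussion (struct vhost_map and vhost_map_uaddr() are invented names, not the series' actual code, and a modern get_user_pages_fast() signature is assumed): pin the user pages backing the virtqueue metadata, then vmap() them into one contiguous kernel range, so the datapath can dereference the rings directly instead of calling copy_from_user()/copy_to_user().

/* Sketch only: illustrative names, not the actual vhost patch. */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

struct vhost_map {
        void *addr;             /* kernel virtual address from vmap() */
        struct page **pages;    /* pinned user pages backing the range */
        int npages;
};

static int vhost_map_uaddr(struct vhost_map *map, unsigned long uaddr,
                           size_t len)
{
        int npages = DIV_ROUND_UP((uaddr & ~PAGE_MASK) + len, PAGE_SIZE);
        int pinned;

        map->pages = kmalloc_array(npages, sizeof(*map->pages), GFP_KERNEL);
        if (!map->pages)
                return -ENOMEM;

        /* Pin the pages so they cannot be reclaimed or migrated away
         * while the device dereferences them. */
        pinned = get_user_pages_fast(uaddr & PAGE_MASK, npages, FOLL_WRITE,
                                     map->pages);
        if (pinned != npages)
                goto err;

        /* One contiguous kernel mapping over the pinned pages; the
         * metadata itself starts at map->addr + (uaddr & ~PAGE_MASK). */
        map->addr = vmap(map->pages, npages, VM_MAP, PAGE_KERNEL);
        if (!map->addr)
                goto err;

        map->npages = npages;
        return 0;

err:
        while (pinned > 0)
                put_page(map->pages[--pinned]);
        kfree(map->pages);
        return -EFAULT;
}

This also makes the question above concrete: after vmap(), the same physical page is live under both a user virtual address and a kernel alias, so on virtually tagged (aliasing) caches some flush_dcache_page()-style maintenance would be needed to keep the two views coherent.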
2019 Mar 11
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...s virtqueue metadata through a kernel virtual
> > > address instead of the copy_user() family, since those have too much
> > > overhead from checks, spec barriers, or even hardware feature
> > > toggling. This is done by setting up a kernel address through vmap() and
> > > registering an MMU notifier for invalidation.
> > >
> > > Testing shows about a 24% improvement on TX PPS. TCP_STREAM doesn't see
> > > an obvious improvement.
> > How is this going to work for CPUs with virtually tagged caches?
>
>
> Is there anything specific you're worried about?
If...
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...ough kernel virtual
>>>>> address instead of the copy_user() family, since those have too much
>>>>> overhead from checks, spec barriers, or even hardware feature
>>>>> toggling. This is done by setting up a kernel address through vmap() and
>>>>> registering an MMU notifier for invalidation.
>>>>>
>>>>> Testing shows about a 24% improvement on TX PPS. TCP_STREAM doesn't see
>>>>> an obvious improvement.
>>>> How is this going to work for CPUs with virtually tagged caches?
>>>
>>> Anyth...
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...>>>> much
>>>>>>> overhead from checks, spec barriers, or even hardware
>>>>>>> feature
>>>>>>> toggling. This is done by setting up a kernel address through
>>>>>>> vmap() and
>>>>>>> registering an MMU notifier for invalidation.
>>>>>>>
>>>>>>> Testing shows about a 24% improvement on TX PPS. TCP_STREAM
>>>>>>> doesn't see an
>>>>>>> obvious improvement.
>>>>>> How is this going to work for CPUs...
2019 Mar 12
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...>>>> address instead of copy_user() friends since they had too much
>>>>>>> overheads like checks, spec barriers or even hardware feature
>>>>>>> toggling. This is done through setup kernel address through vmap() and
>>>>>>> resigter MMU notifier for invalidation.
>>>>>>>
>>>>>>> Test shows about 24% improvement on TX PPS. TCP_STREAM doesn't see
>>>>>>> obvious improvement.
>>>>>> How is this going to work for CPUs with virtually tagged caches?...
2019 Mar 12
9
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...> > > address instead of the copy_user() family, since those have too much
> > > > > > overhead from checks, spec barriers, or even hardware feature
> > > > > > toggling. This is done by setting up a kernel address through vmap() and
> > > > > > registering an MMU notifier for invalidation.
> > > > > >
> > > > > > Testing shows about a 24% improvement on TX PPS. TCP_STREAM doesn't see
> > > > > > an obvious improvement.
> > > > > How is this going to work for CPUs with virtually tagged cac...
2019 Mar 06
12
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
This series tries to access virtqueue metadata through a kernel virtual
address instead of the copy_user() family, since those have too much
overhead from checks, spec barriers, or even hardware feature
toggling. This is done by setting up a kernel address through vmap() and
registering an MMU notifier for invalidation.
Testing shows about a 24% improvement on TX PPS. TCP_STREAM doesn't see an
obvious improvement.
Thanks
Changes from V4:
- use invalidate_range() instead of invalidate_range_start()
- track dirty pages
Changes from V3:
- don't try to use vmap for file backed pages
-...
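To make the invalidation half concrete, here is a hedged sketch (again with invented names, not the series' code) of how a device might register an MMU notifier on the owner's mm so the vmap()ed metadata is torn down whenever the kernel changes the underlying user mapping; per the changelog above, V2 hooks invalidate_range() rather than invalidate_range_start(), and tracks dirty pages before releasing them:

/* Sketch only: the callback shape follows the mmu_notifier API of the
 * 2019-era kernels this thread targets. */
#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/sched.h>

struct vhost_dev_sketch {
        struct mmu_notifier mn;
        /* ... a struct vhost_map per virtqueue's metadata ... */
};

static void vhost_invalidate_range(struct mmu_notifier *mn,
                                   struct mm_struct *mm,
                                   unsigned long start, unsigned long end)
{
        struct vhost_dev_sketch *d =
                container_of(mn, struct vhost_dev_sketch, mn);

        /* Unmap any vmap()ed metadata overlapping [start, end). Before
         * put_page(), mark each page dirty (set_page_dirty()) because the
         * device wrote the rings through the kernel alias (the "track
         * dirty pages" item in the changelog). */
        (void)d;
}

static const struct mmu_notifier_ops vhost_mn_ops = {
        .invalidate_range = vhost_invalidate_range,
};

static int vhost_register_notifier(struct vhost_dev_sketch *d)
{
        d->mn.ops = &vhost_mn_ops;
        /* Hook the owner process's address space. */
        return mmu_notifier_register(&d->mn, current->mm);
}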