Displaying 20 results from an estimated 33 matches for "nosmap".
2019 Jan 07
2
[RFC PATCH V3 0/5] Hi:
...ends since they had too many
>> overheads like checks, spec barriers or even hardware feature
>> toggling.
> Will review, thanks!
> One question that comes to mind is whether it's all about bypassing
> stac/clac. Could you please include a performance comparison with
> nosmap?
>
On machine without SMAP (Sandy Bridge):
Before: 4.8Mpps
After: 5.2Mpps
On machine with SMAP (Broadwell):
Before: 5.0Mpps
After: 6.1Mpps
No smap: 7.5Mpps
Thanks
2019 Apr 04
2
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...2K 7.7 8.5 25.0 28.3 29.3
For guest -> host I think the TCP_NODELAY test is important, because TCP
buffering increases the throughput a lot.
> One other comment: it makes sense to test with disabling smap
> mitigations (boot host and guest with nosmap). No problem with also
> testing the default smap path, but I think you will discover that the
> performance impact of smap hardening being enabled is often severe for
> such benchmarks.
Thanks for this valuable suggestion, I'll redo all the tests with nosmap!
Cheers,
Stefano
2019 Jan 07
3
[RFC PATCH V3 0/5] Hi:
...erheads like checks, spec barriers or even hardware feature
>>>> toggling.
>>> Will review, thanks!
>>> One question that comes to mind is whether it's all about bypassing
>>> stac/clac. Could you please include a performance comparison with
>>> nosmap?
>>>
>> On machine without SMAP (Sandy Bridge):
>>
>> Before: 4.8Mpps
>>
>> After: 5.2Mpps
> OK so would you say it's really unsafe versus safe accesses?
> Or would you say it's just better written code?
It's the effect of removing specul...
2019 Apr 04
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...5.0 28.3 29.3
>
> For guest -> host I think the TCP_NODELAY test is important, because TCP
> buffering increases the throughput a lot.
>
> > One other comment: it makes sense to test with disabling smap
> > mitigations (boot host and guest with nosmap). No problem with also
> > testing the default smap path, but I think you will discover that the
> > performance impact of smap hardening being enabled is often severe for
> > such benchmarks.
>
> Thanks for this valuable suggestion, I'll redo all the tests with nosmap...
2019 Jul 30
1
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...> > v2: https://patchwork.kernel.org/cover/10938743
> > v1: https://patchwork.kernel.org/cover/10885431
> >
> > Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK
> > support. As Michael suggested in the v1, I booted host and guest with 'nosmap'.
> >
> > A brief description of patches:
> > - Patches 1: limit the memory usage with an extra copy for small packets
> > - Patches 2+3: reduce the number of credit update messages sent to the
> > transmitter
> > - Patches 4+5: allow the ho...
2019 Jan 02
0
[RFC PATCH V3 0/5] Hi:
...instead of copy_user() friends since they had too many
> overheads like checks, spec barriers or even hardware feature
> toggling.
Will review, thanks!
One question that comes to mind is whether it's all about bypassing
stac/clac. Could you please include a performance comparison with
nosmap?
>
> Test shows about 24% improvement on TX PPS. It should benefit other
> cases as well.
>
> Changes from V2:
> - fix buggy range overlapping check
> - tear down MMU notifier during vhost ioctl to make sure invalidation
> request can read metadata userspace address and...
2019 Jan 07
0
[RFC PATCH V3 0/5] Hi:
...; > > overheads like checks, spec barriers or even hardware feature
> > > toggling.
> > Will review, thanks!
> > One question that comes to mind is whether it's all about bypassing
> > stac/clac. Could you please include a performance comparison with
> > nosmap?
> >
>
> On machine without SMAP (Sandy Bridge):
>
> Before: 4.8Mpps
>
> After: 5.2Mpps
OK so would you say it's really unsafe versus safe accesses?
Or would you say it's just better written code?
> On machine with SMAP (Broadwell):
>
> Before: 5.0M...
2019 Jan 07
0
[RFC PATCH V3 0/5] Hi:
...r even hardware feature
> > > > > toggling.
> > > > Will review, thanks!
> > > > One question that comes to mind is whether it's all about bypassing
> > > > stac/clac. Could you please include a performance comparison with
> > > > nosmap?
> > > >
> > > On machine without SMAP (Sandy Bridge):
> > >
> > > Before: 4.8Mpps
> > >
> > > After: 5.2Mpps
> > OK so would you say it's really unsafe versus safe accesses?
> > Or would you say it's just a better writ...
2019 Jan 07
0
[RFC PATCH V3 1/5] vhost: generalize adding used elem
...tes in a single MOV.
>
> Removing the special casing also eliminates a few hundred bytes of code
> as well as the need for hardware to predict count==1 vs. count>1.
>
Yes, I didn't measure it, but judging from the nosmap PPS results,
STAC/CLAC is pretty expensive when we do very small copies.
Thanks
2019 Apr 04
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...beneficial to add a column
with virtio-net+vhost-net performance.
This will both give us an idea about whether the vsock layer introduces
inefficiencies, and whether the virtio-net idea has merit.
One other comment: it makes sense to test with disabling smap
mitigations (boot host and guest with nosmap). No problem with also
testing the default smap path, but I think you will discover that the
performance impact of smap hardening being enabled is often severe for
such benchmarks.
> [1] https://www.spinics.net/lists/netdev/msg531783.html
> [2] https://github.com/stefano-garzarella/iperf/...
2019 Jul 29
0
[PATCH v4 0/5] vsock/virtio: optimizations to increase the throughput
...over/10970145
>
> v2: https://patchwork.kernel.org/cover/10938743
>
> v1: https://patchwork.kernel.org/cover/10885431
>
> Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK
> support. As Michael suggested in the v1, I booted host and guest with 'nosmap'.
>
> A brief description of patches:
> - Patches 1: limit the memory usage with an extra copy for small packets
> - Patches 2+3: reduce the number of credit update messages sent to the
> transmitter
> - Patches 4+5: allow the host to split packets on multipl...
2019 Jul 30
0
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...kernel.org/cover/10970145
> v2: https://patchwork.kernel.org/cover/10938743
> v1: https://patchwork.kernel.org/cover/10885431
>
> Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK
> support. As Michael suggested in the v1, I booted host and guest with 'nosmap'.
>
> A brief description of patches:
> - Patches 1: limit the memory usage with an extra copy for small packets
> - Patches 2+3: reduce the number of credit update messages sent to the
> transmitter
> - Patches 4+5: allow the host to split packets on multipl...
2019 Jul 30
7
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...17
v3: https://patchwork.kernel.org/cover/10970145
v2: https://patchwork.kernel.org/cover/10938743
v1: https://patchwork.kernel.org/cover/10885431
Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK
support. As Michael suggested in the v1, I booted host and guest with 'nosmap'.
A brief description of patches:
- Patches 1: limit the memory usage with an extra copy for small packets
- Patches 2+3: reduce the number of credit update messages sent to the
transmitter
- Patches 4+5: allow the host to split packets on multiple buffers and use...
2018 Dec 26
2
[PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address
...7.5Mpps. (Vmap gives 6Mpps - 6.1Mpps, it
> only bypasses SMAP for metadata).
>
> So it looks like for recent machines, SMAP becomes a pain point when the copy
> is short (e.g. 64B) at high PPS.
>
> Thanks
Thanks a lot for looking into this!
So first of all users can just boot with nosmap, right?
What's wrong with that? Yes it's not fine-grained but OTOH
it's easy to understand.
And I guess this confirms that if we are going to worry
about smap enabled, we need to look into packet copies
too, not just meta-data.
Vaguely could see a module option (off by default)
where...
2019 Apr 04
15
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
This series tries to increase the throughput of virtio-vsock with slight
changes:
- patch 1/4: reduces the number of credit update messages sent to the
transmitter
- patch 2/4: allows the host to split packets on multiple buffers,
in this way, we can remove the packet size limit to
VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
- patch 3/4: uses