Stefano Garzarella
2019-Apr-05 07:49 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 02:04:10PM -0400, Michael S. Tsirkin wrote:
> On Thu, Apr 04, 2019 at 06:47:15PM +0200, Stefano Garzarella wrote:
> > On Thu, Apr 04, 2019 at 11:52:46AM -0400, Michael S. Tsirkin wrote:
> > > I simply love it that you have analysed the individual impact of
> > > each patch! Great job!
> >
> > Thanks! I followed Stefan's suggestions!
> >
> > > For comparison's sake, it could IMHO be beneficial to add a column
> > > with virtio-net+vhost-net performance.
> > >
> > > This will both give us an idea about whether the vsock layer introduces
> > > inefficiencies, and whether the virtio-net idea has merit.
> >
> > Sure, I already did TCP tests on virtio-net + vhost, starting qemu in
> > this way:
> >   $ qemu-system-x86_64 ... \
> >       -netdev tap,id=net0,vhost=on,ifname=tap0,script=no,downscript=no \
> >       -device virtio-net-pci,netdev=net0
> >
> > I also ran a test using TCP_NODELAY, just to be fair, because VSOCK
> > doesn't implement something like this.
>
> Why not?

I think because VSOCK was originally designed to be simple and
low-latency, but of course we can introduce something like that.

The current implementation copies the buffer directly from user space
into a virtio_vsock_pkt and enqueues it for transmission.

Maybe we can introduce a per-socket buffer where we accumulate bytes and
send them when the buffer is full or when a timer fires. We could also
add a VSOCK_NODELAY option (maybe reusing the value of TCP_NODELAY for
compatibility) to send the buffer immediately for low-latency use cases.

What do you think?

Thanks,
Stefano
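To make the proposal concrete, below is a minimal C sketch of the
accumulate-then-flush behavior described above. All names here
(vsock_send_buf, vsock_buf_send, vsock_enqueue_pkt, the VSOCK_BUF_SIZE
threshold, and the nodelay flag standing in for the proposed
VSOCK_NODELAY) are hypothetical illustrations, not existing kernel APIs:

    /*
     * Hypothetical sketch of per-socket send batching for vsock.
     * None of these names exist in the kernel today; this only
     * illustrates the "accumulate until full or a timer fires" idea.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define VSOCK_BUF_SIZE 4096   /* assumed staging-buffer size */

    struct vsock_send_buf {
            unsigned char data[VSOCK_BUF_SIZE];
            size_t len;     /* bytes accumulated so far */
            bool nodelay;   /* stand-in for the proposed VSOCK_NODELAY */
    };

    /* Placeholder: the real code would build a virtio_vsock_pkt here
     * and enqueue it for transmission. */
    static void vsock_enqueue_pkt(const unsigned char *buf, size_t len)
    {
            (void)buf;
            (void)len;
    }

    /* Flush accumulated bytes as one packet; also what the per-socket
     * timer handler would call when it fires. */
    static void vsock_buf_flush(struct vsock_send_buf *sb)
    {
            if (sb->len > 0) {
                    vsock_enqueue_pkt(sb->data, sb->len);
                    sb->len = 0;
            }
    }

    /* Queue user bytes: send immediately when nodelay is set, otherwise
     * accumulate and flush only when the staging buffer is full. */
    static void vsock_buf_send(struct vsock_send_buf *sb,
                               const unsigned char *data, size_t len)
    {
            if (sb->nodelay) {
                    vsock_buf_flush(sb);            /* keep byte ordering */
                    vsock_enqueue_pkt(data, len);   /* low-latency path */
                    return;
            }

            while (len > 0) {
                    size_t room = VSOCK_BUF_SIZE - sb->len;
                    size_t chunk = len < room ? len : room;

                    memcpy(sb->data + sb->len, data, chunk);
                    sb->len += chunk;
                    data += chunk;
                    len -= chunk;

                    if (sb->len == VSOCK_BUF_SIZE)
                            vsock_buf_flush(sb);    /* full: send one big pkt */
            }
    }

This mirrors the classic Nagle trade-off: the batching path improves
throughput by amortizing per-packet overhead, while the nodelay path
preserves the low-latency behavior vsock was designed for.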
Stefan Hajnoczi
2019-Apr-08 09:23 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Fri, Apr 05, 2019 at 09:49:17AM +0200, Stefano Garzarella wrote:
> On Thu, Apr 04, 2019 at 02:04:10PM -0400, Michael S. Tsirkin wrote:
> > On Thu, Apr 04, 2019 at 06:47:15PM +0200, Stefano Garzarella wrote:
> > > On Thu, Apr 04, 2019 at 11:52:46AM -0400, Michael S. Tsirkin wrote:
> > > > I simply love it that you have analysed the individual impact of
> > > > each patch! Great job!
> > >
> > > Thanks! I followed Stefan's suggestions!
> > >
> > > > For comparison's sake, it could IMHO be beneficial to add a column
> > > > with virtio-net+vhost-net performance.
> > > >
> > > > This will both give us an idea about whether the vsock layer introduces
> > > > inefficiencies, and whether the virtio-net idea has merit.
> > >
> > > Sure, I already did TCP tests on virtio-net + vhost, starting qemu in
> > > this way:
> > >   $ qemu-system-x86_64 ... \
> > >       -netdev tap,id=net0,vhost=on,ifname=tap0,script=no,downscript=no \
> > >       -device virtio-net-pci,netdev=net0
> > >
> > > I also ran a test using TCP_NODELAY, just to be fair, because VSOCK
> > > doesn't implement something like this.
> >
> > Why not?
>
> I think because VSOCK was originally designed to be simple and
> low-latency, but of course we can introduce something like that.
>
> The current implementation copies the buffer directly from user space
> into a virtio_vsock_pkt and enqueues it for transmission.
>
> Maybe we can introduce a per-socket buffer where we accumulate bytes and
> send them when the buffer is full or when a timer fires. We could also
> add a VSOCK_NODELAY option (maybe reusing the value of TCP_NODELAY for
> compatibility) to send the buffer immediately for low-latency use cases.
>
> What do you think?

Today virtio-vsock implements a 1:1 sendmsg():packet relationship because
it's simple. But there's no need for the guest to enqueue multiple
VIRTIO_VSOCK_OP_RW packets when a single large packet could combine all
payloads for a connection.

This is not the same as TCP_NODELAY, but it is related. I think it's
worth exploring both TCP_NODELAY and send_pkt_list merging. Hopefully it
won't make the code much more complicated.

Stefan
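As a rough illustration of the send_pkt_list merging Stefan describes,
the sketch below appends a new payload to the tail packet of the list
when it belongs to the same connection and still fits. The types and
names (vsock_pkt, vsock_try_merge, VSOCK_MAX_PKT_LEN) are hypothetical
simplifications, not the actual struct virtio_vsock_pkt API:

    /*
     * Hypothetical sketch of merging consecutive VIRTIO_VSOCK_OP_RW
     * payloads into one packet on the send list. vsock_pkt is a
     * simplified stand-in for struct virtio_vsock_pkt.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define VSOCK_MAX_PKT_LEN 65536   /* assumed per-packet payload cap */

    struct vsock_pkt {
            uint32_t src_port;
            uint32_t dst_port;        /* together identify the connection */
            unsigned char *buf;
            size_t len;
            struct vsock_pkt *next;
    };

    /*
     * Try to absorb a new payload into the tail packet of the send
     * list instead of enqueuing a separate packet. Returns true on
     * success; on false the caller falls back to a new packet.
     */
    static bool vsock_try_merge(struct vsock_pkt *tail,
                                uint32_t src_port, uint32_t dst_port,
                                const unsigned char *data, size_t len)
    {
            unsigned char *p;

            /* Only merge payloads that belong to the same connection
             * and still fit under the per-packet limit. */
            if (!tail ||
                tail->src_port != src_port || tail->dst_port != dst_port ||
                tail->len + len > VSOCK_MAX_PKT_LEN)
                    return false;

            p = realloc(tail->buf, tail->len + len);
            if (!p)
                    return false;

            memcpy(p + tail->len, data, len);
            tail->buf = p;
            tail->len += len;
            return true;
    }

A real implementation would also need to respect virtio-vsock's
credit-based flow control and hold the send-queue lock while appending,
which is presumably where the extra complexity Stefan anticipates would
come from.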