jiangyiwen
2018-Nov-06 02:17 UTC
[PATCH 0/5] VSOCK: support mergeable rx buffer in vhost-vsock
On 2018/11/5 17:21, Jason Wang wrote:
> On 2018/11/5 3:43 PM, jiangyiwen wrote:
>> Now vsock only supports sending/receiving small packets, so it can't
>> achieve high performance. As previously discussed with Jason Wang, I
>> revisited the mergeable rx buffer idea from vhost-net and implemented
>> it in vhost-vsock; it allows a big packet to be scattered into
>> different buffers and improves performance noticeably.
>>
>> I wrote a tool to test vhost-vsock performance, mainly sending big
>> packets (64K) in both guest->host and host->guest directions. The
>> results are as follows:
>>
>> Before:
>>                Single socket   Multiple sockets (max bandwidth)
>> Guest->Host    ~400MB/s        ~480MB/s
>> Host->Guest    ~1450MB/s       ~1600MB/s
>>
>> After:
>>                Single socket   Multiple sockets (max bandwidth)
>> Guest->Host    ~1700MB/s       ~2900MB/s
>> Host->Guest    ~1700MB/s       ~2900MB/s
>>
>> From the test results, the performance is improved noticeably, and
>> guest memory is not wasted.
>
> Hi:
>
> Thanks for the patches, and the numbers are really impressive.
>
> But instead of duplicating code between sock and net, I was considering
> using virtio-net as a transport for vsock. Then we would have all the
> existing features like batching, mergeable rx buffers, and multiqueue.
> Want to consider this idea? Thoughts?
>

Hi Jason,

I am not very familiar with virtio-net, so I am afraid I can't give much
useful advice. I do have several questions:

1. If we use virtio-net as a transport, the guest should see a
   virtio-net device instead of a virtio-vsock device, right? Is vsock
   then only a transport between the socket layer and the net_device?
   Users would still use the AF_VSOCK type to create sockets, right?

2. Has work on this idea already started, and if so, what is the
   current progress?

3. What is Stefan's opinion?

Thanks,
Yiwen.

>>
>> ---
>>
>> Yiwen Jiang (5):
>>   VSOCK: support fill mergeable rx buffer in guest
>>   VSOCK: support fill data to mergeable rx buffer in host
>>   VSOCK: support receive mergeable rx buffer in guest
>>   VSOCK: modify default rx buf size to improve performance
>>   VSOCK: batch sending rx buffer to increase bandwidth
>>
>>  drivers/vhost/vsock.c                   | 135 +++++++++++++++++++++++------
>>  include/linux/virtio_vsock.h            |  15 +++-
>>  include/uapi/linux/virtio_vsock.h       |   5 ++
>>  net/vmw_vsock/virtio_transport.c        | 147 ++++++++++++++++++++++++++------
>>  net/vmw_vsock/virtio_transport_common.c |  59 +++++++++++--
>>  5 files changed, 300 insertions(+), 61 deletions(-)
>>
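[Editor's note: for readers who have not seen vhost-net's mergeable rx
buffers, the following is a minimal, self-contained sketch of the scheme
the cover letter describes: a large packet is scattered across several
fixed-size rx buffers, and the number of buffers used is written into a
small header at the front of the first one. The struct name, buffer
size, and helper below are assumptions for illustration, not the layout
the patches actually define.]

/*
 * Hedged sketch of the mergeable rx buffer idea borrowed from vhost-net.
 * All names and sizes are illustrative, not taken from the patches.
 */
#include <stdint.h>
#include <string.h>

#define RX_BUF_SIZE 4096u

struct mrg_rxbuf_hdr {
	uint16_t num_buffers;	/* rx buffers this packet spans */
};

/* Host side: scatter `len` bytes of payload across rx buffers. */
static uint16_t fill_mergeable(uint8_t bufs[][RX_BUF_SIZE],
			       const uint8_t *data, size_t len)
{
	struct mrg_rxbuf_hdr hdr;
	uint16_t n = 0;

	/* The first buffer carries the header plus some payload. */
	size_t chunk = len < RX_BUF_SIZE - sizeof(hdr) ?
		       len : RX_BUF_SIZE - sizeof(hdr);
	memcpy(bufs[n] + sizeof(hdr), data, chunk);
	data += chunk;
	len -= chunk;
	n++;

	/* Remaining buffers are pure payload; a real implementation
	 * would bounds-check against the number of posted buffers. */
	while (len) {
		chunk = len < RX_BUF_SIZE ? len : RX_BUF_SIZE;
		memcpy(bufs[n], data, chunk);
		data += chunk;
		len -= chunk;
		n++;
	}

	/* Patch the final count into the first buffer's header. */
	hdr.num_buffers = n;
	memcpy(bufs[0], &hdr, sizeof(hdr));
	return n;
}

[The receive side simply reads num_buffers from the first buffer and
drains that many buffers to reassemble the packet, which is why guest
memory is no longer wasted on worst-case-sized buffers.]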
Jason Wang
2018-Nov-06 02:41 UTC
[PATCH 0/5] VSOCK: support mergeable rx buffer in vhost-vsock
On 2018/11/6 10:17 AM, jiangyiwen wrote:
> On 2018/11/5 17:21, Jason Wang wrote:
>> On 2018/11/5 3:43 PM, jiangyiwen wrote:
>>> Now vsock only supports sending/receiving small packets, so it can't
>>> achieve high performance. As previously discussed with Jason Wang, I
>>> revisited the mergeable rx buffer idea from vhost-net and implemented
>>> it in vhost-vsock; it allows a big packet to be scattered into
>>> different buffers and improves performance noticeably.
>>>
>>> I wrote a tool to test vhost-vsock performance, mainly sending big
>>> packets (64K) in both guest->host and host->guest directions. The
>>> results are as follows:
>>>
>>> Before:
>>>                Single socket   Multiple sockets (max bandwidth)
>>> Guest->Host    ~400MB/s        ~480MB/s
>>> Host->Guest    ~1450MB/s       ~1600MB/s
>>>
>>> After:
>>>                Single socket   Multiple sockets (max bandwidth)
>>> Guest->Host    ~1700MB/s       ~2900MB/s
>>> Host->Guest    ~1700MB/s       ~2900MB/s
>>>
>>> From the test results, the performance is improved noticeably, and
>>> guest memory is not wasted.
>>
>> Hi:
>>
>> Thanks for the patches, and the numbers are really impressive.
>>
>> But instead of duplicating code between sock and net, I was considering
>> using virtio-net as a transport for vsock. Then we would have all the
>> existing features like batching, mergeable rx buffers, and multiqueue.
>> Want to consider this idea? Thoughts?
>>
> Hi Jason,
>
> I am not very familiar with virtio-net, so I am afraid I can't give much
> useful advice. I do have several questions:
>
> 1. If we use virtio-net as a transport, the guest should see a
>    virtio-net device instead of a virtio-vsock device, right? Is vsock
>    then only a transport between the socket layer and the net_device?
>    Users would still use the AF_VSOCK type to create sockets, right?

Well, there are many choices. What you need is just to keep the socket
API and hide the implementation. For example, you could keep the vsock
device in the guest and switch to vhost-net on the host. We would
probably need a new feature bit or header field to let vhost know we
are passing vsock packets, and vhost-net could then forward the packets
to the vsock core on the host.

> 2. Has work on this idea already started, and if so, what is the
>    current progress?

Not yet started. I just want to hear what the community thinks. If this
sounds good, would you be interested in implementing it?

> 3. What is Stefan's opinion?

I talked with Stefan a little about this during KVM Forum, and I think
he tends to agree with the idea. Anyway, let's wait for his reply.

Thanks

> Thanks,
> Yiwen.
>
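[Editor's note: the feature bit and per-packet marker Jason mentions are
only a design direction at this point. The sketch below shows one way
such a hand-off could look on the host side. The bit number, flag value,
struct, and both deliver functions are made up for illustration; nothing
here exists in the virtio spec or in vhost-net.]

/*
 * Hypothetical sketch: a negotiated "this device may carry vsock"
 * feature bit plus a per-frame flag, so the host can route vsock
 * payloads to the vsock core instead of the network stack.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define VIRTIO_NET_F_VSOCK	52	/* hypothetical feature bit */
#define VIRTIO_NET_HDR_F_VSOCK	0x80	/* hypothetical per-frame flag */

struct vnet_hdr_lite {
	uint8_t flags;		/* stand-in for virtio_net_hdr.flags */
};

/* Hypothetical hand-off points on the host. */
void vsock_core_deliver(void *frame, size_t len);
void netif_deliver(void *frame, size_t len);

/* Host-side dispatch: vsock frames bypass the netdev rx path. */
static void dispatch_rx(const struct vnet_hdr_lite *hdr,
			void *frame, size_t len, bool vsock_negotiated)
{
	if (vsock_negotiated && (hdr->flags & VIRTIO_NET_HDR_F_VSOCK))
		vsock_core_deliver(frame, len);
	else
		netif_deliver(frame, len);
}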
jiangyiwen
2018-Nov-06 03:17 UTC
[PATCH 0/5] VSOCK: support mergeable rx buffer in vhost-vsock
On 2018/11/6 10:41, Jason Wang wrote:
> On 2018/11/6 10:17 AM, jiangyiwen wrote:
>> On 2018/11/5 17:21, Jason Wang wrote:
>>> On 2018/11/5 3:43 PM, jiangyiwen wrote:
>>>> Now vsock only supports sending/receiving small packets, so it can't
>>>> achieve high performance. As previously discussed with Jason Wang, I
>>>> revisited the mergeable rx buffer idea from vhost-net and implemented
>>>> it in vhost-vsock; it allows a big packet to be scattered into
>>>> different buffers and improves performance noticeably.
>>>>
>>>> I wrote a tool to test vhost-vsock performance, mainly sending big
>>>> packets (64K) in both guest->host and host->guest directions. The
>>>> results are as follows:
>>>>
>>>> Before:
>>>>                Single socket   Multiple sockets (max bandwidth)
>>>> Guest->Host    ~400MB/s        ~480MB/s
>>>> Host->Guest    ~1450MB/s       ~1600MB/s
>>>>
>>>> After:
>>>>                Single socket   Multiple sockets (max bandwidth)
>>>> Guest->Host    ~1700MB/s       ~2900MB/s
>>>> Host->Guest    ~1700MB/s       ~2900MB/s
>>>>
>>>> From the test results, the performance is improved noticeably, and
>>>> guest memory is not wasted.
>>>
>>> Hi:
>>>
>>> Thanks for the patches, and the numbers are really impressive.
>>>
>>> But instead of duplicating code between sock and net, I was considering
>>> using virtio-net as a transport for vsock. Then we would have all the
>>> existing features like batching, mergeable rx buffers, and multiqueue.
>>> Want to consider this idea? Thoughts?
>>>
>> Hi Jason,
>>
>> I am not very familiar with virtio-net, so I am afraid I can't give much
>> useful advice. I do have several questions:
>>
>> 1. If we use virtio-net as a transport, the guest should see a
>>    virtio-net device instead of a virtio-vsock device, right? Is vsock
>>    then only a transport between the socket layer and the net_device?
>>    Users would still use the AF_VSOCK type to create sockets, right?
>
> Well, there are many choices. What you need is just to keep the socket
> API and hide the implementation. For example, you could keep the vsock
> device in the guest and switch to vhost-net on the host. We would
> probably need a new feature bit or header field to let vhost know we
> are passing vsock packets, and vhost-net could then forward the packets
> to the vsock core on the host.
>
>> 2. Has work on this idea already started, and if so, what is the
>>    current progress?
>
> Not yet started. I just want to hear what the community thinks. If this
> sounds good, would you be interested in implementing it?
>
>> 3. What is Stefan's opinion?
>
> I talked with Stefan a little about this during KVM Forum, and I think
> he tends to agree with the idea. Anyway, let's wait for his reply.
>
> Thanks
>

Hi Jason,

Thanks for your reply. What you want is to avoid duplicating code while
still using the existing virtio-net features.

Yes, if this sounds good and most people get behind the idea, I would be
very happy to implement it.

In addition, I hope you can review these patches before the new idea is
implemented; after all, they improve performance today. :-)

Thanks,
Yiwen.

>> Thanks,
>> Yiwen.
>>