Stefano Garzarella
2019-Apr-04 10:58 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
This series tries to increase the throughput of virtio-vsock with slight
changes:
 - patch 1/4: reduces the number of credit update messages sent to the
   transmitter
 - patch 2/4: allows the host to split packets on multiple buffers;
   in this way, we can remove the packet size limit of
   VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
 - patch 3/4: uses VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size
   allowed
 - patch 4/4: increases RX buffer size to 64 KiB (affects only host->guest)

RFC:
 - maybe patch 4 can be replaced with multiple queues with different
   buffer sizes or using EWMA to adapt the buffer size to the traffic

 - as Jason suggested in a previous thread [1], I'll evaluate using
   virtio-net as a transport, but I need to understand better how to
   interface with it, maybe introducing sk_buff in virtio-vsock.

Any suggestions?

Here are some benchmarks, step by step. I used iperf3 [2] modified with
VSOCK support:

host -> guest [Gbps]
pkt_size    before opt.   patch 1   patches 2+3   patch 4
  64            0.060       0.102       0.102       0.096
 256            0.22        0.40        0.40        0.36
 512            0.42        0.82        0.85        0.74
  1K            0.7         1.6         1.6         1.5
  2K            1.5         3.0         3.1         2.9
  4K            2.5         5.2         5.3         5.3
  8K            3.9         8.4         8.6         8.8
 16K            6.6        11.1        11.3        12.8
 32K            9.9        15.8        15.8        18.1
 64K           13.5        17.4        17.7        21.4
128K           17.9        19.0        19.0        23.6
256K           18.0        19.4        19.8        24.4
512K           18.4        19.6        20.1        25.3

guest -> host [Gbps]
pkt_size    before opt.   patch 1   patches 2+3
  64            0.088       0.100       0.101
 256            0.35        0.36        0.41
 512            0.70        0.74        0.73
  1K            1.1         1.3         1.3
  2K            2.4         2.4         2.6
  4K            4.3         4.3         4.5
  8K            7.3         7.4         7.6
 16K            9.2         9.6        11.1
 32K            8.3         8.9        18.1
 64K            8.3         8.9        25.4
128K            7.2         8.7        26.7
256K            7.7         8.4        24.9
512K            7.7         8.5        25.0

Thanks,
Stefano

[1] https://www.spinics.net/lists/netdev/msg531783.html
[2] https://github.com/stefano-garzarella/iperf/

Stefano Garzarella (4):
  vsock/virtio: reduce credit update messages
  vhost/vsock: split packets to send using multiple buffers
  vsock/virtio: change the maximum packet size allowed
  vsock/virtio: increase RX buffer size to 64 KiB

 drivers/vhost/vsock.c                   | 35 ++++++++++++++++++++-----
 include/linux/virtio_vsock.h            |  3 ++-
 net/vmw_vsock/virtio_transport_common.c | 18 +++++++++----
 3 files changed, 44 insertions(+), 12 deletions(-)

-- 
2.20.1
Stefano Garzarella
2019-Apr-04 10:58 UTC
[PATCH RFC 1/4] vsock/virtio: reduce credit update messages
In order to reduce the number of credit update messages, we send them
only when the space available seen by the transmitter is less than
VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.

Signed-off-by: Stefano Garzarella <sgarzare at redhat.com>
---
 include/linux/virtio_vsock.h            |  1 +
 net/vmw_vsock/virtio_transport_common.c | 14 +++++++++++---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index e223e2632edd..6d7a22cc20bf 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -37,6 +37,7 @@ struct virtio_vsock_sock {
         u32 tx_cnt;
         u32 buf_alloc;
         u32 peer_fwd_cnt;
+        u32 last_fwd_cnt;
         u32 peer_buf_alloc;

         /* Protected by rx_lock */
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 602715fc9a75..f32301d823f5 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -206,6 +206,7 @@ static void virtio_transport_dec_rx_pkt(struct virtio_vsock_sock *vvs,
 void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs,
                                  struct virtio_vsock_pkt *pkt)
 {
         spin_lock_bh(&vvs->tx_lock);
+        vvs->last_fwd_cnt = vvs->fwd_cnt;
         pkt->hdr.fwd_cnt = cpu_to_le32(vvs->fwd_cnt);
         pkt->hdr.buf_alloc = cpu_to_le32(vvs->buf_alloc);
         spin_unlock_bh(&vvs->tx_lock);
@@ -256,6 +257,7 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
         struct virtio_vsock_sock *vvs = vsk->trans;
         struct virtio_vsock_pkt *pkt;
         size_t bytes, total = 0;
+        s64 free_space;
         int err = -EFAULT;

         spin_lock_bh(&vvs->rx_lock);
@@ -288,9 +290,15 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
         }
         spin_unlock_bh(&vvs->rx_lock);

-        /* Send a credit pkt to peer */
-        virtio_transport_send_credit_update(vsk, VIRTIO_VSOCK_TYPE_STREAM,
-                                            NULL);
+        /* We send a credit update only when the space available seen
+         * by the transmitter is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE
+         */
+        free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);
+        if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) {
+                virtio_transport_send_credit_update(vsk,
+                                                    VIRTIO_VSOCK_TYPE_STREAM,
+                                                    NULL);
+        }

         return total;
-- 
2.20.1
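(For context, the new check is the receiver-side mirror of the credit accounting
the transmitter already performs with the peer_* counters. A rough sketch of both
formulas, using the field names from the struct above; the transmitter-side line
is paraphrased from memory, not quoted from this patch:)

        /* receiver (this patch): how much free space the transmitter still
         * sees, given what we advertised (buf_alloc) and what we consumed
         * but have not re-advertised yet (fwd_cnt - last_fwd_cnt)
         */
        free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);

        /* transmitter (existing code, roughly): how many bytes it may still
         * send before running out of credit granted by the peer
         */
        can_send = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);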
Stefano Garzarella
2019-Apr-04 10:58 UTC
[PATCH RFC 2/4] vhost/vsock: split packets to send using multiple buffers
If the packets to be sent to the guest are bigger than the available
buffer, we can split them across multiple buffers and fix the length
in the packet header.
This is safe since virtio-vsock supports only stream sockets.

Signed-off-by: Stefano Garzarella <sgarzare at redhat.com>
---
 drivers/vhost/vsock.c | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index bb5fc0e9fbc2..9951b7e661f6 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -94,7 +94,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
                 struct iov_iter iov_iter;
                 unsigned out, in;
                 size_t nbytes;
-                size_t len;
+                size_t iov_len, payload_len;
                 int head;

                 spin_lock_bh(&vsock->send_pkt_list_lock);
@@ -139,8 +139,18 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
                         break;
                 }

-                len = iov_length(&vq->iov[out], in);
-                iov_iter_init(&iov_iter, READ, &vq->iov[out], in, len);
+                payload_len = pkt->len - pkt->off;
+                iov_len = iov_length(&vq->iov[out], in);
+                iov_iter_init(&iov_iter, READ, &vq->iov[out], in, iov_len);
+
+                /* If the packet is greater than the space available in the
+                 * buffer, we split it using multiple buffers.
+                 */
+                if (payload_len > iov_len - sizeof(pkt->hdr))
+                        payload_len = iov_len - sizeof(pkt->hdr);
+
+                /* Set the correct length in the header */
+                pkt->hdr.len = cpu_to_le32(payload_len);

                 nbytes = copy_to_iter(&pkt->hdr, sizeof(pkt->hdr), &iov_iter);
                 if (nbytes != sizeof(pkt->hdr)) {
@@ -149,16 +159,29 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
                         break;
                 }

-                nbytes = copy_to_iter(pkt->buf, pkt->len, &iov_iter);
-                if (nbytes != pkt->len) {
+                nbytes = copy_to_iter(pkt->buf + pkt->off, payload_len,
+                                      &iov_iter);
+                if (nbytes != payload_len) {
                         virtio_transport_free_pkt(pkt);
                         vq_err(vq, "Faulted on copying pkt buf\n");
                         break;
                 }

-                vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+                vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
                 added = true;

+                pkt->off += payload_len;
+
+                /* If we didn't send all the payload we can requeue the packet
+                 * to send it with the next available buffer.
+                 */
+                if (pkt->off < pkt->len) {
+                        spin_lock_bh(&vsock->send_pkt_list_lock);
+                        list_add(&pkt->list, &vsock->send_pkt_list);
+                        spin_unlock_bh(&vsock->send_pkt_list_lock);
+                        continue;
+                }
+
                 if (pkt->reply) {
                         int val;
-- 
2.20.1
Stefano Garzarella
2019-Apr-04 10:58 UTC
[PATCH RFC 3/4] vsock/virtio: change the maximum packet size allowed
Now that we are able to split packets, we can avoid limiting
their size to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max
packet size.

Signed-off-by: Stefano Garzarella <sgarzare at redhat.com>
---
 net/vmw_vsock/virtio_transport_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index f32301d823f5..822e5d07a4ec 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -167,8 +167,8 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
         vvs = vsk->trans;

         /* we can send less than pkt_len bytes */
-        if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
-                pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+        if (pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
+                pkt_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;

         /* virtio_transport_get_credit might return less than pkt_len credit */
         pkt_len = virtio_transport_get_credit(vvs, pkt_len);
-- 
2.20.1
Stefano Garzarella
2019-Apr-04 10:58 UTC
[PATCH RFC 4/4] vsock/virtio: increase RX buffer size to 64 KiB
In order to increase host -> guest throughput with large packets,
we can use 64 KiB RX buffers.

Signed-off-by: Stefano Garzarella <sgarzare at redhat.com>
---
 include/linux/virtio_vsock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 6d7a22cc20bf..43cce304408e 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -10,7 +10,7 @@
 #define VIRTIO_VSOCK_DEFAULT_MIN_BUF_SIZE       128
 #define VIRTIO_VSOCK_DEFAULT_BUF_SIZE           (1024 * 256)
 #define VIRTIO_VSOCK_DEFAULT_MAX_BUF_SIZE       (1024 * 256)
-#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE        (1024 * 4)
+#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE        (1024 * 64)
 #define VIRTIO_VSOCK_MAX_BUF_SIZE               0xFFFFFFFFUL
 #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE           (1024 * 64)
-- 
2.20.1
Stefan Hajnoczi
2019-Apr-04 14:14 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 12:58:34PM +0200, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes:
>  - patch 1/4: reduces the number of credit update messages sent to the
>    transmitter
>  - patch 2/4: allows the host to split packets on multiple buffers;
>    in this way, we can remove the packet size limit of
>    VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
>  - patch 3/4: uses VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size
>    allowed
>  - patch 4/4: increases RX buffer size to 64 KiB (affects only host->guest)
>
> RFC:
>  - maybe patch 4 can be replaced with multiple queues with different
>    buffer sizes or using EWMA to adapt the buffer size to the traffic
>
>  - as Jason suggested in a previous thread [1], I'll evaluate using
>    virtio-net as a transport, but I need to understand better how to
>    interface with it, maybe introducing sk_buff in virtio-vsock.
>
> Any suggestions?

Great performance results, nice job!

Please include efficiency numbers (bandwidth / CPU utilization) in the
future. Due to the nature of these optimizations it's unlikely that
efficiency has decreased, so I'm not too worried about it this time.
Stefano Garzarella
2019-Apr-04 15:44 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 03:14:10PM +0100, Stefan Hajnoczi wrote:
> On Thu, Apr 04, 2019 at 12:58:34PM +0200, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes:
> >  - patch 1/4: reduces the number of credit update messages sent to the
> >    transmitter
> >  - patch 2/4: allows the host to split packets on multiple buffers;
> >    in this way, we can remove the packet size limit of
> >    VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
> >  - patch 3/4: uses VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size
> >    allowed
> >  - patch 4/4: increases RX buffer size to 64 KiB (affects only host->guest)
> >
> > RFC:
> >  - maybe patch 4 can be replaced with multiple queues with different
> >    buffer sizes or using EWMA to adapt the buffer size to the traffic
> >
> >  - as Jason suggested in a previous thread [1], I'll evaluate using
> >    virtio-net as a transport, but I need to understand better how to
> >    interface with it, maybe introducing sk_buff in virtio-vsock.
> >
> > Any suggestions?
>
> Great performance results, nice job!

:)

>
> Please include efficiency numbers (bandwidth / CPU utilization) in the
> future. Due to the nature of these optimizations it's unlikely that
> efficiency has decreased, so I'm not too worried about it this time.

Thanks for the suggestion! I'll also measure the efficiency for future
optimizations.

Cheers,
Stefano
Michael S. Tsirkin
2019-Apr-04 15:52 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 12:58:34PM +0200, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes:
>  - patch 1/4: reduces the number of credit update messages sent to the
>    transmitter
>  - patch 2/4: allows the host to split packets on multiple buffers;
>    in this way, we can remove the packet size limit of
>    VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
>  - patch 3/4: uses VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size
>    allowed
>  - patch 4/4: increases RX buffer size to 64 KiB (affects only host->guest)
>
> RFC:
>  - maybe patch 4 can be replaced with multiple queues with different
>    buffer sizes or using EWMA to adapt the buffer size to the traffic
>
>  - as Jason suggested in a previous thread [1], I'll evaluate using
>    virtio-net as a transport, but I need to understand better how to
>    interface with it, maybe introducing sk_buff in virtio-vsock.
>
> Any suggestions?
>
> Here are some benchmarks, step by step. I used iperf3 [2] modified with
> VSOCK support:
>
> host -> guest [Gbps]
> pkt_size    before opt.   patch 1   patches 2+3   patch 4
>   64            0.060       0.102       0.102       0.096
>  256            0.22        0.40        0.40        0.36
>  512            0.42        0.82        0.85        0.74
>   1K            0.7         1.6         1.6         1.5
>   2K            1.5         3.0         3.1         2.9
>   4K            2.5         5.2         5.3         5.3
>   8K            3.9         8.4         8.6         8.8
>  16K            6.6        11.1        11.3        12.8
>  32K            9.9        15.8        15.8        18.1
>  64K           13.5        17.4        17.7        21.4
> 128K           17.9        19.0        19.0        23.6
> 256K           18.0        19.4        19.8        24.4
> 512K           18.4        19.6        20.1        25.3
>
> guest -> host [Gbps]
> pkt_size    before opt.   patch 1   patches 2+3
>   64            0.088       0.100       0.101
>  256            0.35        0.36        0.41
>  512            0.70        0.74        0.73
>   1K            1.1         1.3         1.3
>   2K            2.4         2.4         2.6
>   4K            4.3         4.3         4.5
>   8K            7.3         7.4         7.6
>  16K            9.2         9.6        11.1
>  32K            8.3         8.9        18.1
>  64K            8.3         8.9        25.4
> 128K            7.2         8.7        26.7
> 256K            7.7         8.4        24.9
> 512K            7.7         8.5        25.0
>
> Thanks,
> Stefano

I simply love it that you have analysed the individual impact of
each patch! Great job!

For comparison's sake, it could be IMHO beneficial to add a column
with virtio-net+vhost-net performance.

This will both give us an idea about whether the vsock layer introduces
inefficiencies, and whether the virtio-net idea has merit.

One other comment: it makes sense to test with smap mitigations
disabled (boot host and guest with nosmap). No problem with also
testing the default smap path, but I think you will discover that the
performance impact of smap hardening being enabled is often severe for
such benchmarks.

> [1] https://www.spinics.net/lists/netdev/msg531783.html
> [2] https://github.com/stefano-garzarella/iperf/
>
> Stefano Garzarella (4):
>   vsock/virtio: reduce credit update messages
>   vhost/vsock: split packets to send using multiple buffers
>   vsock/virtio: change the maximum packet size allowed
>   vsock/virtio: increase RX buffer size to 64 KiB
>
>  drivers/vhost/vsock.c                   | 35 ++++++++++++++++++++-----
>  include/linux/virtio_vsock.h            |  3 ++-
>  net/vmw_vsock/virtio_transport_common.c | 18 +++++++++----
>  3 files changed, 44 insertions(+), 12 deletions(-)
>
> --
> 2.20.1
Stefano Garzarella
2019-Apr-04 16:47 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 11:52:46AM -0400, Michael S. Tsirkin wrote:
> I simply love it that you have analysed the individual impact of
> each patch! Great job!

Thanks! I followed Stefan's suggestions!

>
> For comparison's sake, it could be IMHO beneficial to add a column
> with virtio-net+vhost-net performance.
>
> This will both give us an idea about whether the vsock layer introduces
> inefficiencies, and whether the virtio-net idea has merit.
>

Sure, I already did TCP tests on virtio-net + vhost, starting qemu in
this way:
  $ qemu-system-x86_64 ... \
      -netdev tap,id=net0,vhost=on,ifname=tap0,script=no,downscript=no \
      -device virtio-net-pci,netdev=net0

I also did a test using TCP_NODELAY, just to be fair, because VSOCK
doesn't implement anything like it.
In both cases I set the MTU to the maximum allowed (65520).

                VSOCK                         TCP + virtio-net + vhost
         host -> guest [Gbps]                   host -> guest [Gbps]
pkt_size  before opt.  patch 1  patches 2+3  patch 4         TCP_NODELAY
  64        0.060       0.102      0.102      0.096     0.16    0.15
 256        0.22        0.40       0.40       0.36      0.32    0.57
 512        0.42        0.82       0.85       0.74      1.2     1.2
  1K        0.7         1.6        1.6        1.5       2.1     2.1
  2K        1.5         3.0        3.1        2.9       3.5     3.4
  4K        2.5         5.2        5.3        5.3       5.5     5.3
  8K        3.9         8.4        8.6        8.8       8.0     7.9
 16K        6.6        11.1       11.3       12.8       9.8    10.2
 32K        9.9        15.8       15.8       18.1      11.8    10.7
 64K       13.5        17.4       17.7       21.4      11.4    11.3
128K       17.9        19.0       19.0       23.6      11.2    11.0
256K       18.0        19.4       19.8       24.4      11.1    11.0
512K       18.4        19.6       20.1       25.3      10.1    10.7

For small packet sizes (< 4K) I think we should implement some kind of
batching/merging; that could come for free if we use virtio-net as a
transport.

Note: maybe I have something misconfigured, because TCP on virtio-net
doesn't exceed 11 Gbps in the host -> guest case.

                VSOCK                         TCP + virtio-net + vhost
         guest -> host [Gbps]                   guest -> host [Gbps]
pkt_size  before opt.  patch 1  patches 2+3                  TCP_NODELAY
  64        0.088       0.100      0.101                0.24    0.24
 256        0.35        0.36       0.41                 0.36    1.03
 512        0.70        0.74       0.73                 0.69    1.6
  1K        1.1         1.3        1.3                  1.1     3.0
  2K        2.4         2.4        2.6                  2.1     5.5
  4K        4.3         4.3        4.5                  3.8     8.8
  8K        7.3         7.4        7.6                  6.6    20.0
 16K        9.2         9.6       11.1                 12.3    29.4
 32K        8.3         8.9       18.1                 19.3    28.2
 64K        8.3         8.9       25.4                 20.6    28.7
128K        7.2         8.7       26.7                 23.1    27.9
256K        7.7         8.4       24.9                 28.5    29.4
512K        7.7         8.5       25.0                 28.3    29.3

For guest -> host I think the TCP_NODELAY test is the important one,
because TCP buffering increases the throughput a lot.

> One other comment: it makes sense to test with smap mitigations
> disabled (boot host and guest with nosmap). No problem with also
> testing the default smap path, but I think you will discover that the
> performance impact of smap hardening being enabled is often severe for
> such benchmarks.

Thanks for this valuable suggestion, I'll redo all the tests with nosmap!

Cheers,
Stefano
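(For reference, the TCP_NODELAY runs simply disable Nagle's algorithm on the
benchmark's TCP socket. A minimal sketch of the standard setsockopt() call
assumed here, not code taken from the modified iperf3:)

    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int one = 1;

    /* fd is an already-created TCP socket used for the data stream */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
            perror("setsockopt(TCP_NODELAY)");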
Stefan Hajnoczi
2019-Apr-04 19:15 UTC
[PATCH RFC 1/4] vsock/virtio: reduce credit update messages
On Thu, Apr 04, 2019 at 12:58:35PM +0200, Stefano Garzarella wrote:
> @@ -256,6 +257,7 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>          struct virtio_vsock_sock *vvs = vsk->trans;
>          struct virtio_vsock_pkt *pkt;
>          size_t bytes, total = 0;
> +        s64 free_space;

Why s64?  buf_alloc, fwd_cnt, and last_fwd_cnt are all u32.
fwd_cnt - last_fwd_cnt <= buf_alloc is always true.

>          int err = -EFAULT;
>
>          spin_lock_bh(&vvs->rx_lock);
> @@ -288,9 +290,15 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>          }
>          spin_unlock_bh(&vvs->rx_lock);
>
> -        /* Send a credit pkt to peer */
> -        virtio_transport_send_credit_update(vsk, VIRTIO_VSOCK_TYPE_STREAM,
> -                                            NULL);
> +        /* We send a credit update only when the space available seen
> +         * by the transmitter is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE
> +         */
> +        free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);

Locking?  These fields should be accessed under tx_lock.

> +        if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) {
> +                virtio_transport_send_credit_update(vsk,
> +                                                    VIRTIO_VSOCK_TYPE_STREAM,
> +                                                    NULL);
> +        }
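(An untested sketch of what the suggested change could look like: take the
snapshot under tx_lock, the lock that already covers buf_alloc and last_fwd_cnt
in virtio_transport_inc_tx_pkt(), and use u32 for the result.)

        u32 free_space;

        spin_lock_bh(&vvs->tx_lock);
        free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);
        spin_unlock_bh(&vvs->tx_lock);

        if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
                virtio_transport_send_credit_update(vsk,
                                                    VIRTIO_VSOCK_TYPE_STREAM,
                                                    NULL);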
Stefan Hajnoczi
2019-Apr-05 08:13 UTC
[PATCH RFC 2/4] vhost/vsock: split packets to send using multiple buffers
On Thu, Apr 04, 2019 at 12:58:36PM +0200, Stefano Garzarella wrote:
> @@ -139,8 +139,18 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>                          break;
>                  }
>
> -                len = iov_length(&vq->iov[out], in);
> -                iov_iter_init(&iov_iter, READ, &vq->iov[out], in, len);
> +                payload_len = pkt->len - pkt->off;
> +                iov_len = iov_length(&vq->iov[out], in);
> +                iov_iter_init(&iov_iter, READ, &vq->iov[out], in, iov_len);
> +
> +                /* If the packet is greater than the space available in the
> +                 * buffer, we split it using multiple buffers.
> +                 */
> +                if (payload_len > iov_len - sizeof(pkt->hdr))

Integer underflow.  iov_len is controlled by the guest and therefore
untrusted.  Please validate iov_len before assuming it's larger than
sizeof(pkt->hdr).

> -                vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
> +                vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
>                  added = true;
>
> +                pkt->off += payload_len;
> +
> +                /* If we didn't send all the payload we can requeue the packet
> +                 * to send it with the next available buffer.
> +                 */
> +                if (pkt->off < pkt->len) {
> +                        spin_lock_bh(&vsock->send_pkt_list_lock);
> +                        list_add(&pkt->list, &vsock->send_pkt_list);
> +                        spin_unlock_bh(&vsock->send_pkt_list_lock);
> +                        continue;

The virtio_transport_deliver_tap_pkt() call is skipped.  Packet capture
should see the exact packets that are delivered.  I think this patch
will present one large packet instead of several smaller packets that
were actually delivered.
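(One possible shape for that validation, as an untested sketch: reject
descriptors that cannot even hold the packet header before computing the split,
so that iov_len - sizeof(pkt->hdr) can no longer underflow.)

                iov_len = iov_length(&vq->iov[out], in);
                if (iov_len < sizeof(pkt->hdr)) {
                        virtio_transport_free_pkt(pkt);
                        vq_err(vq, "Buffer len [%zu] too small\n", iov_len);
                        break;
                }

                payload_len = pkt->len - pkt->off;
                if (payload_len > iov_len - sizeof(pkt->hdr))
                        payload_len = iov_len - sizeof(pkt->hdr);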
Stefan Hajnoczi
2019-Apr-05 08:24 UTC
[PATCH RFC 3/4] vsock/virtio: change the maximum packet size allowed
On Thu, Apr 04, 2019 at 12:58:37PM +0200, Stefano Garzarella wrote:
> Now that we are able to split packets, we can avoid limiting
> their size to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
> Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max
> packet size.
>
> Signed-off-by: Stefano Garzarella <sgarzare at redhat.com>
> ---
>  net/vmw_vsock/virtio_transport_common.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
> index f32301d823f5..822e5d07a4ec 100644
> --- a/net/vmw_vsock/virtio_transport_common.c
> +++ b/net/vmw_vsock/virtio_transport_common.c
> @@ -167,8 +167,8 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
>          vvs = vsk->trans;
>
>          /* we can send less than pkt_len bytes */
> -        if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
> -                pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
> +        if (pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
> +                pkt_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;

The next line limits pkt_len based on available credits:

  /* virtio_transport_get_credit might return less than pkt_len credit */
  pkt_len = virtio_transport_get_credit(vvs, pkt_len);

I think drivers/vhost/vsock.c:vhost_transport_do_send_pkt() now works
correctly even with pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.

The other ->send_pkt() callback is
net/vmw_vsock/virtio_transport.c:virtio_transport_send_pkt_work() and it
can already send any size packet.

Do you remember why VIRTIO_VSOCK_MAX_PKT_BUF_SIZE still needs to be the
limit?  I'm wondering if we can get rid of it now and just limit packets
to the available credits.

Stefan
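(For reference, virtio_transport_get_credit() clamps the request to the credit
advertised by the peer, roughly along these lines; a sketch from memory, not the
exact upstream code:)

  u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs, u32 credit)
  {
          u32 ret;

          spin_lock_bh(&vvs->tx_lock);
          ret = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
          if (ret > credit)
                  ret = credit;
          vvs->tx_cnt += ret;     /* reserve the credit we are about to use */
          spin_unlock_bh(&vvs->tx_lock);

          return ret;
  }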
Stefan Hajnoczi
2019-Apr-05 08:44 UTC
[PATCH RFC 4/4] vsock/virtio: increase RX buffer size to 64 KiB
On Thu, Apr 04, 2019 at 12:58:38PM +0200, Stefano Garzarella wrote:
> In order to increase host -> guest throughput with large packets,
> we can use 64 KiB RX buffers.
>
> Signed-off-by: Stefano Garzarella <sgarzare at redhat.com>
> ---
>  include/linux/virtio_vsock.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> index 6d7a22cc20bf..43cce304408e 100644
> --- a/include/linux/virtio_vsock.h
> +++ b/include/linux/virtio_vsock.h
> @@ -10,7 +10,7 @@
>  #define VIRTIO_VSOCK_DEFAULT_MIN_BUF_SIZE       128
>  #define VIRTIO_VSOCK_DEFAULT_BUF_SIZE           (1024 * 256)
>  #define VIRTIO_VSOCK_DEFAULT_MAX_BUF_SIZE       (1024 * 256)
> -#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE        (1024 * 4)
> +#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE        (1024 * 64)
>  #define VIRTIO_VSOCK_MAX_BUF_SIZE               0xFFFFFFFFUL
>  #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE           (1024 * 64)

This patch raises rx ring memory consumption from 128 * 4KB = 512KB to
128 * 64KB = 8MB.

Michael, Jason: Any advice regarding rx/tx ring sizes and buffer sizes?

Depending on rx ring size and the workload's packet size, different
values might be preferred.  This could become a tunable in the future.
It determines the size of the guest driver's rx buffers.
Jason Wang
2019-Apr-08 06:43 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On 2019/4/4 下午6:58, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes:
>  - patch 1/4: reduces the number of credit update messages sent to the
>    transmitter
>  - patch 2/4: allows the host to split packets on multiple buffers;
>    in this way, we can remove the packet size limit of
>    VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
>  - patch 3/4: uses VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size
>    allowed
>  - patch 4/4: increases RX buffer size to 64 KiB (affects only host->guest)
>
> RFC:
>  - maybe patch 4 can be replaced with multiple queues with different
>    buffer sizes or using EWMA to adapt the buffer size to the traffic

Or EWMA + mergeable rx buffer, but if we decide to unify the datapath
with virtio-net, we can reuse their code.

>
>  - as Jason suggested in a previous thread [1], I'll evaluate using
>    virtio-net as a transport, but I need to understand better how to
>    interface with it, maybe introducing sk_buff in virtio-vsock.
>
> Any suggestions?

My understanding is that this is not a must, but if it makes things
easier, we can do this.

Another thing that may help is to implement sendpage(), which will greatly
improve the performance.

Thanks

>
> Here are some benchmarks, step by step. I used iperf3 [2] modified with
> VSOCK support:
>
> host -> guest [Gbps]
> pkt_size    before opt.   patch 1   patches 2+3   patch 4
>   64            0.060       0.102       0.102       0.096
>  256            0.22        0.40        0.40        0.36
>  512            0.42        0.82        0.85        0.74
>   1K            0.7         1.6         1.6         1.5
>   2K            1.5         3.0         3.1         2.9
>   4K            2.5         5.2         5.3         5.3
>   8K            3.9         8.4         8.6         8.8
>  16K            6.6        11.1        11.3        12.8
>  32K            9.9        15.8        15.8        18.1
>  64K           13.5        17.4        17.7        21.4
> 128K           17.9        19.0        19.0        23.6
> 256K           18.0        19.4        19.8        24.4
> 512K           18.4        19.6        20.1        25.3
>
> guest -> host [Gbps]
> pkt_size    before opt.   patch 1   patches 2+3
>   64            0.088       0.100       0.101
>  256            0.35        0.36        0.41
>  512            0.70        0.74        0.73
>   1K            1.1         1.3         1.3
>   2K            2.4         2.4         2.6
>   4K            4.3         4.3         4.5
>   8K            7.3         7.4         7.6
>  16K            9.2         9.6        11.1
>  32K            8.3         8.9        18.1
>  64K            8.3         8.9        25.4
> 128K            7.2         8.7        26.7
> 256K            7.7         8.4        24.9
> 512K            7.7         8.5        25.0
>
> Thanks,
> Stefano
>
> [1] https://www.spinics.net/lists/netdev/msg531783.html
> [2] https://github.com/stefano-garzarella/iperf/
>
> Stefano Garzarella (4):
>   vsock/virtio: reduce credit update messages
>   vhost/vsock: split packets to send using multiple buffers
>   vsock/virtio: change the maximum packet size allowed
>   vsock/virtio: increase RX buffer size to 64 KiB
>
>  drivers/vhost/vsock.c                   | 35 ++++++++++++++++++++-----
>  include/linux/virtio_vsock.h            |  3 ++-
>  net/vmw_vsock/virtio_transport_common.c | 18 +++++++++----
>  3 files changed, 44 insertions(+), 12 deletions(-)
>
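(A purely illustrative sketch of the EWMA idea, using hypothetical fields and
variables that do not exist today: keep a moving average of received payload
sizes and derive the next rx buffer size from it.)

        /* hypothetical: updated for every received packet, weight 1/8 */
        rx_buf_ewma += ((int)pkt_len - (int)rx_buf_ewma) / 8;

        /* hypothetical: next rx buffer size, page-aligned and clamped to
         * [PAGE_SIZE, VIRTIO_VSOCK_MAX_PKT_BUF_SIZE]
         */
        buf_len = clamp_t(u32, PAGE_ALIGN(rx_buf_ewma), PAGE_SIZE,
                          VIRTIO_VSOCK_MAX_PKT_BUF_SIZE);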
Stefan Hajnoczi
2019-Apr-08 09:44 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Mon, Apr 08, 2019 at 02:43:28PM +0800, Jason Wang wrote:
> Another thing that may help is to implement sendpage(), which will greatly
> improve the performance.

I can't find documentation for ->sendpage().  Is the idea that you get a
struct page for the payload and can do zero-copy tx?  (And can userspace
still write to the page, invalidating checksums in the header?)

Stefan
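(For reference, the hook in question is the proto_ops callback, which, if memory
serves, has roughly this signature; a vsock transport would have to provide its
own implementation, and callers such as sendfile()/splice() reach it instead of
copying from an iovec:)

        /* from struct proto_ops in include/linux/net.h, roughly: */
        ssize_t (*sendpage)(struct socket *sock, struct page *page,
                            int offset, size_t size, int flags);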
Stefano Garzarella
2019-Apr-09 09:13 UTC
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Mon, Apr 08, 2019 at 02:43:28PM +0800, Jason Wang wrote:
>
> On 2019/4/4 下午6:58, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes:
> >  - patch 1/4: reduces the number of credit update messages sent to the
> >    transmitter
> >  - patch 2/4: allows the host to split packets on multiple buffers;
> >    in this way, we can remove the packet size limit of
> >    VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
> >  - patch 3/4: uses VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size
> >    allowed
> >  - patch 4/4: increases RX buffer size to 64 KiB (affects only host->guest)
> >
> > RFC:
> >  - maybe patch 4 can be replaced with multiple queues with different
> >    buffer sizes or using EWMA to adapt the buffer size to the traffic
>
> Or EWMA + mergeable rx buffer, but if we decide to unify the datapath
> with virtio-net, we can reuse their code.
>
>
> >
> >  - as Jason suggested in a previous thread [1], I'll evaluate using
> >    virtio-net as a transport, but I need to understand better how to
> >    interface with it, maybe introducing sk_buff in virtio-vsock.
> >
> > Any suggestions?
>
> My understanding is that this is not a must, but if it makes things
> easier, we can do this.

Hopefully it will simplify maintainability and avoid duplicated code.

>
> Another thing that may help is to implement sendpage(), which will greatly
> improve the performance.

Thanks for your suggestions! I'll try to implement sendpage() in VSOCK
to measure the improvement.

Cheers,
Stefano