Stefano Garzarella
2019-Oct-11 13:40 UTC
[PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
On Sun, Sep 1, 2019 at 8:56 AM Michael S. Tsirkin <mst at redhat.com> wrote:
> On Fri, Aug 30, 2019 at 11:40:59AM +0200, Stefano Garzarella wrote:
> > On Mon, Jul 29, 2019 at 10:04:29AM -0400, Michael S. Tsirkin wrote:
> > > On Wed, Jul 17, 2019 at 01:30:26PM +0200, Stefano Garzarella wrote:
> > > > Since virtio-vsock was introduced, the buffers filled by the host
> > > > and pushed to the guest using the vring are directly queued in
> > > > a per-socket list. These buffers are preallocated by the guest
> > > > with a fixed size (4 KB).
> > > >
> > > > The maximum amount of memory used by each socket should be
> > > > controlled by the credit mechanism.
> > > > The default credit available per-socket is 256 KB, but if we use
> > > > only 1 byte per packet, the guest can queue up to 262144 of these 4 KB
> > > > buffers, using up to 1 GB of memory per-socket. In addition, the
> > > > guest will continue to fill the vring with new 4 KB free buffers
> > > > to avoid starvation of other sockets.
> > > >
> > > > This patch mitigates this issue by copying the payload of small
> > > > packets (< 128 bytes) into the buffer of the last packet queued, in
> > > > order to avoid wasting memory.
> > > >
> > > > Reviewed-by: Stefan Hajnoczi <stefanha at redhat.com>
> > > > Signed-off-by: Stefano Garzarella <sgarzare at redhat.com>
> > >
> > > This is good enough for net-next, but for net I think we
> > > should figure out how to address the issue completely.
> > > Can we make the accounting precise? What happens to
> > > performance if we do?
> > >
> >
> > Since I'm back from holidays, I'm restarting this thread to figure out
> > how to address the issue completely.
> >
> > I did a better analysis of the credit mechanism that we implemented in
> > virtio-vsock to get a clearer view, and I'd like to share it with you:
> >
> > This issue affects only the "host->guest" path. In this case, when the
> > host wants to send a packet to the guest, it uses a "free" buffer
> > allocated by the guest (4 KB).
> > The "free" buffers available for the host are shared between all
> > sockets, while the credit mechanism is per-socket, I think to
> > avoid the starvation of other sockets.
> > The guest re-fills the "free" queue when the available buffers are
> > less than half.
> >
> > Each peer has these variables in the per-socket state:
> >
> > /* local vars */
> > buf_alloc      /* max bytes usable by this socket
> >                   [exposed to the other peer] */
> > fwd_cnt        /* increased when an RX packet is consumed by the
> >                   user space [exposed to the other peer] */
> > tx_cnt         /* increased when a TX packet is sent to the other peer */
> >
> > /* remote vars */
> > peer_buf_alloc /* peer's buf_alloc */
> > peer_fwd_cnt   /* peer's fwd_cnt */
> >
> > When a peer sends a packet, it increases 'tx_cnt'; when the
> > receiver consumes the packet (copies it to the user-space buffer), it
> > increases 'fwd_cnt'.
> > Note: increments are made considering the payload length and not the
> > buffer length.
> >
> > The values of 'buf_alloc' and 'fwd_cnt' are sent to the other peer in
> > all packet headers or with an explicit CREDIT_UPDATE packet.
> >
> > The local 'buf_alloc' value can be modified by the user space using
> > setsockopt() with optname=SO_VM_SOCKETS_BUFFER_SIZE.
> >
> > Before sending a packet, the peer checks the space available:
> >   credit_available = peer_buf_alloc - (tx_cnt - peer_fwd_cnt)
> > and it will send up to credit_available bytes to the other peer.
> >
> > Possible solutions considering Michael's advice:
> > 1. Use the buffer length instead of the payload length when we increment
> >    the counters:
> >    - This approach will precisely account for the memory used per socket.
> >    - This requires changes in both guest and host.
> >    - It is not compatible with old drivers, so a feature should be negotiated.
> > 2. Decrease the advertised 'buf_alloc', taking account of bytes queued in
> >    the socket queue but not used (e.g. 256 bytes used of the 4K available in
> >    the buffer):
> >    - pkt->hdr.buf_alloc = buf_alloc - bytes_not_used.
> >    - This should also be compatible with old drivers.
> >
> > Maybe the second is less invasive, but will it be too tricky?
> > Any other advice or suggestions?
> >
> > Thanks in advance,
> > Stefano
>
> OK let me try to clarify. The idea is this:
>
> Let's say we queue a buffer of 4K, and we copy if len < 128 bytes. This
> means that in the worst case (128-byte packets), each byte of credit in
> the socket uses up 4K/128 = 16 bytes of kernel memory. In fact we need
> to also account for the virtio_vsock_pkt since I think it's kept around
> until userspace consumes it.
>
> Thus given X buf alloc allowed in the socket, we should publish X/16
> credits to the other side. This will ensure the other side does not send
> more than X/16 bytes for a given socket and thus we won't need to
> allocate more than X bytes to hold the data.
>
> We can play with the copy break value to tweak this.
>

Hi Michael,
sorry for the long silence, but I focused on multi-transport.

Before implementing your idea, I tried to do some calculations and
took a closer look at our credit mechanism:

  buf_alloc = 256 KB (default, tunable through setsockopt)
  sizeof(struct virtio_vsock_pkt) = 128

- guest (we use preallocated 4 KB buffers to receive packets, copying
  small packets - < 128 bytes)
    worst_case = 129
    buf_size = 4 KB
    credit2mem = (buf_size + sizeof(struct virtio_vsock_pkt)) / worst_case = 32

    credit_published = buf_alloc / credit2mem = ~8 KB
    Space for just 2 full packets (4 KB)

- host (we copy packets from the vring, allocating the space for the payload)
    worst_case = 1
    buf_size = 1
    credit2mem = (buf_size + sizeof(struct virtio_vsock_pkt)) / worst_case = 129

    credit_published = buf_alloc / credit2mem = ~2 KB
    Less than a full packet (the guest can now send up to 64 KB with a single
    packet, so it will be limited to 2 KB)

Current memory consumption in the worst case if the RX queue is full:
- guest
    mem = (buf_alloc / worst_case) *
          (buf_size + sizeof(struct virtio_vsock_pkt)) = ~8 MB

- host
    mem = (buf_alloc / worst_case) *
          (buf_size + sizeof(struct virtio_vsock_pkt)) = ~32 MB

I think that the performance with big packets will be affected,
but I still have to try.

Another approach that I want to explore is to play with the buf_alloc
published to the peer.

One thing that's not clear to me yet is the meaning of
SO_VM_SOCKETS_BUFFER_SIZE:
- max amount of memory used in the RX queue
- max amount of payload bytes in the RX queue (without the overhead of
  struct virtio_vsock_pkt + the preallocated buffer)

From 'include/uapi/linux/vm_sockets.h':

  /* Option name for STREAM socket buffer size. Use as the option name in
   * setsockopt(3) or getsockopt(3) to set or get an unsigned long long that
   * specifies the size of the buffer underlying a vSockets STREAM socket.
   * Value is clamped to the MIN and MAX.
   */

  #define SO_VM_SOCKETS_BUFFER_SIZE 0

Regardless, I think we need to limit memory consumption in some way.
I'll check the implementation of other transports, to understand it better.

I'll keep you updated!

Thanks,
Stefano
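For reference, a standalone sketch (not kernel code) that reproduces the
worst-case arithmetic above, assuming the constants quoted in the message
(buf_alloc = 256 KB, a 128-byte struct virtio_vsock_pkt, 4 KB guest RX
buffers, 128-byte copy break):

/* Sketch only: reproduces the credit2mem / credit_published / worst-case
 * memory figures discussed in the thread. */
#include <stdio.h>

int main(void)
{
	const unsigned long buf_alloc = 256 * 1024;
	const unsigned long pkt_overhead = 128; /* sizeof(struct virtio_vsock_pkt) */

	/* guest: a 129-byte payload (just above the copy break) pins a whole
	 * preallocated 4 KB buffer plus the pkt struct */
	unsigned long g_worst = 129, g_buf = 4096;
	unsigned long g_credit2mem = (g_buf + pkt_overhead) / g_worst;  /* 32 */

	/* host: the payload buffer is allocated exactly, so a 1-byte payload
	 * costs 1 byte plus the pkt struct */
	unsigned long h_worst = 1, h_buf = 1;
	unsigned long h_credit2mem = (h_buf + pkt_overhead) / h_worst;  /* 129 */

	printf("guest: credit2mem=%lu credit_published=%lu B worst_mem=%lu MB\n",
	       g_credit2mem, buf_alloc / g_credit2mem,
	       (buf_alloc / g_worst) * (g_buf + pkt_overhead) >> 20);
	printf("host:  credit2mem=%lu credit_published=%lu B worst_mem=%lu MB\n",
	       h_credit2mem, buf_alloc / h_credit2mem,
	       (buf_alloc / h_worst) * (h_buf + pkt_overhead) >> 20);
	return 0;
}

Running it prints credit2mem = 32 and 129, roughly 8 KB and 2 KB of
published credit, and the ~8 MB / ~32 MB worst-case memory figures.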
Michael S. Tsirkin
2019-Oct-11 14:11 UTC
[PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
On Fri, Oct 11, 2019 at 03:40:48PM +0200, Stefano Garzarella wrote:
[...]
> Before implementing your idea, I tried to do some calculations and
> took a closer look at our credit mechanism:
>
>   buf_alloc = 256 KB (default, tunable through setsockopt)
>   sizeof(struct virtio_vsock_pkt) = 128
>
> - guest (we use preallocated 4 KB buffers to receive packets, copying
>   small packets - < 128 bytes)
>     worst_case = 129
>     buf_size = 4 KB
>     credit2mem = (buf_size + sizeof(struct virtio_vsock_pkt)) / worst_case = 32

Making virtio_vsock_pkt smaller will help btw.
E.g. why keep a work struct per packet? do we need the header there? etc
etc.

>     credit_published = buf_alloc / credit2mem = ~8 KB
>     Space for just 2 full packets (4 KB)
>
> - host (we copy packets from the vring, allocating the space for the payload)
>     worst_case = 1
>     buf_size = 1
>     credit2mem = (buf_size + sizeof(struct virtio_vsock_pkt)) / worst_case = 129
>
>     credit_published = buf_alloc / credit2mem = ~2 KB
>     Less than a full packet (the guest can now send up to 64 KB with a single
>     packet, so it will be limited to 2 KB)

1 byte in struct virtio_vsock_pkt is crazy. Can't we copy
like we do in the guest? E.g. allocate at least 128 bytes,
and then we can add data to the tail of the last packet?

> Current memory consumption in the worst case if the RX queue is full:
[...]
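A rough sketch of the copy-break approach Michael suggests for the host RX
path: allocate at least 128 bytes per packet and append small payloads to
the tail of the last queued packet. The types and helpers below (struct pkt,
rx_queue_pkt, COPY_BREAK) are illustrative assumptions, not the actual
vhost-vsock code:

#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

#define COPY_BREAK 128

struct pkt {
	struct pkt *next;
	size_t len;            /* bytes used */
	size_t buf_len;        /* bytes allocated */
	unsigned char *buf;
};

struct rx_queue {
	struct pkt *head;
	struct pkt *tail;
};

/* Returns true if a small payload was merged into the last queued packet
 * (fine for stream sockets, where packet boundaries don't matter). */
static bool rx_try_merge(struct rx_queue *q, const void *data, size_t len)
{
	struct pkt *last = q->tail;

	if (!last || len >= COPY_BREAK)
		return false;
	if (last->buf_len - last->len < len)
		return false;

	memcpy(last->buf + last->len, data, len);
	last->len += len;
	return true;
}

static struct pkt *rx_queue_pkt(struct rx_queue *q, const void *data, size_t len)
{
	struct pkt *p;

	if (rx_try_merge(q, data, len))
		return q->tail;

	p = calloc(1, sizeof(*p));
	if (!p)
		return NULL;

	/* Allocate at least COPY_BREAK bytes so later small payloads can be
	 * appended here, bounding the per-byte bookkeeping overhead. */
	p->buf_len = len > COPY_BREAK ? len : COPY_BREAK;
	p->buf = malloc(p->buf_len);
	if (!p->buf) {
		free(p);
		return NULL;
	}
	memcpy(p->buf, data, len);
	p->len = len;

	if (q->tail)
		q->tail->next = p;
	else
		q->head = p;
	q->tail = p;
	return p;
}

The effect is that, as in the guest, a stream of tiny payloads no longer
pins one allocation plus one struct virtio_vsock_pkt per byte of credit.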
Stefano Garzarella
2019-Oct-11 14:23 UTC
[PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
On Fri, Oct 11, 2019 at 10:11:19AM -0400, Michael S. Tsirkin wrote:
> On Fri, Oct 11, 2019 at 03:40:48PM +0200, Stefano Garzarella wrote:
[...]
> > - guest (we use preallocated 4 KB buffers to receive packets, copying
> >   small packets - < 128 bytes)
> >     worst_case = 129
> >     buf_size = 4 KB
> >     credit2mem = (buf_size + sizeof(struct virtio_vsock_pkt)) / worst_case = 32
>
> Making virtio_vsock_pkt smaller will help btw.
> E.g. why keep a work struct per packet? do we need the header there? etc
> etc.
>

Yes, I had the same feeling, I'll try to make it smaller!

> >     credit_published = buf_alloc / credit2mem = ~2 KB
> >     Less than a full packet (the guest can now send up to 64 KB with a single
> >     packet, so it will be limited to 2 KB)
>
> 1 byte in struct virtio_vsock_pkt is crazy. Can't we copy

I know :-)

> like we do in the guest? E.g. allocate at least 128 bytes,
> and then we can add data to the tail of the last packet?

Yes, I've been thinking about trying that out
(allocate at least 128 bytes and do as we do in the guest).

Thanks,
Stefano
Stefan Hajnoczi
2019-Oct-14 08:17 UTC
[PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
On Fri, Oct 11, 2019 at 03:40:48PM +0200, Stefano Garzarella wrote:
[...]
> One thing that's not clear to me yet is the meaning of
> SO_VM_SOCKETS_BUFFER_SIZE:
> - max amount of memory used in the RX queue
> - max amount of payload bytes in the RX queue (without the overhead of
>   struct virtio_vsock_pkt + the preallocated buffer)
>
> From 'include/uapi/linux/vm_sockets.h':
>
>   /* Option name for STREAM socket buffer size. Use as the option name in
>    * setsockopt(3) or getsockopt(3) to set or get an unsigned long long that
>    * specifies the size of the buffer underlying a vSockets STREAM socket.
>    * Value is clamped to the MIN and MAX.
>    */
>
>   #define SO_VM_SOCKETS_BUFFER_SIZE 0
>
> Regardless, I think we need to limit memory consumption in some way.
> I'll check the implementation of other transports, to understand it better.

SO_VM_SOCKETS_BUFFER_SIZE might have been useful for VMCI-specific
applications, but we should use SO_RCVBUF and SO_SNDBUF for portable
applications in the future. Those socket options also work with other
address families.

I guess these sockopts are bypassed by AF_VSOCK because it doesn't use
the common skb queuing code in net/core/sock.c :(. But one day we might
migrate to it...

Stefan
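For completeness, a minimal userspace sketch of the two knobs being compared
(error handling omitted; the AF_VSOCK option level for the vsock-specific
setting follows the current af_vsock setsockopt handling, the rest is an
assumption-free use of the standard SOL_SOCKET options):

#include <sys/socket.h>
#include <linux/vm_sockets.h>

static void set_vsock_buffer(int fd)
{
	/* vsock-specific: adjusts the per-socket credit (buf_alloc),
	 * clamped to SO_VM_SOCKETS_BUFFER_{MIN,MAX}_SIZE */
	unsigned long long buf_size = 256 * 1024;
	setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
		   &buf_size, sizeof(buf_size));

	/* portable route, but effectively bypassed by AF_VSOCK today */
	int rcvbuf = 256 * 1024;
	setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
}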
Jason Wang
2019-Oct-14 08:21 UTC
[PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
On 2019/10/14 4:17 PM, Stefan Hajnoczi wrote:
> SO_VM_SOCKETS_BUFFER_SIZE might have been useful for VMCI-specific
> applications, but we should use SO_RCVBUF and SO_SNDBUF for portable
> applications in the future. Those socket options also work with other
> address families.
>
> I guess these sockopts are bypassed by AF_VSOCK because it doesn't use
> the common skb queuing code in net/core/sock.c :(. But one day we might
> migrate to it...
>
> Stefan

+1, we should really consider reusing the existing socket mechanisms
instead of re-inventing wheels.

Thanks