On Sun, Apr 16, 2023 at 04:49:00AM +0000, Bobby Eshleman wrote:
>On Tue, May 02, 2023 at 04:14:18PM -0400, Stefan Hajnoczi wrote:
>> On Tue, May 02, 2023 at 10:44:04AM -0700, Cong Wang wrote:
>> > From: Cong Wang <cong.wang at bytedance.com>
>> >
>> > When virtqueue_add_sgs() fails, the skb is put back to send queue,
>> > we should not deliver the copy to tap device in this case. So we
>> > need to move virtio_transport_deliver_tap_pkt() down after all
>> > possible failures.
>> >
>> > Fixes: 82dfb540aeb2 ("VSOCK: Add virtio vsock vsockmon hooks")
>> > Cc: Stefan Hajnoczi <stefanha at redhat.com>
>> > Cc: Stefano Garzarella <sgarzare at redhat.com>
>> > Cc: Bobby Eshleman <bobby.eshleman at bytedance.com>
>> > Signed-off-by: Cong Wang <cong.wang at bytedance.com>
>> > ---
>> > net/vmw_vsock/virtio_transport.c | 5 ++---
>> > 1 file changed, 2 insertions(+), 3 deletions(-)
>> >
>> > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>> > index e95df847176b..055678628c07 100644
>> > --- a/net/vmw_vsock/virtio_transport.c
>> > +++ b/net/vmw_vsock/virtio_transport.c
>> > @@ -109,9 +109,6 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>> > if (!skb)
>> > break;
>> >
>> > - virtio_transport_deliver_tap_pkt(skb);
>> > - reply = virtio_vsock_skb_reply(skb);
>> > -
>> > sg_init_one(&hdr, virtio_vsock_hdr(skb), sizeof(*virtio_vsock_hdr(skb)));
>> > sgs[out_sg++] = &hdr;
>> > if (skb->len > 0) {
>> > @@ -128,6 +125,8 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>> > break;
>> > }
>> >
>> > + virtio_transport_deliver_tap_pkt(skb);
I would move only virtio_transport_deliver_tap_pkt();
virtio_vsock_skb_reply() is not related to this fix.
>> > + reply = virtio_vsock_skb_reply(skb);
>>
>> I don't remember the reason for the ordering, but I'm pretty sure it was
>> deliberate. Probably because the payload buffers could be freed as soon
>> as virtqueue_add_sgs() is called.
>>
>> If that's no longer true with Bobby's skbuff code, then maybe it's safe
>> to monitor packets after they have been sent.
>>
>> Stefan
>
>Hey Stefan,
>
>Unfortunately, skbuff doesn't change that behavior.
>
>If I understand correctly, the problem flow you are describing
>would be something like this:
>
>Thread 0 Thread 1
>guest:virtqueue_add_sgs()[@send_pkt_work]
>
> host:vhost_vq_get_desc()[@handle_tx_kick]
> host:vhost_add_used()
> host:vhost_signal()
> guest:virtqueue_get_buf()[@tx_work]
> guest:consume_skb()
>
>guest:deliver_tap_pkt()[@send_pkt_work]
>^ use-after-free
>
>Which I guess is possible because the receiver can consume the new
>scatterlist during the processing kicked off by a previous batch?
>(It doesn't have to wait for the subsequent kick.)
This is true, but both `send_pkt_work` and `tx_work` hold `tx_lock`, so
can they really go in parallel?
Thanks,
Stefano
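
[Editor's illustration] The reordering the patch proposes can be sketched in
userspace with stand-in functions. Everything below (`fake_skb`,
`fake_add_sgs()`, `fake_deliver_tap()`, `send_one()`) is hypothetical and only
mimics the control flow of virtio_transport_send_pkt_work(); it is not the
real kernel API. The point it shows: when the queue-add step fails and the skb
is requeued, no copy is mirrored to the tap device, because delivery now
happens only after a successful add.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the skb being sent. */
struct fake_skb {
	int len;
	bool freed;
};

static bool queue_full;           /* simulates virtqueue_add_sgs() failing */
static int tap_copies_delivered;  /* counts copies mirrored to the "tap" */

/* Stand-in for virtqueue_add_sgs(): fails when the queue is "full",
 * in which case the caller puts the skb back on the send queue. */
static int fake_add_sgs(struct fake_skb *skb)
{
	(void)skb;
	return queue_full ? -1 : 0;
}

/* Stand-in for virtio_transport_deliver_tap_pkt(). */
static void fake_deliver_tap(struct fake_skb *skb)
{
	(void)skb;
	tap_copies_delivered++;
}

/* Patched ordering: mirror to the tap only after the add succeeds,
 * so a requeued skb is never delivered twice to the monitor. */
static int send_one(struct fake_skb *skb)
{
	if (fake_add_sgs(skb) < 0)
		return -1;	/* requeued; no tap copy for this attempt */
	fake_deliver_tap(skb);
	return 0;
}
```

Note that this sketch only captures the duplicate-delivery rationale; it does
not model the lifetime question raised in the thread (whether the buffers may
already be consumed by the host once the add succeeds), which is exactly what
the `tx_lock` discussion above is about.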