On 2021/2/2 6:17, Eugenio Perez Martin wrote:
> On Tue, Feb 2, 2021 at 4:31 AM Jason Wang <jasowang at redhat.com> wrote:
>>
>> On 2021/2/1 4:28, Eugenio Perez Martin wrote:
>>> On Mon, Feb 1, 2021 at 7:13 AM Jason Wang <jasowang at redhat.com> wrote:
>>>> On 2021/1/30 4:54, Eugenio Pérez wrote:
>>>>> Signed-off-by: Eugenio Pérez <eperezma at redhat.com>
>>>>> ---
>>>>>   include/hw/virtio/vhost.h |  1 +
>>>>>   hw/virtio/vhost.c         | 17 +++++++++++++++++
>>>>>   2 files changed, 18 insertions(+)
>>>>>
>>>>> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
>>>>> index 4a8bc75415..fca076e3f0 100644
>>>>> --- a/include/hw/virtio/vhost.h
>>>>> +++ b/include/hw/virtio/vhost.h
>>>>> @@ -123,6 +123,7 @@ uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
>>>>> void vhost_ack_features(struct vhost_dev *hdev, const int *feature_bits,
>>>>>                         uint64_t features);
>>>>> bool vhost_has_free_slot(void);
>>>>> +struct vhost_dev *vhost_dev_from_virtio(const VirtIODevice *vdev);
>>>>>
>>>>> int vhost_net_set_backend(struct vhost_dev *hdev,
>>>>>                           struct vhost_vring_file *file);
>>>>> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
>>>>> index 28c7d78172..8683d507f5 100644
>>>>> --- a/hw/virtio/vhost.c
>>>>> +++ b/hw/virtio/vhost.c
>>>>> @@ -61,6 +61,23 @@ bool vhost_has_free_slot(void)
>>>>>     return slots_limit > used_memslots;
>>>>> }
>>>>>
>>>>> +/*
>>>>> + * Get the vhost device associated to a VirtIO device.
>>>>> + */
>>>>> +struct vhost_dev *vhost_dev_from_virtio(const VirtIODevice *vdev)
>>>>> +{
>>>>> +    struct vhost_dev *hdev;
>>>>> +
>>>>> +    QLIST_FOREACH(hdev, &vhost_devices, entry) {
>>>>> +        if (hdev->vdev == vdev) {
>>>>> +            return hdev;
>>>>> +        }
>>>>> +    }
>>>>> +
>>>>> +    assert(hdev);
>>>>> +    return NULL;
>>>>> +}
>>>> I'm not sure this can work in the case of multiqueue. E.g. vhost-net
>>>> multiqueue is an N:1 mapping between vhost devices and virtio devices.
>>>>
>>>> Thanks
>>>>
>>> Right. We could add a "vdev vq index" parameter to the function in
>>> this case, but I guess the most reliable way to do this is to add a
>>> vhost_opaque value to VirtQueue, as Stefan proposed in the previous RFC.
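A minimal sketch of the "vq index" variant mentioned above, assuming it lives in hw/virtio/vhost.c next to the current lookup and that the existing vq_index/nvqs fields of struct vhost_dev describe the range of virtqueues each vhost device backs; this is only an illustration, not the proposed patch:

struct vhost_dev *vhost_dev_from_virtio(const VirtIODevice *vdev, int vq_idx)
{
    struct vhost_dev *hdev;

    QLIST_FOREACH(hdev, &vhost_devices, entry) {
        /* With multiqueue vhost-net, several vhost devices back one
         * VirtIODevice; each covers [vq_index, vq_index + nvqs). */
        if (hdev->vdev == vdev && vq_idx >= hdev->vq_index &&
            vq_idx < hdev->vq_index + hdev->nvqs) {
            return hdev;
        }
    }

    return NULL;
}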
>>
>> So the question still stands: it looks like it's easier to hide the shadow
>> virtqueue stuff at the vhost layer instead of exposing it to the virtio layer:
>>
>> 1) the vhost protocol is a stable ABI
>> 2) there is no need to deal with virtio stuff, which is more complex than vhost
>>
>> Or are there any advantages if we do it at the virtio layer?
>>
> As far as I can tell, we will need the virtio layer the moment we
> start copying/translating buffers.
>
> In this series, the virtio dependency can be reduced if qemu does not
> check the used ring _F_NO_NOTIFY flag before writing to irqfd. It
> would enable packed queues and IOMMU immediately, and I think the cost
> should not be so high. In the previous RFC this check was deleted
> later anyway, so I think it was a bad idea to include it from the start.
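For reference, a rough sketch of the kind of check being discussed; svq_used_flags() is a hypothetical accessor for the guest-visible used ring flags, and VRING_USED_F_NO_NOTIFY is the standard constant from standard-headers/linux/virtio_ring.h. Reading that flag is what pulls the vring/virtio layer into this path; dropping the check trades that dependency for possible spurious writes to the event fd:

#include "qemu/osdep.h"
#include "qemu/event_notifier.h"
#include "hw/virtio/virtio.h"
#include "standard-headers/linux/virtio_ring.h"

/* Hypothetical accessor: flags field of the guest-visible used ring. */
uint16_t svq_used_flags(const VirtQueue *vq);

/* Forward a notification (e.g. write an irqfd) only when the ring has
 * not suppressed it. Names here are illustrative. */
static void svq_forward_notification(VirtQueue *vq, EventNotifier *irqfd)
{
    if (!(svq_used_flags(vq) & VRING_USED_F_NO_NOTIFY)) {
        event_notifier_set(irqfd);
    }
}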
I am not sure I understand here. For vhost, we can still do anything we
want, e.g. accessing guest memory. Is there any blocker that prevents us from
copying/translating buffers? (Note that QEMU will propagate memory
mappings to vhost.)
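To make the memory-mapping point concrete: the table QEMU already hands to the vhost backend is enough to turn a guest physical address into a QEMU virtual address, roughly as below. vhost_gpa_to_hva() is not an existing helper, just a sketch over the hdev->mem table:

#include "qemu/osdep.h"
#include "hw/virtio/vhost.h"
#include <linux/vhost.h>

/*
 * Sketch: translate a guest physical address through the memory table
 * that QEMU already propagates to vhost (hdev->mem). Returns NULL when
 * no region covers the address.
 */
static void *vhost_gpa_to_hva(const struct vhost_dev *hdev, uint64_t gpa)
{
    for (uint32_t i = 0; i < hdev->mem->nregions; i++) {
        const struct vhost_memory_region *reg = &hdev->mem->regions[i];

        if (gpa >= reg->guest_phys_addr &&
            gpa - reg->guest_phys_addr < reg->memory_size) {
            return (void *)(uintptr_t)(reg->userspace_addr +
                                       (gpa - reg->guest_phys_addr));
        }
    }

    return NULL;
}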
Thanks
>
>> Thanks
>>
>>
>>> I need to take this into account in qmp_x_vhost_enable_shadow_vq too.
>>>
>>>>> +
>>>>>  static void vhost_dev_sync_region(struct vhost_dev *dev,
>>>>>                                    MemoryRegionSection *section,
>>>>>                                    uint64_t mfirst, uint64_t mlast,
>