Displaying 20 results from an estimated 24 matches for "set_mem_table".
2019 Jul 03
2
[RFC v2] vhost: introduce mdev based hardware vhost backend
...device interrupts.
> > IRQ-bypass can also be supported.
> >
> > Currently, the data path interrupt can be configured via the
> > VFIO_VHOST_VQ_IRQ_INDEX with virtqueue's callfd.
>
>
> How about the DMA API? Do you expect to use the VFIO IOMMU API or vhost
> SET_MEM_TABLE? The VFIO IOMMU API is more generic for sure, but with
> SET_MEM_TABLE, DMA can be done at the level of the parent device, which means it
> can work for, e.g., a card with an on-chip IOMMU.
Agreed. In this RFC, it is assumed that userspace will use the VFIO IOMMU API
to do the DMA programming. But as you said, t...
2019 Jul 03
0
[RFC v2] vhost: introduce mdev based hardware vhost backend
...s.
>>> IRQ-bypass can also be supported.
>>>
>>> Currently, the data path interrupt can be configured via the
>>> VFIO_VHOST_VQ_IRQ_INDEX with virtqueue's callfd.
>>
>> How about the DMA API? Do you expect to use the VFIO IOMMU API or vhost
>> SET_MEM_TABLE? The VFIO IOMMU API is more generic for sure, but with
>> SET_MEM_TABLE, DMA can be done at the level of the parent device, which means it
>> can work for, e.g., a card with an on-chip IOMMU.
> Agreed. In this RFC, it is assumed that userspace will use the VFIO IOMMU API
> to do the DMA...
2018 Dec 25
2
[PATCH net V2 4/4] vhost: log dirty page correctly
...century we are trying a defence-in-depth approach.
> >
> > My point is that a single code path that is responsible for
> > the HVA translations is better than two.
> >
>
> So the difference is whether or not to use the memory table information:
>
> Current:
>
> 1) SET_MEM_TABLE: GPA->HVA
>
> 2) Qemu GIOVA->GPA
>
> 3) Qemu GPA->HVA
>
> 4) IOTLB_UPDATE: GIOVA->HVA
>
> If I understand correctly, you want to drop step 3, considering it might be buggy,
> which is just 19 lines of code in qemu (vhost_memory_region_lookup()). This
> will e...
2018 Dec 26
1
[PATCH net V2 4/4] vhost: log dirty page correctly
...> My point is that a single code path that is responsible for
> > > > the HVA translations is better than two.
> > > >
> > > So the difference is whether or not to use the memory table information:
> > >
> > > Current:
> > >
> > > 1) SET_MEM_TABLE: GPA->HVA
> > >
> > > 2) Qemu GIOVA->GPA
> > >
> > > 3) Qemu GPA->HVA
> > >
> > > 4) IOTLB_UPDATE: GIOVA->HVA
> > >
> > > If I understand correctly, you want to drop step 3, considering it might be buggy
> > > ...
2018 Dec 26
0
[PATCH net V2 4/4] vhost: log dirty page correctly
...defence-in-depth approach.
>>>
>>> My point is that a single code path that is responsible for
>>> the HVA translations is better than two.
>>>
>> So the difference is whether or not to use the memory table information:
>>
>> Current:
>>
>> 1) SET_MEM_TABLE: GPA->HVA
>>
>> 2) Qemu GIOVA->GPA
>>
>> 3) Qemu GPA->HVA
>>
>> 4) IOTLB_UPDATE: GIOVA->HVA
>>
>> If I understand correctly, you want to drop step 3, considering it might be buggy,
>> which is just 19 lines of code in qemu (vhost_memory_re...
2019 Jul 03
2
[RFC v2] vhost: introduce mdev based hardware vhost backend
...pported.
> > > >
> > > > Currently, the data path interrupt can be configured via the
> > > > VFIO_VHOST_VQ_IRQ_INDEX with virtqueue's callfd.
> > >
> > > How about the DMA API? Do you expect to use the VFIO IOMMU API or vhost
> > > SET_MEM_TABLE? The VFIO IOMMU API is more generic for sure, but with
> > > SET_MEM_TABLE, DMA can be done at the level of the parent device, which means it
> > > can work for, e.g., a card with an on-chip IOMMU.
> > Agreed. In this RFC, it is assumed that userspace will use the VFIO IOMMU API
> > to do the DMA...
2018 Dec 24
2
[PATCH net V2 4/4] vhost: log dirty page correctly
On Mon, Dec 24, 2018 at 11:43:31AM +0800, Jason Wang wrote:
>
> On 2018/12/14 9:20, Michael S. Tsirkin wrote:
> > On Fri, Dec 14, 2018 at 10:43:03AM +0800, Jason Wang wrote:
> > > On 2018/12/13 10:31, Michael S. Tsirkin wrote:
> > > > > Just to make sure I understand this. It looks to me we should:
> > > > >
> > > > > - allow
2018 Dec 25
0
[PATCH net V2 4/4] vhost: log dirty page correctly
...n not to work in the 20th century.
> In the 21st century we are trying a defence-in-depth approach.
>
> My point is that a single code path that is responsible for
> the HVA translations is better than two.
>
So the difference is whether or not to use the memory table information:
Current:
1) SET_MEM_TABLE: GPA->HVA
2) Qemu GIOVA->GPA
3) Qemu GPA->HVA
4) IOTLB_UPDATE: GIOVA->HVA
If I understand correctly, you want to drop step 3, considering it might be
buggy, which is just 19 lines of code in qemu
(vhost_memory_region_lookup()). This will end up with:
1) Do GPA->HVA translation in IOTLB_...
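The two translation paths discussed in the entry above can be modeled with a small sketch. This is hypothetical Python for illustration only, not the actual QEMU/vhost code: `make_mem_table`, `gpa_to_hva`, and `iotlb_update` are made-up names standing in for the SET_MEM_TABLE region table, the ~19-line lookup (cf. `vhost_memory_region_lookup()`), and the GIOVA->GPA->HVA composition that feeds IOTLB_UPDATE.

```python
# Hypothetical model of the translation chain; names and values are
# illustrative, not the real QEMU/vhost identifiers.

def make_mem_table(regions):
    """SET_MEM_TABLE payload: list of (gpa_start, size, hva_start) regions."""
    return list(regions)

def gpa_to_hva(mem_table, gpa):
    """Step 3: GPA -> HVA via the memory table (the disputed lookup)."""
    for gpa_start, size, hva_start in mem_table:
        if gpa_start <= gpa < gpa_start + size:
            return hva_start + (gpa - gpa_start)
    raise LookupError("GPA not covered by the memory table")

def iotlb_update(mem_table, giova_to_gpa, giova):
    """Steps 2-4: GIOVA -> GPA (vIOMMU mapping), then GPA -> HVA."""
    gpa = giova_to_gpa[giova]          # step 2: QEMU's GIOVA->GPA mapping
    return gpa_to_hva(mem_table, gpa)  # step 3: reuse the table lookup

# One 4 KiB page of guest RAM at GPA 0x1000, backed by HVA 0x7f0000001000.
table = make_mem_table([(0x1000, 0x1000, 0x7F0000001000)])
assert gpa_to_hva(table, 0x1800) == 0x7F0000001800
assert iotlb_update(table, {0x400000: 0x1000}, 0x400000) == 0x7F0000001000
```

Dropping step 3 would mean `iotlb_update` receives a precomputed GIOVA->HVA mapping instead of composing the two lookups, which is the single-code-path argument made above.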
2019 Jul 04
2
[RFC v2] vhost: introduce mdev based hardware vhost backend
On Thu, Jul 04, 2019 at 02:35:20PM +0800, Jason Wang wrote:
> On 2019/7/4 2:21, Tiwei Bie wrote:
> > On Thu, Jul 04, 2019 at 12:31:48PM +0800, Jason Wang wrote:
> > > On 2019/7/3 9:08, Tiwei Bie wrote:
> > > > On Wed, Jul 03, 2019 at 08:16:23PM +0800, Jason Wang wrote:
> > > > > On 2019/7/3 7:52, Tiwei Bie wrote:
> > > > > > On
2019 Jul 05
0
[RFC v2] vhost: introduce mdev based hardware vhost backend
...CE_FD in VFIO). And
> to setup the device, we can try to reuse the ioctls of the existing
> kernel vhost as much as possible.
Interesting; actually, I've considered something similar. I think there
should be no issues other than DMA:
- Need to invent a new API for DMA mapping other than SET_MEM_TABLE?
(Which is too heavyweight.)
- Need to consider a way to work with both an on-chip IOMMU (your
proposal should be fine) and Scalable IOV.
Thanks
>
> Thanks,
> Tiwei
>
>> Thanks
>>
>>
>>> Thanks,
>>> Tiwei
>>>> Thanks
>>>>
2019 Jul 03
0
[RFC v2] vhost: introduce mdev based hardware vhost backend
...IO interrupt ioctl API is used to setup device interrupts.
> IRQ-bypass can also be supported.
>
> Currently, the data path interrupt can be configured via the
> VFIO_VHOST_VQ_IRQ_INDEX with virtqueue's callfd.
How about the DMA API? Do you expect to use the VFIO IOMMU API or vhost
SET_MEM_TABLE? The VFIO IOMMU API is more generic for sure, but with
SET_MEM_TABLE, DMA can be done at the level of the parent device, which means
it can work for, e.g., a card with an on-chip IOMMU.
And what's the plan for vIOMMU?
>
> Signed-off-by: Tiwei Bie <tiwei.bie at intel.com>
> ---
> drive...
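For reference, the SET_MEM_TABLE side of the trade-off raised above uses the vhost UAPI message layout sketched below. The sketch packs a `struct vhost_memory` payload the way userspace would before issuing the `VHOST_SET_MEM_TABLE` ioctl; the field layout follows `include/uapi/linux/vhost.h`, while the region values are made up for illustration.

```python
import struct

# struct vhost_memory_region: guest_phys_addr, memory_size,
# userspace_addr, flags_padding -- four __u64 fields (32 bytes).
REGION_FMT = "<QQQQ"
# struct vhost_memory header: nregions (__u32) + padding (__u32).
HEADER_FMT = "<II"

def pack_vhost_memory(regions):
    """Build a VHOST_SET_MEM_TABLE payload from (gpa, size, hva) tuples."""
    buf = struct.pack(HEADER_FMT, len(regions), 0)
    for gpa, size, hva in regions:
        buf += struct.pack(REGION_FMT, gpa, size, hva, 0)
    return buf

# Example: one 1 GiB region of guest RAM (illustrative addresses).
msg = pack_vhost_memory([(0x0, 1 << 30, 0x7F0000000000)])
assert len(msg) == 8 + 32  # header + one 32-byte region
```

This is what makes SET_MEM_TABLE coarse-grained: the whole guest memory map is pushed at once, in contrast to the per-range VFIO_IOMMU_MAP_DMA calls of the VFIO IOMMU API.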
2019 Jul 03
4
[RFC v2] vhost: introduce mdev based hardware vhost backend
Details about this can be found here:
https://lwn.net/Articles/750770/
What's new in this version
==========================
A new VFIO device type is introduced - vfio-vhost. This addresses
some comments from here: https://patchwork.ozlabs.org/cover/984763/
Below is the updated device interface:
Currently, there are two regions of this device: 1) CONFIG_REGION
2019 Oct 22
0
[PATCH v2] vhost: introduce mdev based hardware backend
..._QUEUE_NUM:
> + r = vhost_mdev_get_queue_num(m, argp);
> + break;
It's not clear to me how this API will be used by userspace. I
think, e.g., features without MQ imply the queue num here.
> + default:
> + r = vhost_dev_ioctl(&m->dev, cmd, argp);
I believe having SET_MEM_TABLE/SET_LOG_BASE/SET_LOG_FD? is for future
support of those features. If that's true, we need to add some comments on this.
> + if (r == -ENOIOCTLCMD)
> + r = vhost_mdev_vring_ioctl(m, cmd, argp);
> + }
> +
> + mutex_unlock(&m->mutex);
> + return r;
> +}
> +
> +static...
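The `default:` branch reviewed above implements a fallback chain: try the generic vhost handler first, then hand the command to the vring handler when the generic one returns `-ENOIOCTLCMD`. A toy model of that dispatch (hypothetical Python mirroring the kernel pattern, with stub command sets rather than real ioctl numbers):

```python
ENOIOCTLCMD = object()  # stands in for the kernel's -ENOIOCTLCMD

def vhost_dev_ioctl(cmd):
    """Generic vhost commands (e.g. SET_MEM_TABLE); others fall through."""
    return 0 if cmd in {"SET_MEM_TABLE", "SET_LOG_BASE"} else ENOIOCTLCMD

def vhost_vring_ioctl(cmd):
    """Per-virtqueue commands; unknown commands fail."""
    return 0 if cmd in {"SET_VRING_NUM", "SET_VRING_ADDR"} else -1

def vhost_mdev_ioctl(cmd):
    """Model of the reviewed default: try generic, then vring on fallthrough."""
    r = vhost_dev_ioctl(cmd)
    if r is ENOIOCTLCMD:
        r = vhost_vring_ioctl(cmd)
    return r

assert vhost_mdev_ioctl("SET_MEM_TABLE") == 0   # handled by generic path
assert vhost_mdev_ioctl("SET_VRING_NUM") == 0   # falls through to vring path
assert vhost_mdev_ioctl("BOGUS") == -1          # rejected by both
```

The review comment is about which generic commands (SET_MEM_TABLE, SET_LOG_BASE, SET_LOG_FD) are intentionally reachable through the first hop of this chain.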
2019 Oct 23
2
[PATCH v2] vhost: introduce mdev based hardware backend
...eturn
the supported number of queues. For virtio devices other
than virtio-net, can we always expect to have a fixed
default number of queues when there is no MQ feature?
>
>
> > + default:
> > + r = vhost_dev_ioctl(&m->dev, cmd, argp);
>
>
> I believe having SET_MEM_TABLE/SET_LOG_BASE/SET_LOG_FD? is for future
> support of those features. If that's true, we need to add some comments on this.
OK.
>
>
> > + if (r == -ENOIOCTLCMD)
> > + r = vhost_mdev_vring_ioctl(m, cmd, argp);
> > + }
> > +
> > + mutex_unlock(&m->mutex);...