Michael S. Tsirkin
2018-Apr-19 18:40 UTC
[RFC] vhost: introduce mdev based hardware vhost backend
On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > One problem is that, different virtio ring compatible devices
> > > > may have different device interfaces. That is to say, we will
> > > > need different drivers in QEMU. It could be troublesome. And
> > > > that's what this patch is trying to fix. The idea behind this
> > > > patch is very simple: mdev is a standard way to emulate devices
> > > > in the kernel.
> > > So you just move the abstraction layer from qemu to kernel, and you still
> > > need different drivers in kernel for different device interfaces of
> > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > said, another idea is to implement a userspace vhost backend for accelerators
> > > which seems easier and could co-work with other parts of qemu without
> > > inventing new types of messages.
> > I'm not quite sure. Do you think it's acceptable to
> > add various vendor specific hardware drivers in QEMU?
> >
>
> I don't object but we need to figure out the advantages of doing it in qemu
> too.
>
> Thanks

To be frank, the kernel is exactly where device drivers belong. DPDK did
move them to userspace, but that's merely a requirement for the data path.
*If* you can have them in kernel, that is best:
- update the kernel and there's no need to rebuild userspace
- apps can be written in any language, no need to maintain multiple
  libraries or add wrappers
- security concerns are much smaller (ok, people are trying to
  raise the bar with IOMMUs and such, but it's already pretty
  good even without)

The biggest issue is that you let userspace poke at the
device, which is also allowed by the IOMMU to poke at
kernel memory (needed for the kernel driver to work).

Yes, maybe if the device is not buggy it's all fine, but
it's better if we do not have to trust the device;
otherwise the security picture becomes more murky.

I suggested attaching a PASID to (some) queues - see my old post "using
PASIDs to enable a safe variant of direct ring access".

Then using the IOMMU with VFIO to limit access through the queue to
correct ranges of memory.

-- 
MST
Tiwei Bie
2018-Apr-20 03:28 UTC
[RFC] vhost: introduce mdev based hardware vhost backend
On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > One problem is that, different virtio ring compatible devices
> > > > > may have different device interfaces. That is to say, we will
> > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > that's what this patch is trying to fix. The idea behind this
> > > > > patch is very simple: mdev is a standard way to emulate devices
> > > > > in the kernel.
> > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > need different drivers in kernel for different device interfaces of
> > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > said, another idea is to implement a userspace vhost backend for accelerators
> > > > which seems easier and could co-work with other parts of qemu without
> > > > inventing new types of messages.
> > > I'm not quite sure. Do you think it's acceptable to
> > > add various vendor specific hardware drivers in QEMU?
> > >
> >
> > I don't object but we need to figure out the advantages of doing it in qemu
> > too.
> >
> > Thanks
>
> To be frank, the kernel is exactly where device drivers belong. DPDK did
> move them to userspace, but that's merely a requirement for the data path.
> *If* you can have them in kernel, that is best:
> - update the kernel and there's no need to rebuild userspace
> - apps can be written in any language, no need to maintain multiple
>   libraries or add wrappers
> - security concerns are much smaller (ok, people are trying to
>   raise the bar with IOMMUs and such, but it's already pretty
>   good even without)
>
> The biggest issue is that you let userspace poke at the
> device, which is also allowed by the IOMMU to poke at
> kernel memory (needed for the kernel driver to work).

I think the device won't and shouldn't be allowed to
poke at kernel memory. Its kernel driver needs some
kernel memory to work, but the device doesn't have
access to it. Instead, the device only has access to:

(1) the entire memory of the VM (if vIOMMU isn't used), or
(2) the memory belonging to the guest virtio device (if
    vIOMMU is being used).

Below is the reason:

For the first case, we should program the IOMMU for
the hardware device based on the info in the memory
table, which covers the entire memory of the VM.

For the second case, we should program the IOMMU for
the hardware device based on the info in the shadow
page table of the vIOMMU.

So the memory that can be accessed by the device is
limited; it should be safe, especially in the second case.

My concern is that, in this RFC, we don't program the
IOMMU for the mdev device in userspace via the VFIO
API directly. Instead, we pass the memory table to the
kernel driver via the mdev device (BAR0) and ask the
driver to do the IOMMU programming. Some people may not
like it. The main reason why we don't program the IOMMU
via the VFIO API in userspace directly is that, currently,
IOMMU drivers don't support the mdev bus.

>
> Yes, maybe if the device is not buggy it's all fine, but
> it's better if we do not have to trust the device;
> otherwise the security picture becomes more murky.
>
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".

It's pretty cool. We also have some similar ideas.
Cunming will talk more about this.

Best regards,
Tiwei Bie

>
> Then using the IOMMU with VFIO to limit access through the queue to
> correct ranges of memory.
>
>
> -- 
> MST
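The memory table mentioned above has the same shape as the existing vhost
UAPI (struct vhost_memory in <linux/vhost.h>). Below is a minimal sketch in
C of building a one-region table covering all VM memory; whether the
vhost-mdev backend reuses this exact layout over BAR0 is an assumption
here, and the helper name is illustrative.

/* Sketch only: uses the struct layout from the existing vhost UAPI.
 * Builds a one-region table covering the whole VM memory (case (1)
 * above); a real VM would have one region per RAM block. */
#include <stdint.h>
#include <stdlib.h>
#include <linux/vhost.h>

static struct vhost_memory *build_mem_table(uint64_t guest_phys_addr,
                                            uint64_t memory_size,
                                            uint64_t userspace_addr)
{
    struct vhost_memory *mem;

    mem = calloc(1, sizeof(*mem) + sizeof(struct vhost_memory_region));
    if (!mem)
        return NULL;

    mem->nregions = 1;
    mem->regions[0].guest_phys_addr = guest_phys_addr; /* GPA the device sees as IOVA */
    mem->regions[0].memory_size     = memory_size;     /* region size in bytes */
    mem->regions[0].userspace_addr  = userspace_addr;  /* HVA backing this GPA range */
    return mem;
}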
Michael S. Tsirkin
2018-Apr-20 03:50 UTC
[RFC] vhost: introduce mdev based hardware vhost backend
On Fri, Apr 20, 2018 at 11:28:07AM +0800, Tiwei Bie wrote:
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch is trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate devices
> > > > > > in the kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and you still
> > > > > need different drivers in kernel for different device interfaces of
> > > > > accelerators. This looks even more complex than leaving it in qemu. As you
> > > > > said, another idea is to implement a userspace vhost backend for accelerators
> > > > > which seems easier and could co-work with other parts of qemu without
> > > > > inventing new types of messages.
> > > > I'm not quite sure. Do you think it's acceptable to
> > > > add various vendor specific hardware drivers in QEMU?
> > > >
> > >
> > > I don't object but we need to figure out the advantages of doing it in qemu
> > > too.
> > >
> > > Thanks
> >
> > To be frank, the kernel is exactly where device drivers belong. DPDK did
> > move them to userspace, but that's merely a requirement for the data path.
> > *If* you can have them in kernel, that is best:
> > - update the kernel and there's no need to rebuild userspace
> > - apps can be written in any language, no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok, people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> >
> > The biggest issue is that you let userspace poke at the
> > device, which is also allowed by the IOMMU to poke at
> > kernel memory (needed for the kernel driver to work).
>
> I think the device won't and shouldn't be allowed to
> poke at kernel memory. Its kernel driver needs some
> kernel memory to work, but the device doesn't have
> access to it. Instead, the device only has access to:
>
> (1) the entire memory of the VM (if vIOMMU isn't used), or
> (2) the memory belonging to the guest virtio device (if
>     vIOMMU is being used).
>
> Below is the reason:
>
> For the first case, we should program the IOMMU for
> the hardware device based on the info in the memory
> table, which covers the entire memory of the VM.
>
> For the second case, we should program the IOMMU for
> the hardware device based on the info in the shadow
> page table of the vIOMMU.
>
> So the memory that can be accessed by the device is
> limited; it should be safe, especially in the second case.
>
> My concern is that, in this RFC, we don't program the
> IOMMU for the mdev device in userspace via the VFIO
> API directly. Instead, we pass the memory table to the
> kernel driver via the mdev device (BAR0) and ask the
> driver to do the IOMMU programming. Some people may not
> like it. The main reason why we don't program the IOMMU
> via the VFIO API in userspace directly is that, currently,
> IOMMU drivers don't support the mdev bus.

But it is a pci device after all, isn't it?
IOMMU drivers certainly support that ...

Another issue with this approach is that internal
kernel issues leak out to the interface.

>
> > Yes, maybe if the device is not buggy it's all fine, but
> > it's better if we do not have to trust the device;
> > otherwise the security picture becomes more murky.
> >
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
>
> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
>
> Best regards,
> Tiwei Bie

An extra benefit to this could be that requests with a PASID undergo an
extra level of translation. We could use it to avoid the need for
shadowing on Intel. Something like this:

- expose to the guest a standard virtio device (no PASID support)
- back it by a virtio device with PASID support on the host,
  attaching the same PASID to all queues

Now the guest will build one level of page tables.

We build first level page tables for requests with a PASID,
and point the IOMMU to use the guest supplied page tables
for the second level of translation.

Now we do need to forward invalidations, but we no longer
need to set the CM bit and shadow valid entries.

>
> > Then using the IOMMU with VFIO to limit access through the queue to
> > correct ranges of memory.
> >
> >
> > -- 
> > MST
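For comparison, the "program the IOMMU via the VFIO API in userspace" path
under debate looks roughly like the following sketch for a regular
(non-mdev) VFIO device; the group number and mapping parameters are
placeholders, and error handling is elided.

/* Sketch of VFIO type1 IOMMU programming from userspace:
 * attach the device's group to a container, select the type1
 * backend, then map one range of guest memory for device DMA. */
#include <stdint.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int map_guest_memory(uint64_t vaddr, uint64_t iova, uint64_t size)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* group number is a placeholder */

    /* The IOMMU backend can only be set once a group is attached. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = vaddr,   /* process VA backing the guest memory */
        .iova  = iova,    /* GPA used as IOVA by the device */
        .size  = size,
    };
    return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}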
Liang, Cunming
2018-Apr-20 03:50 UTC
[RFC] vhost: introduce mdev based hardware vhost backend
> -----Original Message-----
> From: Bie, Tiwei
> Sent: Friday, April 20, 2018 11:28 AM
> To: Michael S. Tsirkin <mst at redhat.com>
> Cc: Jason Wang <jasowang at redhat.com>; alex.williamson at redhat.com;
> ddutile at redhat.com; Duyck, Alexander H <alexander.h.duyck at intel.com>;
> virtio-dev at lists.oasis-open.org; linux-kernel at vger.kernel.org;
> kvm at vger.kernel.org; virtualization at lists.linux-foundation.org;
> netdev at vger.kernel.org; Daly, Dan <dan.daly at intel.com>; Liang, Cunming
> <cunming.liang at intel.com>; Wang, Zhihong <zhihong.wang at intel.com>; Tan,
> Jianfeng <jianfeng.tan at intel.com>; Wang, Xiao W <xiao.w.wang at intel.com>;
> Tian, Kevin <kevin.tian at intel.com>
> Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend
>
> On Thu, Apr 19, 2018 at 09:40:23PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
> > > > > > One problem is that, different virtio ring compatible devices
> > > > > > may have different device interfaces. That is to say, we will
> > > > > > need different drivers in QEMU. It could be troublesome. And
> > > > > > that's what this patch is trying to fix. The idea behind this
> > > > > > patch is very simple: mdev is a standard way to emulate devices
> > > > > > in the kernel.
> > > > > So you just move the abstraction layer from qemu to kernel, and
> > > > > you still need different drivers in kernel for different device
> > > > > interfaces of accelerators. This looks even more complex than
> > > > > leaving it in qemu. As you said, another idea is to implement a
> > > > > userspace vhost backend for accelerators which seems easier and
> > > > > could co-work with other parts of qemu without inventing new types of
> > > > > messages.
> > > > I'm not quite sure. Do you think it's acceptable to add various
> > > > vendor specific hardware drivers in QEMU?
> > > >
> > >
> > > I don't object but we need to figure out the advantages of doing it
> > > in qemu too.
> > >
> > > Thanks
> >
> > To be frank, the kernel is exactly where device drivers belong. DPDK did
> > move them to userspace, but that's merely a requirement for the data path.
> > *If* you can have them in kernel, that is best:
> > - update the kernel and there's no need to rebuild userspace
> > - apps can be written in any language, no need to maintain multiple
> >   libraries or add wrappers
> > - security concerns are much smaller (ok, people are trying to
> >   raise the bar with IOMMUs and such, but it's already pretty
> >   good even without)
> >
> > The biggest issue is that you let userspace poke at the device, which
> > is also allowed by the IOMMU to poke at kernel memory (needed for
> > the kernel driver to work).
>
> I think the device won't and shouldn't be allowed to poke at kernel memory.
> Its kernel driver needs some kernel memory to work, but the device doesn't
> have access to it. Instead, the device only has access to:
>
> (1) the entire memory of the VM (if vIOMMU isn't used), or
> (2) the memory belonging to the guest virtio device (if
>     vIOMMU is being used).
>
> Below is the reason:
>
> For the first case, we should program the IOMMU for the hardware device
> based on the info in the memory table, which covers the entire memory of the VM.
>
> For the second case, we should program the IOMMU for the hardware device
> based on the info in the shadow page table of the vIOMMU.
>
> So the memory that can be accessed by the device is limited; it should be
> safe, especially in the second case.
>
> My concern is that, in this RFC, we don't program the IOMMU for the mdev
> device in userspace via the VFIO API directly. Instead, we pass the memory
> table to the kernel driver via the mdev device (BAR0) and ask the driver to do
> the IOMMU programming. Some people may not like it. The main reason why we
> don't program the IOMMU via the VFIO API in userspace directly is that,
> currently, IOMMU drivers don't support the mdev bus.
>
> >
> > Yes, maybe if the device is not buggy it's all fine, but it's better if we
> > do not have to trust the device; otherwise the security picture becomes
> > more murky.
> >
> > I suggested attaching a PASID to (some) queues - see my old post
> > "using PASIDs to enable a safe variant of direct ring access".
>

Ideally, we can have a device bound to its normal driver in the host,
while supporting on-demand allocation of a few queues with a PASID
attached. Through the vhost mdev transport channel, the data path
capability of the queues (as a device) can be exposed to the QEMU vhost
adaptor as a vDPA instance. Then we can avoid the VF number limitation
and provide vhost data path acceleration at a smaller granularity.

> It's pretty cool. We also have some similar ideas.
> Cunming will talk more about this.
>
> Best regards,
> Tiwei Bie
>
> >
> > Then using the IOMMU with VFIO to limit access through the queue to
> > correct ranges of memory.
> >
> >
> > -- 
> > MST
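No such queue-granular interface exists in this RFC or in the kernel at
this point; the following is a purely hypothetical sketch of the idea
Cunming describes, with every name invented for illustration.

/* Hypothetical sketch only - not a real API. The parent device stays
 * bound to its normal host driver, while a few of its queues are
 * allocated on demand with a PASID attached and exposed together as
 * one vDPA instance. All names below are invented. */
#include <stdint.h>
#include <sys/ioctl.h>

struct hypothetical_queue_alloc {
    uint32_t num_queues;  /* queues to carve out for this vDPA instance */
    uint32_t pasid;       /* PASID tagged onto each queue's DMA */
};

static int alloc_vdpa_queues(int parent_fd, uint32_t num_queues,
                             uint32_t pasid)
{
    struct hypothetical_queue_alloc req = {
        .num_queues = num_queues,
        .pasid      = pasid,
    };

    /* The ioctl number is invented. The IOMMU would then confine each
     * queue's DMA to the memory mapped for this PASID, so the VF count
     * no longer limits how many VMs can get vhost acceleration. */
    return ioctl(parent_fd, 0 /* HYPOTHETICAL_ALLOC_QUEUES */, &req);
}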
Jason Wang
2018-Apr-20 03:52 UTC
[RFC] vhost: introduce mdev based hardware vhost backend
On 2018/04/20 02:40, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
>>>>> One problem is that, different virtio ring compatible devices
>>>>> may have different device interfaces. That is to say, we will
>>>>> need different drivers in QEMU. It could be troublesome. And
>>>>> that's what this patch is trying to fix. The idea behind this
>>>>> patch is very simple: mdev is a standard way to emulate devices
>>>>> in the kernel.
>>>> So you just move the abstraction layer from qemu to kernel, and you still
>>>> need different drivers in kernel for different device interfaces of
>>>> accelerators. This looks even more complex than leaving it in qemu. As you
>>>> said, another idea is to implement a userspace vhost backend for accelerators
>>>> which seems easier and could co-work with other parts of qemu without
>>>> inventing new types of messages.
>>> I'm not quite sure. Do you think it's acceptable to
>>> add various vendor specific hardware drivers in QEMU?
>>>
>> I don't object but we need to figure out the advantages of doing it in qemu
>> too.
>>
>> Thanks
> To be frank, the kernel is exactly where device drivers belong. DPDK did
> move them to userspace, but that's merely a requirement for the data path.
> *If* you can have them in kernel, that is best:
> - update the kernel and there's no need to rebuild userspace

Well, you still need to rebuild userspace, since a new vhost backend is
required which speaks the vhost protocol through the mdev API. And I
believe upgrading a userspace package is considered more lightweight
than upgrading the kernel. With mdev, we're likely to repeat the story
of the vhost API: dealing with features/versions and endlessly
inventing new APIs for new features. And you will still need to rebuild
the userspace.

> - apps can be written in any language, no need to maintain multiple
>   libraries or add wrappers

This is not a big issue considering it's not a generic network driver
but an mdev driver; the only possible user is a VM.

> - security concerns are much smaller (ok, people are trying to
>   raise the bar with IOMMUs and such, but it's already pretty
>   good even without)

Well, I think not: kernel bugs are much more serious than userspace
ones. And I bet the kernel driver itself won't be small.

>
> The biggest issue is that you let userspace poke at the
> device, which is also allowed by the IOMMU to poke at
> kernel memory (needed for the kernel driver to work).

I don't quite get it. The userspace driver could be built on top of
VFIO for sure, so kernel memory is perfectly isolated in this case.

>
> Yes, maybe if the device is not buggy it's all fine, but
> it's better if we do not have to trust the device;
> otherwise the security picture becomes more murky.
>
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".
>
> Then using the IOMMU with VFIO to limit access through the queue to
> correct ranges of memory.

Well, a userspace driver could benefit from this too. And we can even
go further by using nested IO page tables to share the IOVA address
space between devices and a VM.

Thanks
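The features/versions churn Jason refers to is visible in the existing
vhost UAPI. Below is a sketch of that negotiation step, assuming an
already-opened vhost fd (e.g. /dev/vhost-net); every new capability means
another feature bit, or another ioctl, that both kernel and userspace must
be taught.

/* Sketch using the existing vhost feature-negotiation ioctls:
 * read the bits the kernel offers, keep the ones this userspace
 * understands, and acknowledge the intersection. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int negotiate_features(int vhost_fd, uint64_t wanted)
{
    uint64_t features;

    if (ioctl(vhost_fd, VHOST_GET_FEATURES, &features) < 0)
        return -1;
    features &= wanted;  /* keep only the bits both sides understand */
    return ioctl(vhost_fd, VHOST_SET_FEATURES, &features);
}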
Michael S. Tsirkin
2018-Apr-20 14:12 UTC
[RFC] vhost: introduce mdev based hardware vhost backend
On Fri, Apr 20, 2018 at 11:52:47AM +0800, Jason Wang wrote:
> > The biggest issue is that you let userspace poke at the
> > device, which is also allowed by the IOMMU to poke at
> > kernel memory (needed for the kernel driver to work).
>
> I don't quite get it. The userspace driver could be built on top of
> VFIO for sure, so kernel memory is perfectly isolated in this case.

VFIO does what it can, but it mostly just has the IOMMU to play with.
So don't overestimate what it can do: it assumes a high level of spec
compliance for protections to work. For example, ATS is enabled by
default if the device has it, and that treats translated requests as
trusted. FLR is assumed to actually reset the device when VFIO is
unbound from it. Etc.

>
> >
> > Yes, maybe if the device is not buggy it's all fine, but
> > it's better if we do not have to trust the device;
> > otherwise the security picture becomes more murky.
> >
> > I suggested attaching a PASID to (some) queues - see my old post "using
> > PASIDs to enable a safe variant of direct ring access".
> >
> > Then using the IOMMU with VFIO to limit access through the queue to
> > correct ranges of memory.
>
> Well, a userspace driver could benefit from this too. And we can even
> go further by using nested IO page tables to share the IOVA address
> space between devices and a VM.
>
> Thanks

Yes, I suggested this separately.

-- 
MST