On Thu, Nov 11, 2021 at 3:59 PM Wang, Wei W <wei.w.wang at intel.com> wrote:
>
> On Wednesday, November 10, 2021 6:50 PM, Michael S. Tsirkin wrote:
> > On Wed, Nov 10, 2021 at 07:12:36AM +0000, Wang, Wei W wrote:
> >
> > hypercalls are fundamentally hypervisor dependent though.
>
> Yes, each hypervisor needs to support it.
> We could simplify the design and implementation to the minimum, so that
> each hypervisor can easily support it.
> Once every hypervisor has the support, the guest (MigTD) could be a
> unified version (e.g. no need for each hypervisor user to develop their
> own MigTD using their own vsock transport).
>
> > Assuming you can carve up a hypervisor-independent hypercall, using
> > it for something as mundane and specific as vsock for TDX seems like
> > a huge overkill.
> > For example, virtio could benefit from the faster vmexits that
> > hypercalls give you for signalling.
> > How about a combination of virtio-mmio and hypercalls for fast-path
> > signalling then?
>
> We thought about virtio-mmio. There are some barriers:
> 1) It wasn't originally intended for x86 machines. The only machine
> type in QEMU that supports it (to run on x86) is microvm. But "microvm"
> doesn't support TDX currently, and adding this support might need a
> larger effort.
Can you explain why microvm needs a larger effort? It looks to me that
it fits TDX perfectly, since it has a smaller attack surface.

Thanks
> 2) It's simpler than virtio-pci, but still more complex than a hypercall.
> 3) Some CSPs don't have virtio support in their software, so this
> might add too much development effort for them.
>
> This usage doesn't need high performance, so a faster hypercall for
> signalling isn't required, I think.
> (But if the hypercall has been verified to be much faster than the
> current EPT-misconfig-based notification, it could be added for
> general virtio usage.)
>
> >
> > > 2) It is simpler. It doesn't rely on any complex bus enumeration
> > > (e.g. a virtio-pci based vsock device may need the whole
> > > implementation of PCI).
> > >
> > >
> >
> > The next thing people will try to do is implement a bunch of other
> > devices on top of it. virtio used pci simply because everyone
> > implements pci. And the reason for *that* is that implementing a
> > basic pci bus is dead simple; the whole of pci.c in qemu is <3000 LOC.
>
> This doesn't include the PCI enumeration in SeaBIOS and the PCI driver
> in the guest, though.
>
> Virtio has high performance; I think that's an important reason that
> more devices are continually added.
> For this transport, I couldn't envision that a bunch of devices would
> be added. It's a simple PV method.
>
>
> >
> > >
> > > An example usage is the communication between MigTD and the host
> > > (Page 8 at
> > > https://static.sched.com/hosted_files/kvmforum2021/ef/TDX%20Live%20Migration_Wei%20Wang.pdf).
> > >
> > > MigTD communicates with the host to assist the migration of the
> > > target (user) TD.
> > >
> > > MigTD is part of the TCB, so its implementation is expected to be
> > > as simple as possible
> > > (e.g. a bare-metal implementation without an OS and without PCI
> > > driver support).
> > >
> > >
> >
> > Try to list drawbacks? For example, passthrough for nested virt
> > isn't possible, unlike pci; neither are hardware implementations.
> >
>
> Why wouldn't a hypercall be possible for nested virt?
> An L2 hypercall goes to L0 directly, and L0 can decide whether to
> forward the call to L1 (in our case, I think there's no need, as the
> packet will go out), right?
>
> Its drawbacks are obvious (e.g. low performance).
> In general, I think it could be considered a complement to virtio.
> I think most usages would choose virtio, as they don't worry about the
> complexity and they pursue high performance.
> For some special usages that find virtio too complex and want
> something simpler, they would consider using this transport.
>
> Thanks,
> Wei
>