On 17-06-21, 11:54, Enrico Weigelt, metux IT consult wrote:
> Actually, I am subscribed to the list. We already had debates on it,
> including on your postings (but also other things).
Right.
> And the ASCII version of the spec actually landed on the list last
> year; we had discussions about it there.
I tried to search for it earlier, but never found anything on the
virtio list. Maybe I missed it then.
> I've just had the problem that my patches didn't go through, which is
> very strange, since I actually am on the list and other mails of mine
> went through all the time. I'm now suspecting it's triggered by some
> subtle difference between my regular mail clients and git send-email.
>
> > Since you started this all and still want to do it, I will take my
> > patches back and let you finish with what you started. I will help
> > review them.
>
> Thank you very much.
>
> Please don't get me wrong, I really don't want any kind of power
> play, just want a technically good solution. If there have been any
> misunderstandings at that point, I'm officially saying sorry here.
It's okay, we are both trying to make things better here :)
> Let's be friends.
>
> You mentioned you've been missing things in my spec. Please come
> forward and tell us what exactly you're missing and what your use
> cases are.
I have sent a detailed review of your spec patch; let's do it there
point by point :)
> Note that I've intentionally left out certain "more sophisticated"
> functionality we find on *some* GPIO controllers, e.g. per-line IRQ
> masking or pinmux settings, for several reasons:
>
> * those are only implemented by some hardware
> * often implemented in or at least need to be coordinated with other
> pieces of hw (e.g. in SoCs, pinmux is usually done in a separate
> device)
> * it shall be possible to support even the most simple devices and
> have the more sophisticated things totally optional. Minimum
> requirements for silicon implementations should be the lowest possible
> (IOW: minimal number of logic gates)
>
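One way to make the "totally optional" point above concrete: such
functionality could be negotiated with ordinary virtio feature bits, so
a minimal device simply never offers them. Below is only a rough
sketch; the bit numbers and names are invented for illustration and are
not taken from any posted spec revision:

    /* Hypothetical feature bits -- illustration only, not from the
     * proposed virtio-gpio spec. A device built from minimal logic
     * just leaves these bits unset; the driver then avoids the
     * optional requests entirely. */
    #include <stdbool.h>
    #include <stdint.h>

    #define VIRTIO_GPIO_F_IRQ     (1ULL << 0)  /* per-line interrupts */
    #define VIRTIO_GPIO_F_PINMUX  (1ULL << 1)  /* pinmux control      */

    static bool device_supports(uint64_t device_features, uint64_t feature)
    {
            /* standard virtio-style negotiation: use a feature only
             * if the device actually offered it */
            return (device_features & feature) == feature;
    }

Whether the real spec ends up modelling this via feature bits, separate
queues or config-space flags is exactly the kind of detail worth
settling in the point-by-point review.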
> >> You sound like a politician that tries to push a hidden agenda,
> >> made by some secret interest group in the back room, against the
> >> people - like "resistance is futile".
> >
> > :)
>
> Perhaps I was overreacting a bit at that point. But: this is really
> the kind of talk we have heard from politicians and corporate leaders
> for many years, whenever they wanna push something through that we the
> people don't want. Politicians use it as a social engineering tool for
> demotivating any resistance. Over here in Germany this has even become
> a meme, and folks from the CCC made a radio show about it, named after
> that word (the German word is "alternativlos" - in English: without
> any alternative). No idea about other countries, maybe it's a cultural
> issue, but over here that kind of talk has become a red flag.
>
> Of course, I never intended to accuse you of being one of these people.
> Sorry if there's been a misunderstanding.
It sounded strange yesterday, to be honest, but I have moved past it
already :)
> Let's get back to your implementation: you've mentioned you're routing
> raw virtio traffic into userland, to some other process (outside VMMs
> like qemu) - how exactly are you doing that?
>
> That could be interesting for completely different scenarios. For
> example, I'm currently exploring how to get VirGL running between
> separate processes under the same kernel instance (for now we only
> have the driver side inside the VM and the device outside it), meaning
> driver and device are running as separate processes.
>
> The primary use case is containers that shall have really generic GPU
> drivers, not knowing anything about the actual hardware on the host.
> Currently, container workloads wanting to use a GPU need to have special
> drivers for exactly the HW the host happens to have. This makes generic,
> portable container images a tough problem.
>
> I haven't dug deeply into the matter, but some virtio-tap transport
> could be a relatively easy (probably not the most efficient) way to
> solve this problem. In that scenario it would look like this:
>
> * we have a "virgl server" (could be some X or Wayland application, or
> a completely separate compositor) that opens up the device-end of a
> "virtio-tap" transport and attaches its virtio-gpu device emulation
> to it.
> * "virtio-tap" now creates a driver-end, and the kernel probes a
> virtio-gpu instance on this (also leading to a new DRI device)
> * the container runtime picks up the new DRI device and maps it into
> the container(s)
> [ yet an open question whether one DRI device for many containers
> is enough ]
> * the container application sees that virtio-gpu DRI device and speaks
> to it (mesa->virgl backend)
> * the "virgl server" receives buffers and commands via virtio and
> sends them to the host's GL or Gallium API.
>
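To make the flow above a bit more tangible, here is a very rough
userspace sketch of what opening the device-end could look like.
Everything below is hypothetical -- the /dev/virtio-tap node, the
VTAP_CREATE_DEVICE ioctl and struct vtap_device_config are invented
names, since no such transport exists yet:

    /* Hypothetical "virgl server" side: create a virtio-tap device-end
     * so the kernel probes a matching virtio-gpu driver instance.
     * All names here are made up for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    struct vtap_device_config {
            unsigned int device_id;   /* virtio device ID, e.g. 16 for GPU */
            unsigned int num_queues;  /* virtqueues the emulation provides */
    };
    #define VTAP_CREATE_DEVICE _IOW('V', 0, struct vtap_device_config)

    int main(void)
    {
            struct vtap_device_config cfg = { .device_id = 16, .num_queues = 2 };
            int fd = open("/dev/virtio-tap", O_RDWR);

            if (fd < 0) {
                    perror("open /dev/virtio-tap");
                    return 1;
            }

            /* The kernel would now probe virtio-gpu on the driver-end,
             * creating the DRI device the container runtime maps in. */
            if (ioctl(fd, VTAP_CREATE_DEVICE, &cfg) < 0) {
                    perror("VTAP_CREATE_DEVICE");
                    close(fd);
                    return 1;
            }

            /* An event loop servicing virtqueue buffers (poll plus
             * read/write on fd, forwarding to GL/Gallium) would follow. */
            close(fd);
            return 0;
    }

How the virtqueue memory is actually shared between the two processes
(read/write on the fd versus mmap of a shared ring) is probably the
more interesting design question here.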
> Once we're already there, we might think about whether it could make
> sense to put virtio routing into kvm itself, instead of letting qemu
> catch page faults and virtual irqs. We have yet to see whether that's
> a good idea, but I can imagine some performance improvements here.
We (at Linaro) normally work on software enablement and not end
products (that rarely happens though), like framework-level work in the
kernel which can later be used by everyone to build their drivers on.
There are many companies, like Qualcomm, ST Micro, etc., who want to
use Virtio in general for Automotive or other applications/solutions.
The purpose of Project Stratos [1], an initiative of Linaro, is to work
towards developing hypervisor-agnostic Virtio interfaces and
standards. The end products and applications will be worked on by the
members directly, and we need to add the basic minimum support, with
all the generally required APIs and interfaces.
--
viresh
[1] https://linaro.atlassian.net/wiki/spaces/STR/overview