> Jean-Eric Cuendet wrote:
>
> >> Obviously, with a Xen-aware graphics driver, it would be possible
> >> to solve both of these problems. It is, however, not entirely
> >> trivial to write such a driver for a decent modern graphics card.
> >> Trust me, I used to work for 3DLabs as a driver developer; the
> >> source code for the 2D side of the driver is several megabytes, and
> >> the 3D parts of the driver are MUCH larger than that. And I think
> >> an nVidia or ATI driver is even larger - at least the binary is...
> >
> > So if I understand well, to solve that we need a Xen driver (the
> > same way VMWare does) that is then used by Windows. I understand
> > that well. IMO, accelerated graphics is not high priority; if the
> > card can just be shared by 2 domU OSes (Linux + Windows) with decent
> > perf (like VMWare), then Desktop with Xen is near.
> > Hope someone with knowledge will do that soon!
> > Anyone to sponsor me to develop this? :-) Thanks for your great
> > answers.
> > -jec
>
> Xen offers VGA emulation through a VNC backend. The Xen-Windows video
> driver you speak of would probably just be a VNC driver that turns
> graphics commands directly into VNC encoding rather than traversing
> the VGA emulation layer. Perhaps this beast already exists? Maybe
> this is impossible? I wonder what its performance would be compared
> to Xen VGA emulation and the regular VNC server?
>
> All that said, I'm sure Xen VGA emulation and regular VNC servers are
> good enough for most purposes.
>
> Dan.
>
There are two distinct possibilities for improving graphics using the
VNC model:
1. A more direct connection between the internal Windows format of
graphics requests and the output format - e.g. a Windows driver that
emits VNC commands directly (see the sketch after this list).
2. A more intelligent and modern form of graphics commands. The current
VGA model inside QEMU emulates a very basic VGA device. A more modern
approach may well be able to perform "more work per CPU cycle". Older
graphics cards are much more pixel-oriented, and although it may be
possible to write a block of pixels at once, the commands are pretty
simple and inefficient; processing the graphics becomes quite slow
because every write to the graphics card itself needs to be emulated.
Modern graphics cards usually have some sort of "command stream"
instead: essentially a list of operations in a memory buffer, with
something signalling an interrupt at the end of a group of commands.
Memory writes need no direct emulation - once a portion of the buffer
has been filled in, ownership of that part of the buffer is transferred
to the emulation software, which interprets the whole stream in one go.
[You can do this with a SHARED memory buffer, as long as the OS side
strictly never writes to the portion currently owned by the
graphics-processing side - this is how the hardware version does
things.]
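
To make the shared-buffer idea concrete, here is a minimal C sketch of
what such a command ring might look like, loosely modelled on the
producer/consumer rings Xen already uses for block and network I/O.
Everything in it (gfx_cmd, gfx_ring, vnc_send_fill_rect) is a
hypothetical illustration, not an existing Xen or VNC-library
interface:

    #include <stdint.h>

    #define RING_SLOTS 256

    struct gfx_cmd {                  /* one "solid fill" command */
        uint16_t x, y, w, h;
        uint32_t colour;
    };

    struct gfx_ring {
        /* prod is advanced only by the guest, cons only by the
         * backend, so each side owns one index and the fast path
         * needs no locking. */
        volatile uint32_t prod, cons;
        struct gfx_cmd cmd[RING_SLOTS];
    };

    /* Guest side (point 2): plain writes into shared memory, so no
     * per-register trap-and-emulate is needed. */
    static int gfx_submit(struct gfx_ring *r, const struct gfx_cmd *c)
    {
        if (r->prod - r->cons == RING_SLOTS)
            return -1;                    /* ring full, caller retries */
        r->cmd[r->prod % RING_SLOTS] = *c;
        __sync_synchronize();             /* publish slot before index */
        r->prod++;                        /* then notify the backend   */
        return 0;
    }

    /* Hypothetical VNC encoder entry point - stands in for turning a
     * command straight into an RFB FramebufferUpdate rectangle
     * (point 1). */
    extern void vnc_send_fill_rect(uint16_t x, uint16_t y,
                                   uint16_t w, uint16_t h,
                                   uint32_t colour);

    /* Backend side: woken by a notification, drains the whole batch
     * in one go rather than emulating individual register writes. */
    static void gfx_drain(struct gfx_ring *r)
    {
        while (r->cons != r->prod) {
            struct gfx_cmd c = r->cmd[r->cons % RING_SLOTS];
            vnc_send_fill_rect(c.x, c.y, c.w, c.h, c.colour);
            __sync_synchronize();
            r->cons++;
        }
    }

The point is that the guest's fast path is ordinary memory writes; only
the notification at the end of a batch has to cross into the
hypervisor.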
A non-VNC model:
You could also, conceivably, have a situation where the graphics
controller is handled directly by either Dom0 or the guest through a
"para-virtualized" driver: a driver that is Xen-aware and has some sort
of "agreement" on how to share the hardware. This would require a
hypervisor-level locking/sharing mechanism, so that the driver in one
guest can perform the necessary hardware accesses without being
"disturbed" by another guest trying to touch the same shared resource.
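
As a rough illustration of what such an "agreement" might look like
from the driver's point of view, here is a hypothetical sketch. Xen
has no such hypercalls today; the names are invented purely for
illustration:

    #include <stdint.h>

    /* Invented hypercalls - nothing like these exists in Xen; they
     * stand in for whatever hypervisor-level mutual exclusion would
     * be needed. */
    extern void hypercall_gfx_lock(void);   /* blocks until the hw is ours */
    extern void hypercall_gfx_unlock(void);

    static volatile uint32_t *gfx_regs;     /* mapped MMIO registers
                                             * (setup not shown) */

    /* The few real hardware touches happen only while the lock is
     * held, so one guest's register sequence can never interleave
     * with another's. */
    static void submit_blit(uint32_t src, uint32_t dst, uint32_t len)
    {
        hypercall_gfx_lock();
        gfx_regs[0] = src;
        gfx_regs[1] = dst;
        gfx_regs[2] = len;
        gfx_regs[3] = 1;                    /* kick the blit engine */
        hypercall_gfx_unlock();
    }

The hard part, as noted below, is not this handful of accesses but
everything around them.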
This is, however, a BIG job; even for a pretty simple graphics card
there'd be quite a lot of work. For COMPLEX graphics cards the driver
is HUGE, and although the actual accesses to the hardware are probably
few and pretty centralized, the work of understanding the driver,
modifying it in a safe way, and getting it to work reliably (and
effectively) across multiple OS architectures is not easy.
--
Mats
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users