Jeremie Le Hen
2008-Jan-16 08:29 UTC
[Xen-devel] Isolated Device Domain and I/O Spaces in Xen 3
Hi,

I've already sent this e-mail to xen-users@ but haven't got any answer, so I dare to bug you here, hoping to have more luck. I may end up writing some documentation; is the wiki easily accessible to ordinary users, or should I contact someone with more privileges?

I've read thoroughly the document entitled "Safe Hardware Access with the Xen Virtual Machine Monitor" [1].

- Regarding IDD:
  According to slide 49 in [2], Isolated Driver Domains (IDDs) have only been implemented experimentally and are not used at all in the current official releases of Xen 3. Are there any plans about this?

- Regarding I/O Spaces:
  Currently, Virtual Block Devices (VBD) and Virtual (Network) Interfaces (VIF) are the most common way to provide storage and network devices to Xen PV guests. It is nevertheless possible to assign a PCI device exclusively to a single DomU, in which case the DomU's driver talks to the hardware through the "Safe Hardware Interface" (see figure 1 in [1]), enforced by I/O Spaces, as described in section 4 of [1]. Am I right?

I have a few more questions coming up, if nobody minds me using this list for this purpose.

Thank you.
Best regards,

[1] http://www.cl.cam.ac.uk/netos/papers/2004-oasis-ngio.pdf
[2] http://www.cl.cam.ac.uk/netos/papers/2005-xen-ols.ppt

--
Jeremie Le Hen
< jeremie at le-hen dot org >< ttz at chchile dot org >
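To make the two approaches in the I/O Spaces question concrete, here is a minimal sketch of how they typically appear in a DomU configuration file, assuming the classic xm toolstack; the domain name, disk path, MAC address and PCI BDF are made-up examples, not values from this thread.

    # Hypothetical PV DomU configuration (classic xm toolstack, Python syntax)
    name   = "guest1"
    memory = 512
    kernel = "/boot/vmlinuz-2.6.18-xenU"

    # Paravirtual devices: storage as a VBD, networking as a VIF.
    # The backend drivers (blkback/netback) run in Dom0 or a driver domain;
    # the guest only ever sees the idealized xvda / eth0 frontends.
    disk = [ 'phy:/dev/vg0/guest1-root,xvda,w' ]
    vif  = [ 'mac=00:16:3e:12:34:56, bridge=xenbr0' ]

    # Alternative: exclusive PCI passthrough of one device to this DomU.
    # The guest then needs the native driver for the hardware and programs
    # it directly, within the I/O resources Xen grants to the domain.
    pci = [ '0000:03:00.0' ]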
Ian Pratt
2008-Jan-17 14:10 UTC
RE: [Xen-devel] Isolated Device Domain and I/O Spaces in Xen 3
> I've read thoroughly the document entitled "Safe Hardware Access with
> the Xen Virtual Machine Monitor" [1].
>
> - Regarding IDD:
>   According to slide 49 in [2], Isolated Driver Domains (IDDs) have only
>   been implemented experimentally and are not used at all in the current
>   official releases of Xen 3. Are there any plans about this?

Xen has long had the capability to map h/w devices through to other domains, and plenty of folk use this. Some users then run backend drivers to virtualize the device to other guests, hence effectively making them IDDs. To make them true IDDs you'd have to write scripts to restart the domain on failure etc. and have the devices rebind. The latter path is not well tested, but since it's close to live relocation it may just work.

> - Regarding I/O Spaces:
>   Currently, Virtual Block Devices (VBD) and Virtual (Network) Interfaces
>   (VIF) are the most common way to provide storage and network devices
>   to Xen PV guests. It is nevertheless possible to assign a PCI device
>   exclusively to a single DomU, in which case the DomU's driver talks to
>   the hardware through the "Safe Hardware Interface" (see figure 1 in [1]),
>   enforced by I/O Spaces, as described in section 4 of [1]. Am I right?

Yes. There's even provisional support for passing PCI devices through to HVM domains on VT-d equipped hosts.

Ian
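For illustration, a rough sketch of the driver-domain arrangement described in this reply, assuming the classic xm toolstack and the pciback driver in Dom0; the domain names, PCI BDF and MAC address below are hypothetical.

    # Dom0 first hides the NIC from its own drivers, e.g. by booting the
    # Dom0 kernel with pciback.hide=(0000:02:00.0) (or binding the device
    # to pciback through sysfs), so it can be handed to another domain.

    # netdom.cfg -- the would-be driver domain: it is given the physical
    # NIC and runs netback plus a bridge for the guests it serves.
    name   = "netdom"
    memory = 256
    kernel = "/boot/vmlinuz-2.6.18-xenU"
    disk   = [ 'phy:/dev/vg0/netdom-root,xvda,w' ]
    pci    = [ '0000:02:00.0' ]

    # guest1.cfg -- an ordinary PV guest whose network backend lives in
    # "netdom" rather than Dom0, selected with the backend= vif parameter:
    #   vif = [ 'mac=00:16:3e:65:43:21, bridge=xenbr0, backend=netdom' ]

Turning this into a true IDD in the sense above would additionally need a watchdog that restarts netdom on failure and re-plugs the frontends, which, as noted in the reply, is the less well tested part.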