Hi

Full virtualization is about providing multiple virtual ISA-level environments and mapping them onto a single physical one. One particular aspect of this mapping is I/O instructions (explicit or memory-mapped I/O). In general, there are two strategies for partitioning devices: in time or in space. Partitioning a device in space means that the device (or a part of it) is exclusively available to a single VM. Partitioning a device in time (time multiplexing) means that it can be used by multiple VMs, but only one VM may use it at any point in time.

I am trying to understand how I/O virtualization at the ISA level works if a device is shared between multiple VM instances. At a very high level, it should work as follows. First, the VMM has to intercept the VM's I/O commands (I/O instructions, or loads/stores to dedicated memory addresses - let's ignore interrupts for the moment). This could be done by traps or by replacing the respective instructions with VMM calls to I/O primitives. The VMM keeps multiple device model instances (one for each VM using the device) in memory. The models somehow reflect the low-level I/O API of the device. Depending on which I/O command is issued by the VM, either the memory model is changed or a number of I/O instructions are executed to make the physical device state reflect the one represented in the memory model.

This approach raises a number of questions. It would be great if some of the virtualization experts here could shed some light on them (even though they are not immediately related to Xen, I know):

- What do these device memory models look like? Is there a common (automata) theory behind them, or are they built ad hoc?
- What kinds of strategies/algorithms are used in the merge phase, i.e. the phase in which the virtual memory model and the physical one are synchronized? What kinds of problems can occur in this phase?
- Are specific usage patterns exploited in real-world implementations (e.g. VMware) to simplify the virtualization (model or merge phase)?
- Do you have any interesting pointers to literature dealing with full I/O virtualization? In particular, how does VMware's full virtualization work with respect to I/O?
- Is every device time-partitionable? If not, which requirements does it have to meet to be time-partitionable? -> I don't think every device is. What about a device which supports different modes of operation? If two VMs drive the virtual device in different modes, it may not be possible to constantly switch between them. OK, this is pretty artificial.

Thanks a lot for your help!

Best wishes

Thomas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
> -----Original Message-----
> From: xen-devel-bounces@lists.xensource.com
> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Thomas Heinz
> Sent: 20 November 2006 23:39
> To: xen-devel@lists.xensource.com
> Subject: [Xen-devel] Full virtualization and I/O
>
> Hi
>
> Full virtualization is about providing multiple virtual ISA level
> environments and mapping them to a single physical one. One particular
> aspect of this mapping are I/O instructions (explicit or mmapped I/O).
> In general, there are two strategies to partition the devices, either
> in time or in space. Partitioning a device in space means that the
> device (or a part of it) is exclusively available to a single VM.
> Partitioning a device in time (or time multiplexing) means that it can
> be used by multiple VMs but only one VM may use it at any point in time.

The Xen approach is not to allow any sharing of devices: a device is owned by one domain, and no other domain can directly access it. There is a protocol of so-called frontend/backend drivers. The frontend is basically a dummy device that forwards requests to another domain (normally domain 0); the backend half of the driver pair picks up this data and forwards it to some processing task, which then sends the packet on to the real hardware.

For fully virtualized mode (hardware-supported virtual machines, such as AMD-V or Intel VT, aka HVM), there is a different model, where a "device model" is involved to perform the hardware modelling. In Xen, this is a modified version of qemu (called qemu-dm), which has a fairly complete set of "hardware" in its model. It has, for example, an IDE controller, several types of network devices, graphics, and mouse/keyboard models. The things you'd usually find in a PC, that is.
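To make the "one device model instance per domain" idea concrete, here is a toy sketch in C (invented names and a made-up UART model - this is not Xen or qemu-dm code): each domain owns a private copy of the emulated device's register state, and the intercept handler routes a trapped port access to the model of the domain that caused the exit.

```c
#include <stdint.h>

/* Toy sketch only -- NOT Xen code.  Each guest domain gets its own
 * instance of an emulated device's register state. */

#define UART_BASE 0x3F8          /* classic COM1 port range */
#define UART_SIZE 8

struct uart_model {              /* hypothetical per-domain device state */
    uint8_t regs[UART_SIZE];
};

struct domain {
    int id;
    struct uart_model uart;      /* one model instance per domain */
};

/* Called on a port-I/O intercept.  Returns 1 if the port belonged to
 * the emulated UART and was handled, 0 otherwise. */
static int handle_pio_write(struct domain *d, uint16_t port, uint8_t val)
{
    if (port < UART_BASE || port >= UART_BASE + UART_SIZE)
        return 0;                /* not ours: another model, or pass-through */
    d->uart.regs[port - UART_BASE] = val;  /* touch this guest's state only */
    return 1;
}

static int handle_pio_read(struct domain *d, uint16_t port, uint8_t *val)
{
    if (port < UART_BASE || port >= UART_BASE + UART_SIZE)
        return 0;
    *val = d->uart.regs[port - UART_BASE]; /* value comes from the model */
    return 1;
}
```

In a real device model the handlers of course have side effects too (raising interrupts, reading backing storage, and so on); the sketch only shows the per-domain routing.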
The way it works is that the hypervisor intercepts port I/O and memory-mapped I/O regions that match the devices involved (such as the A0000-BFFFF region for the VGA frame buffer, or I/O ports 0x1F0-0x1F7 for the IDE controller) and forwards a request to qemu-dm, where the operation changes the current device state. When necessary, the state change results in, for example, a read request to the "hard disk" (which may be a real disk, a file on a local disk, or a file on a network storage device, to give some examples).

There is also the option of using the frontend drivers described above in the fully virtualized model.

Finally, while I'm on the subject of fully virtualized mode: it is currently not possible to give a DMA-based device to a fully virtualized domain. The reason is that the guest OS will have been told that its memory runs from 0..256MB (say), while its actual machine-physical addresses are at 256MB..512MB. The OS is completely unaware of this "mismatch". So the OS will take the virtual address of some buffer (say, a network packet) and turn it into a "physical address", which will be an address in the range 0..256MB. This will of course (at least) lead to the wrong data being transmitted, as the actual data is somewhere in the range 256MB..512MB. The only solution is an IOMMU, which can translate the guest's understanding of a physical address (0..256MB) into a machine-physical address (256MB..512MB).

> I am trying to understand how I/O virtualization on the ISA level
> works if a device is shared between multiple VM instances. On a very
> high level, it should be as follows. First of all, the VMM has to
> intercept the VM's I/O commands (I/O instructions or load/store to
> dedicated memory addresses - let's ignore interrupts for the moment).
> This could be done by traps or by replacing the resp. instructions by
> VMM calls to I/O primitives.
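As an aside, the guest-physical vs. machine-physical mismatch described earlier in this mail is easy to illustrate with a toy translation function (invented numbers and a contiguous layout - not Xen code; the real mapping is a per-page p2m table, and an IOMMU does this translation in hardware for DMA):

```c
#include <stdint.h>

/* Toy illustration: the guest believes its RAM starts at 0, but the
 * machine pages actually live 256MB higher.  A DMA engine works on raw
 * machine addresses, so the guest's "physical" address must be
 * translated before it can be handed to a device. */

#define GUEST_RAM_SIZE  (256ULL << 20)   /* guest thinks: 0..256MB  */
#define MACHINE_OFFSET  (256ULL << 20)   /* reality: 256MB..512MB   */

/* Translate a guest-physical address to a machine-physical one.
 * Returns 0 for an address outside the guest's RAM: handing the
 * untranslated address to a device would corrupt someone else's
 * memory. */
static uint64_t guest_to_machine(uint64_t gpa)
{
    if (gpa >= GUEST_RAM_SIZE)
        return 0;
    return gpa + MACHINE_OFFSET;  /* contiguous case; real p2m is per page */
}
```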
> The VMM keeps multiple device model instances (one for each VM using
> the device) in memory. The models somehow reflect the low level I/O
> API of the device. Depending on which I/O command is issued by the VM,
> either the memory model is changed or a number of I/O instructions are
> executed to make the physical device state reflect the one represented
> in the memory model.

By ISA, do you mean "Instruction Set Architecture" or something else (I presume it's NOT the ISA bus...)?

Intercepting port I/O or MMIO instructions is not that hard: in HVM, the two processor architectures have specific intercepts and bitmaps to indicate which I/O instructions should be intercepted. MMIO requires the page tables to be set up such that the memory-mapped region is mapped "not present", so that any access to the region causes a page fault; the page fault is then analyzed to see whether it is for an MMIO address or is a "real" page fault.

For para-virtualization the model is similar, but the exact way the port I/O or MMIO instruction is intercepted differs slightly - in essence it's the same principle. Let me know if you really need to know how Xen goes about doing this, as it's quite complicated (more so than the HVM version, for sure).

> This approach brings up a number of questions. It would be great if
> some of the virtualization experts here could shed some light on them
> (even though they are not immediately related to Xen, I know):
>
> - How do these device memory models look like? Is there a common
> (automata) theory behind or are they done ad hoc?

Not sure what you're asking for here.
The devices are either modeled after a REAL device (qemu-dm), in which case the model resembles the real hardware device it emulates as closely as possible, or they use the frontend/backend driver, which presents an "idealized model": a request contains just the basic data that the OS normally provides to the driver, and it is placed in a queue with a message-signalling system to tell the other side that there is something in the queue.

> - What kind of strategies/algorithms are used in the merge phase,
> i.e. the phase where the virtual memory model and the physical one
> are synchronized? What kind of problems can occur in this phase?

The Xen approach is to avoid this by giving each device to only one machine.

> - Are specific usage patterns used in real world implementations (e.g.
> VMWare) to simplify the virtualization (model or merge phase)?

This is probably the wrong list to ask detailed questions about how VMware works... ;-)

> - Do you have any interesting pointers to literature dealing with
> full I/O virtualization? In particular, how does VMWare's full
> virtualization works with respect to I/O?

Again, wrong list for VMware questions.

> - Is every device time partitionable? If not, which requirements does
> it have to meet to be time partitionable?

Certainly not - I would say that almost all devices are NOT time-partitionable, as the state in the device depends on its current usage. The more complex the device, the more likely it is to have difficulties, but even as simple a device as a serial port would struggle to work in a time-shared fashion. (Not to mention that serial ports are generally used for multiple transactions that make up a "bigger picture transaction": a web server connected via a serial modem, for example, would send a packet of several hundred bytes to the serial port driver, which is then portioned out as and when the serial port is ready to send another few bytes. If you switch from one guest to another during this process, and the second guest also has something to send on the serial port, you'd end up with a very scrambled message from the first guest, and quite likely the second guest's message completely lost!)

There are some devices that are specifically built to manage multiple hosts, but other than that, any sharing of a device requires some software to gather up "a full transaction" and then send it to the actual hardware, often also waiting for the transaction to complete (for example, the interrupt signal to say that a hard disk write is complete).

> -> I don't think every device is. What about a device which supports
> different modes of operation. If two VMs drive the virtual device in
> different modes, it may not be possible to constantly switch between
> them. Ok, this is pretty artificial.

A particular problem is devices where you can't necessarily read back the last mode setting, which may well be the case for many different devices. You can't, for example, read back all the registers on an IDE device, because a read of a particular address may give the status rather than the command last sent, or some such.

--
Mats

> Thanks a lot for your help!
>
> Best wishes
>
> Thomas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Hi Mats,

Thanks for your detailed explanation.

As you mentioned in your post, could you elaborate on using an unmodified driver in an HVM domain (i.e. using the frontend driver in a fully virtualized domain)? Do you think a para-virtualized domain will have exactly the same behavior as a fully virtualized domain when both of them use this driver to access virtual block devices?

Best regards,

Liang

----- Original Message -----
From: "Petersson, Mats" <Mats.Petersson@amd.com>
To: "Thomas Heinz" <thomasheinz@gmx.net>; <xen-devel@lists.xensource.com>
Sent: Wednesday, November 22, 2006 9:24 AM
Subject: RE: [Xen-devel] Full virtualization and I/O

[full quote of Mats' reply above snipped]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
> -----Original Message----- > From: Liang Yang [mailto:multisyncfe991@hotmail.com] > Sent: 22 November 2006 16:51 > To: Petersson, Mats > Cc: Thomas Heinz; xen-devel@lists.xensource.com > Subject: Re: [Xen-devel] Full virtualization and I/O > > Hi Mats, > > Thanks for your explanation in such details. > > As you mentioned in your post, could you elaborate using > unmodified driver > in HVM domain (i.e. using front-end driver in > full-virtualized domain)? Do > you think para-virtualized domain will have exactly the same > behavior as > full-virtualized domain when both of them are using this > unmodified driver > to access virtual block devices?Not sure exactly what you''re asking, but if you''re asking if the performance of driver-related work will be approximately the same, yes. By the way, I wouldn''t call that an "unmodified" driver - it is definitely a MODIFIED driver (a para-virtual driver). -- Mats> > Best regards, > > Liang > > ----- Original Message ----- > From: "Petersson, Mats" <Mats.Petersson@amd.com> > To: "Thomas Heinz" <thomasheinz@gmx.net>; > <xen-devel@lists.xensource.com> > Sent: Wednesday, November 22, 2006 9:24 AM > Subject: RE: [Xen-devel] Full virtualization and I/O > > > > -----Original Message----- > > From: xen-devel-bounces@lists.xensource.com > > [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of > > Thomas Heinz > > Sent: 20 November 2006 23:39 > > To: xen-devel@lists.xensource.com > > Subject: [Xen-devel] Full virtualization and I/O > > > > Hi > > > > Full virtualization is about providing multiple virtual ISA level > > environments and mapping them to a single physical one. One > > particular > > aspect of this mapping are I/O instructions (explicit or > > mmapped I/O). In > > general, there are two strategies to partition the devices, > > either in time > > or in space. Partitioning a device in space means that the > > device (or a > > part of it) is exclusively available to a single VM. 
> > Partitioning a device > > in time (or time multiplexing) means that it can be used by > > multiple VMs > > but only one VM may use it at any point in time. > > The Xen approach is to not allow any sharing of devices, a device is > owned by one domain, no other domain can directly access the device. > There is a protocol of so called frontend/backend driver which is > basically a dummy-device that forwards a request to another domain > (normally domain 0) and the other half of the driver-pair is > picking up > this data, forwards it to some processing task, that then sends the > packet onto the real hardware. > > For fully virtualized mode (hardware supported virtual > machine, such as > AMD-V or Intel VT, aka HVM), there is a different model, > where a "device > model" is involved to perform the hardware modelling. In Xen, this is > using a modified version of qemu (called qemu-dm), which has a fairly > complete set of "hardware" in it''s model. It''s got for example IDE > controller, several types of network devices, graphics and > mouse/keyboard models. The things you''d usually find in a PC, that is. > The way it works is that the hypervisor intercepts IOIO and memory > mapped IO regions that match the devices involved (such as the > A0000-BFFFF region for VGA frame buffer memory or the 0x1F0-0x1F7 IO > ports for the IDE controller), and forwards a request from the > hypervisor to qemu-dm, where the operation changes the current state, > and when it''s necessary, the state-change will result in for example a > read-request to the "hard-disk" (which may be a real disk, a file on a > local disk, or a file on a network storage device, to give some > examples). > > There is also the option of using the frontend drivers as described > above in the fully virtualized model. > > Finally, while I''m on the subject of fully virtualized mode: It is > currently not possible to give a DMA-based device to a > fully-virtualized > domain. 
The reason for this is that the guest OS will have been told > that memory is from 0..256MB (say), and it''s actual machine physical > address is at 256MB..512MB. The OS is completely unaware of this > "mismatch". So the OS will perform some operation to take a virtual > address of some buffer (say a network packet) and make it into a > "physical address", which will be an address in the range of 0..256MB. > This will of course (at least) lead to the wrong data being > transmitted, > as the address of the actual data is somewhere in the range > 256MB..512MB. The only solution to this is to have an IOMMU, which can > translate the guest''s understanding of a physical address > (0..256MB) to > a machine physical address (256..512MB). > > > > > I am trying to understand how I/O virtualization on the ISA > > level works if > > a device is shared between multiple VM instances. On a very > > high level, it > > should be as follows. First of all, the VMM has to intercept > > the VM''s I/O > > commands (I/O instructions or load/store to dedicated memory > > addresses - > > let''s ignore interrupts for the moment). This could be done > > by traps or by > > replacing the resp. instructions by VMM calls to I/O > > primitives. The VMM > > keeps multiple device model instances (one for each VM using > > the device) > > in memory. The models somehow reflect the low level I/O API > > of the device. > > Depending on which I/O command is issued by the VM, either > the memory > > model is changed or a number of I/O instructions are executed > > to make the > > physical device state reflect the one represented in the > memory model. > > Do you by ISA mean "Instruction Set Architecture" or something else (I > presume it''s NOT meaning ISA-bus...)? > > Intercepting IOIO instructions or MMIO instructions is not that hard - > in HVM the two processor architectures have specific intercepts and > bitmaps to indicate which IO instructions should be intercepted. 
MMIO > will require the page-tables to be set up such that the memory mapped > region is mapped "not present" so that any operation to this region > gives a page-fault, and then the page-fault is analyzed to see if it''s > for a MMIO address or for a "real page fault". > > For para-virtualization, the model is similar, but the exact model of > how to intercept the IOIO or MMIO instruction is slightly different - > but in essence it''s the same principle. Let me know if you really need > to know how Xen goes about doing this, as it''s quite complicated (more > so than the HVM version, for sure). > > > > > > This approach brings up a number of questions. It would be > > great if some of > > the virtualization experts here could shed some light on them > > (even though > > they are not immediately related to Xen, I know): > > > > - How do these device memory models look like? Is there a common > > (automata) theory behind or are they done ad hoc? > > Not sure what you''re asking for here. Since the devices are either > modeled after a REAL device (qemu-dm) and as such will resemble as > closely as possible the REAL hardware device that it''s > emulating, or in > the frontend/backend driver, there is an "idealized model", such that > the request contains just the basic data that the OS provides normally > to the driver, and it''s placed in a queue with a message-signaling > system to tell the other side that it''s got something in the queue. > > > - What kind of strategies/algorithms are used in the merge > > phase, i.e. the > > phase where the virtual memory model and the physical one are > > synchronized? What kind of problems can occur in this phase? > > The Xen approach is to avoid this by only giving one device to each > machine. > > > - Are specific usage patterns used in real world > implementations (e.g. > > VMWare) to simplify the virtualization (model or merge phase)? > > This is probably the wrong list to ask detailed questions about how > VMWare works... 
;-)

> - Do you have any interesting pointers to literature dealing with full
>   I/O virtualization? In particular, how does VMWare's full
>   virtualization work with respect to I/O?

Again, wrong list for VMWare questions.

> - Is every device time partitionable? If not, which requirements does it
>   have to meet to be time partitionable?

Certainly not - I would say that almost all devices are NOT time partitionable, as the state in the device depends on the current usage. The more complex the device, the more likely it is to have difficulties, but even as simple a device as a serial port would struggle to work in a time-shared fashion. (Not to mention that serial ports are generally used for multiple transactions that make up a "bigger picture" transaction: a web server connected via a serial modem sends a packet of several hundred bytes to the serial port driver, which is portioned out as and when the serial port is ready to send another few bytes. If you switch from one guest to another during this process, and the second guest also has something to send on the serial port, you'd end up with a very scrambled message from the first guest, and quite likely the second guest's message completely lost!)

There are some devices specifically built to manage multiple hosts, but other than that, sharing a device requires some software to gather up "a full transaction" and then send it to the actual hardware, often also waiting for the transaction to complete (for example, the interrupt that signals a hard disk write is complete).

> -> I don't think every device is. What about a device which supports
>    different modes of operation? If two VMs drive the virtual device in
>    different modes, it may not be possible to constantly switch between
>    them. Ok, this is pretty artificial.
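The serial-port scrambling argument can be made concrete with a small sketch (all function names and messages invented; this only illustrates the interleaving, not any real driver):

```python
# Naively time-slicing a serial port between two guests interleaves
# their partially-sent messages on the wire; gathering each guest's
# "full transaction" before touching the device keeps output intact.

def naive_timeslice(msg_a: str, msg_b: str, slice_len: int = 4) -> str:
    """Switch guests every slice_len bytes, mid-message."""
    wire, a, b = [], 0, 0
    while a < len(msg_a) or b < len(msg_b):
        wire.append(msg_a[a:a + slice_len]); a += slice_len
        wire.append(msg_b[b:b + slice_len]); b += slice_len
    return "".join(wire)

def gather_transactions(msg_a: str, msg_b: str) -> str:
    """Only hand a complete message to the device at a time."""
    return msg_a + msg_b

scrambled = naive_timeslice("GET /index.html", "HELO mail.example")
intact = gather_transactions("GET /index.html", "HELO mail.example")

assert "GET /index.html" not in scrambled  # guest A's message is shredded
assert "GET /index.html" in intact         # transaction gathering preserves it
```

This is exactly the "software to gather up a full transaction" that Mats says any device sharing requires.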
A particular problem is devices where you can't necessarily read back the last mode setting, which may well be the case for many devices. You can't, for example, read back all the registers on an IDE device, because a read of a particular address may give the status rather than the command last sent, or some such.

--
Mats
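The read-back problem is easy to demonstrate with a toy model. This is a deliberately simplified, invented class (on real IDE hardware the 0x1F7 port does serve as write-command/read-status, but nothing else here is faithful):

```python
# Toy model of a write-command / read-status port: writing issues a
# command, but reading the same address returns the STATUS register.
# A VMM that tries to snapshot device state by reading registers back
# therefore cannot recover the last command written - which is why
# such devices cannot simply be context-switched between guests.

class ToyIDEPort:
    STATUS_READY = 0x40   # illustrative "drive ready" status value

    def __init__(self):
        self._last_command = None  # hardware may not expose this at all

    def write(self, value: int) -> None:
        self._last_command = value  # guest issues a command

    def read(self) -> int:
        return self.STATUS_READY    # read yields status, NOT the command

port = ToyIDEPort()
port.write(0x20)                    # a command, e.g. "read sectors"
assert port.read() != 0x20          # snapshot-by-readback loses the command
assert port.read() == ToyIDEPort.STATUS_READY
```

To time-multiplex such a device, the VMM would have to shadow every write in its own device model rather than rely on the hardware to report its state, which is what the device-model approach discussed earlier does.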
Hi Mats,

This para-virtualized driver in the HVM domain is just like the dummy device driver in a para-virtualized domain. And after using this para-virtualized driver, the HVM domain is also using the frontend/backend model to handle I/O, instead of the "device model" which a typical HVM domain would use. Am I correct?

Liang

----- Original Message -----
From: "Petersson, Mats" <Mats.Petersson@amd.com>
To: "Liang Yang" <multisyncfe991@hotmail.com>
Cc: "Thomas Heinz" <thomasheinz@gmx.net>; <xen-devel@lists.xensource.com>
Sent: Wednesday, November 22, 2006 9:57 AM
Subject: RE: [Xen-devel] Full virtualization and I/O

> Thanks for your explanation in such detail.
>
> As you mentioned in your post, could you elaborate on using an unmodified
> driver in the HVM domain (i.e. using the frontend driver in a
> fully-virtualized domain)? Do you think a para-virtualized domain will
> have exactly the same behavior as a fully-virtualized domain when both of
> them are using this driver to access virtual block devices?

Not sure exactly what you're asking, but if you're asking whether the performance of driver-related work will be approximately the same: yes.

By the way, I wouldn't call that an "unmodified" driver - it is definitely a MODIFIED driver (a para-virtual driver).
--
Mats

----- Original Message -----
From: "Petersson, Mats" <Mats.Petersson@amd.com>
To: "Thomas Heinz" <thomasheinz@gmx.net>; <xen-devel@lists.xensource.com>
Sent: Wednesday, November 22, 2006 9:24 AM
Subject: RE: [Xen-devel] Full virtualization and I/O

> Full virtualization is about providing multiple virtual ISA level
> environments and mapping them to a single physical one. [snip]

The Xen approach is to not allow any sharing of devices: a device is owned by one domain, and no other domain can access it directly. There is a protocol of so-called frontend/backend drivers, where the frontend is basically a dummy device that forwards a request to another domain (normally domain 0); the other half of the driver pair picks up this data and forwards it to some processing task, which then sends the packet on to the real hardware.

For fully virtualized mode (hardware-supported virtual machines, such as AMD-V or Intel VT, aka HVM), there is a different model, where a "device model" is involved to perform the hardware modelling.
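The frontend/backend idea can be sketched as follows. This is a minimal toy model, not the real Xen ring protocol or grant-table mechanism; every name here (`SharedRing`, `Frontend.submit`, `Backend.poll`) is invented for illustration:

```python
# Sketch of a frontend/backend driver pair: the frontend enqueues an
# idealized request (just the basic data the OS provides) onto a shared
# queue and signals the other side; the backend drains the queue and
# would, in reality, hand the work to the real device driver.

from collections import deque

class SharedRing:
    """Stands in for a shared-memory ring plus an event channel."""
    def __init__(self):
        self.requests = deque()
        self.pending_event = False

class Frontend:
    def __init__(self, ring: SharedRing):
        self.ring = ring

    def submit(self, op: str, sector: int, count: int) -> None:
        self.ring.requests.append({"op": op, "sector": sector, "count": count})
        self.ring.pending_event = True   # "kick" the other domain

class Backend:
    def __init__(self, ring: SharedRing):
        self.ring = ring
        self.completed = []

    def poll(self) -> None:
        if not self.ring.pending_event:
            return
        while self.ring.requests:
            req = self.ring.requests.popleft()
            self.completed.append(req)   # real backend would hit hardware
        self.ring.pending_event = False

ring = SharedRing()
fe, be = Frontend(ring), Backend(ring)
fe.submit("read", sector=128, count=8)
be.poll()
assert be.completed[0]["sector"] == 128
```

Note how the request carries only idealized fields (operation, sector, count) rather than device register values - this is the "idealized model" Mats contrasts with the qemu-dm approach of emulating real hardware.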
In Xen, this is a modified version of qemu (called qemu-dm), which has a fairly complete set of "hardware" in its model: it has, for example, an IDE controller, several types of network device, graphics, and mouse/keyboard models - the things you'd usually find in a PC. The way it works is that the hypervisor intercepts IOIO instructions and memory-mapped IO regions that match the devices involved (such as the A0000-BFFFF region for VGA frame buffer memory, or the 0x1F0-0x1F7 IO ports for the IDE controller) and forwards a request to qemu-dm, where the operation changes the current device state; when necessary, the state change results in, for example, a read request to the "hard disk" (which may be a real disk, a file on a local disk, or a file on network storage, to give some examples).

There is also the option of using the frontend drivers described above in the fully virtualized model.

Finally, while I'm on the subject of fully virtualized mode: it is currently not possible to give a DMA-based device to a fully virtualized domain. [snip]
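The dispatch step described above - classifying a trapped access and forwarding it to the right device model - can be sketched like this (an invented structure, not qemu-dm's actual code; only the two address ranges are taken from the text):

```python
# Toy dispatcher: the VMM classifies a trapped IO/MMIO access by its
# address and routes it to the matching device model in qemu-dm-style
# emulation. Ranges below are the two examples given in the thread.

INTERCEPT_RANGES = [
    (0x1F0, 0x1F7, "ide"),        # primary IDE controller IO ports
    (0xA0000, 0xBFFFF, "vga"),    # VGA frame buffer MMIO region
]

def route_access(addr: int) -> str:
    """Return which device model handles a trapped address."""
    for lo, hi, model in INTERCEPT_RANGES:
        if lo <= addr <= hi:
            return model
    return "unhandled"            # not a modelled device

assert route_access(0x1F0) == "ide"
assert route_access(0xA1234) == "vga"
assert route_access(0x3F8) == "unhandled"
```

In the real system the handler would then update the model's register state and, when a state change warrants it, issue the actual backing-store operation (disk read, packet send, and so on).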
> From: Liang Yang
> Sent: 22 November 2006 17:17
> Subject: Re: [Xen-devel] Full virtualization and I/O
>
> This para-virtualized driver in the HVM domain is just like the dummy
> device driver in a para-virtualized domain. And after using this
> para-virtualized driver, the HVM domain is also using the
> frontend/backend model to handle I/O, instead of the "device model"
> which a typical HVM domain would use. Am I correct?

Yes, exactly. Of course, the HVM domain may well use a mixture - say, the normal (device-model) IDE driver to access the disk, and a para-virtual network driver to access the network.

--
Mats
Hi Mats,

The problem with using the QEMU device model is that it can support at most four disks, since IDE supports only two master/slave pairs. Do you know if this limitation will go away in QEMU in the future, e.g. by emulating SCSI disks instead of IDE devices, so that it can support many more than four block devices?

Thanks,

Liang

----- Original Message -----
From: "Petersson, Mats" <Mats.Petersson@amd.com>
To: "Liang Yang" <multisyncfe991@hotmail.com>
Cc: <xen-devel@lists.xensource.com>; "Thomas Heinz" <thomasheinz@gmx.net>
Sent: Wednesday, November 22, 2006 10:22 AM
Subject: RE: [Xen-devel] Full virtualization and I/O

[earlier discussion snipped]
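The four-disk ceiling Liang mentions falls directly out of IDE's topology - two channels, each with a master and a slave - as this one-liner enumeration shows (device naming as in Linux's hda..hdd convention):

```python
# IDE topology: 2 channels x 2 positions = 4 addressable devices,
# which is the hard upper bound on disks the emulated IDE controller
# can expose to a guest.
channels = ["primary", "secondary"]
positions = ["master", "slave"]
ide_slots = [(c, p) for c in channels for p in positions]
assert len(ide_slots) == 4   # hda, hdb, hdc, hdd
```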
-- Mats> > Liang > > ----- Original Message ----- > From: "Petersson, Mats" <Mats.Petersson@amd.com> > To: "Liang Yang" <multisyncfe991@hotmail.com> > Cc: "Thomas Heinz" <thomasheinz@gmx.net>; > <xen-devel@lists.xensource.com> > Sent: Wednesday, November 22, 2006 9:57 AM > Subject: RE: [Xen-devel] Full virtualization and I/O > > > > > > -----Original Message----- > > From: Liang Yang [mailto:multisyncfe991@hotmail.com] > > Sent: 22 November 2006 16:51 > > To: Petersson, Mats > > Cc: Thomas Heinz; xen-devel@lists.xensource.com > > Subject: Re: [Xen-devel] Full virtualization and I/O > > > > Hi Mats, > > > > Thanks for your explanation in such details. > > > > As you mentioned in your post, could you elaborate using > > unmodified driver > > in HVM domain (i.e. using front-end driver in > > full-virtualized domain)? Do > > you think para-virtualized domain will have exactly the same > > behavior as > > full-virtualized domain when both of them are using this > > unmodified driver > > to access virtual block devices? > > Not sure exactly what you''re asking, but if you''re asking if the > performance of driver-related work will be approximately the > same, yes. > > By the way, I wouldn''t call that an "unmodified" driver - it is > definitely a MODIFIED driver (a para-virtual driver). 
> > -- > Mats > > > > Best regards, > > > > Liang > > > > ----- Original Message ----- > > From: "Petersson, Mats" <Mats.Petersson@amd.com> > > To: "Thomas Heinz" <thomasheinz@gmx.net>; > > <xen-devel@lists.xensource.com> > > Sent: Wednesday, November 22, 2006 9:24 AM > > Subject: RE: [Xen-devel] Full virtualization and I/O > > > > > > > -----Original Message----- > > > From: xen-devel-bounces@lists.xensource.com > > > [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of > > > Thomas Heinz > > > Sent: 20 November 2006 23:39 > > > To: xen-devel@lists.xensource.com > > > Subject: [Xen-devel] Full virtualization and I/O > > > > > > Hi > > > > > > Full virtualization is about providing multiple virtual ISA level > > > environments and mapping them to a single physical one. One > > > particular > > > aspect of this mapping are I/O instructions (explicit or > > > mmapped I/O). In > > > general, there are two strategies to partition the devices, > > > either in time > > > or in space. Partitioning a device in space means that the > > > device (or a > > > part of it) is exclusively available to a single VM. > > > Partitioning a device > > > in time (or time multiplexing) means that it can be used by > > > multiple VMs > > > but only one VM may use it at any point in time. > > > > The Xen approach is to not allow any sharing of devices, a device is > > owned by one domain, no other domain can directly access the device. > > There is a protocol of so called frontend/backend driver which is > > basically a dummy-device that forwards a request to another domain > > (normally domain 0) and the other half of the driver-pair is > > picking up > > this data, forwards it to some processing task, that then sends the > > packet onto the real hardware. > > > > For fully virtualized mode (hardware supported virtual > > machine, such as > > AMD-V or Intel VT, aka HVM), there is a different model, > > where a "device > > model" is involved to perform the hardware modelling. 
In > Xen, this is > > using a modified version of qemu (called qemu-dm), which > has a fairly > > complete set of "hardware" in it''s model. It''s got for example IDE > > controller, several types of network devices, graphics and > > mouse/keyboard models. The things you''d usually find in a > PC, that is. > > The way it works is that the hypervisor intercepts IOIO and memory > > mapped IO regions that match the devices involved (such as the > > A0000-BFFFF region for VGA frame buffer memory or the 0x1F0-0x1F7 IO > > ports for the IDE controller), and forwards a request from the > > hypervisor to qemu-dm, where the operation changes the > current state, > > and when it''s necessary, the state-change will result in > for example a > > read-request to the "hard-disk" (which may be a real disk, > a file on a > > local disk, or a file on a network storage device, to give some > > examples). > > > > There is also the option of using the frontend drivers as described > > above in the fully virtualized model. > > > > Finally, while I''m on the subject of fully virtualized mode: It is > > currently not possible to give a DMA-based device to a > > fully-virtualized > > domain. The reason for this is that the guest OS will have been told > > that memory is from 0..256MB (say), and it''s actual machine physical > > address is at 256MB..512MB. The OS is completely unaware of this > > "mismatch". So the OS will perform some operation to take a virtual > > address of some buffer (say a network packet) and make it into a > > "physical address", which will be an address in the range > of 0..256MB. > > This will of course (at least) lead to the wrong data being > > transmitted, > > as the address of the actual data is somewhere in the range > > 256MB..512MB. The only solution to this is to have an > IOMMU, which can > > translate the guest''s understanding of a physical address > > (0..256MB) to > > a machine physical address (256..512MB). 
> > > > > > > > I am trying to understand how I/O virtualization on the ISA > > > level works if > > > a device is shared between multiple VM instances. On a very > > > high level, it > > > should be as follows. First of all, the VMM has to intercept > > > the VM's I/O > > > commands (I/O instructions or load/store to dedicated memory > > > addresses - > > > let's ignore interrupts for the moment). This could be done > > > by traps or by > > > replacing the resp. instructions by VMM calls to I/O > > > primitives. The VMM > > > keeps multiple device model instances (one for each VM using > > > the device) > > > in memory. The models somehow reflect the low level I/O API > > > of the device. > > > Depending on which I/O command is issued by the VM, either > > the memory > > > model is changed or a number of I/O instructions are executed > > > to make the > > > physical device state reflect the one represented in the > > memory model. > > > > Do you by ISA mean "Instruction Set Architecture" or > something else (I > > presume it's NOT meaning ISA-bus...)? > > > > Intercepting IOIO instructions or MMIO instructions is not > that hard - > > in HVM the two processor architectures have specific intercepts and > > bitmaps to indicate which IO instructions should be > intercepted. MMIO > > will require the page-tables to be set up such that the > memory mapped > > region is mapped "not present" so that any operation to this region > > gives a page-fault, and then the page-fault is analyzed to > see if it's > > for an MMIO address or for a "real page fault". > > > > For para-virtualization, the model is similar, but the > exact model of > > how to intercept the IOIO or MMIO instruction is slightly > different - > > but in essence it's the same principle. Let me know if you > really need > > to know how Xen goes about doing this, as it's quite > complicated (more > > so than the HVM version, for sure). > > > > > > > > > > This approach brings up a number of questions.
It would be > > > great if some of > > > the virtualization experts here could shed some light on them > > > (even though > > > they are not immediately related to Xen, I know): > > > > > > - How do these device memory models look like? Is there a common > > > (automata) theory behind or are they done ad hoc? > > > > Not sure what you're asking for here. The devices are either > > modeled after a REAL device (qemu-dm), and as such resemble the REAL > > hardware device being emulated as > > closely as possible, or, in > > the frontend/backend driver, there is an "idealized model", > such that > > the request contains just the basic data that the OS > provides normally > > to the driver, and it's placed in a queue with a message-signaling > > system to tell the other side that it's got something in the queue. > > > > > - What kind of strategies/algorithms are used in the merge > > > phase, i.e. the > > > phase where the virtual memory model and the physical one are > > > synchronized? What kind of problems can occur in this phase? > > > > The Xen approach is to avoid this by only giving one device to each > > machine. > > > > > - Are specific usage patterns used in real world > > implementations (e.g. > > > VMWare) to simplify the virtualization (model or merge phase)? > > > > This is probably the wrong list to ask detailed questions about how > > VMWare works... ;-) > > > > > - Do you have any interesting pointers to literature dealing > > > with full I/O > > > virtualization? In particular, how does VMWare's full > > virtualization > > > works with respect to I/O? > > > > Again, wrong list for VMWare questions. > > > > > - Is every device time partitionable? If not, which > > > requirements does it > > > have to meet to be time partitionable? > > > > Certainly not - I would say that almost all devices are NOT time > > partitionable, as the state in the device is dependent on > the current > > usage.
The more complex the device is, the more likely it is to have > > difficulties, but even such a simple device as a serial port would > > struggle to work in a time-shared fashion (not to mention > that serial > > ports generally are used for multiple transactions to make a whole > > "bigger picture transaction", so for example a web-server > > connected via > > a serial modem would send a packet of several hundred bytes to the > > serial port driver, which is then portioned out as and when > the serial > > port is ready to send another few bytes. If you switch from > > one guest to > > another during this process, and the second guest also has > > something to > > send on the serial port, you'd end up with a very scrambled > > message from > > the first guest and quite likely the second guest's message > completely > > lost!). > > > > There are some devices that are specifically built to > manage multiple > > hosts, but other than that, any sharing of a device requires some > > software to gather up "a full transaction" and then send > > that to the > > actual hardware, often also waiting for the transaction to > > complete (for > > example the interrupt signal to say that the hard disk write is > > complete). > > > > > > > -> I don't think every device is. What about a device > > which supports > > > different modes of operation. If two VMs drive the > > > virtual device in > > > different modes, it may not be possible to constantly > > > switch between > > > them. Ok, this is pretty artificial. > > > > A particular problem is devices where you can't necessarily > read back > > the last mode-setting, which may well be the case in many different > > devices. You can't, for example, read back all the > registers on an IDE > > device, because the read of a particular address may give the status > > rather than the current command sent, or some such. > > > > -- > > Mats > > > > > > Thanks a lot for your help!
> > > Best wishes > > > Thomas > > > _______________________________________________ > > > Xen-devel mailing list > > > Xen-devel@lists.xensource.com > > > http://lists.xensource.com/xen-devel
On Wed, 22 Nov 2006 10:36:42 -0700 "Liang Yang" <multisyncfe991@hotmail.com> wrote:

> Hi Mats,
>
> The problem using the QEMU device model is it can only support four disks at most, as IDE can only support two master/slave pair devices.

IDE can support lots more than this.

Alan

_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Puthiyaparambil, Aravindh
2006-Nov-22 18:33 UTC
RE: [Xen-devel] Full virtualization and I/O
Mats,

> For fully virtualized mode (hardware supported virtual machine, such as

> Finally, while I'm on the subject of fully virtualized mode: It is > currently not possible to give a DMA-based device to a fully-virtualized > domain. The reason for this is that the guest OS will have been told > that memory is from 0..256MB (say), and its actual machine physical > address is at 256MB..512MB. The OS is completely unaware of this > "mismatch". So the OS will perform some operation to take a virtual > address of some buffer (say a network packet) and make it into a > "physical address", which will be an address in the range of 0..256MB. > This will of course (at least) lead to the wrong data being transmitted, > as the address of the actual data is somewhere in the range > 256MB..512MB. The only solution to this is to have an IOMMU, which can > translate the guest's understanding of a physical address (0..256MB) to > a machine physical address (256..512MB).

I know that individual domains can be given direct access to individual PCI devices.

http://www.cl.cam.ac.uk/research/srg/netos/xen/readmes/user/user.html#SECTION03230000000000000000

Is this not possible with HVM domains? Is it possible for HVM domains to be a PCI frontend and receive a PCI device which is "hidden" from Dom0? Or is this what paravirtualized drivers for HVM domains are doing?

Thanks, Aravindh

_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
> -----Original Message----- > From: xen-devel-bounces@lists.xensource.com > [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of > Puthiyaparambil, Aravindh > Sent: 22 November 2006 18:34 > To: Petersson, Mats; Thomas Heinz; xen-devel@lists.xensource.com > Subject: RE: [Xen-devel] Full virtualization and I/O > > Mats, > > > For fully virtualized mode (hardware supported virtual machine, such > as > > Finally, while I'm on the subject of fully virtualized mode: It is > > currently not possible to give a DMA-based device to a > fully-virtualized > > domain. The reason for this is that the guest OS will have been told > > that memory is from 0..256MB (say), and its actual machine physical > > address is at 256MB..512MB. The OS is completely unaware of this > > "mismatch". So the OS will perform some operation to take a virtual > > address of some buffer (say a network packet) and make it into a > > "physical address", which will be an address in the range > of 0..256MB. > > This will of course (at least) lead to the wrong data being > transmitted, > > as the address of the actual data is somewhere in the range > > 256MB..512MB. The only solution to this is to have an > IOMMU, which can > > translate the guest's understanding of a physical address (0..256MB) > to > > a machine physical address (256..512MB). > > I know that individual domains can be given direct access to > individual > PCI devices. > > http://www.cl.cam.ac.uk/research/srg/netos/xen/readmes/user/user.html#SECTION03230000000000000000 > > Is this not possible with HVM domains? Is it possible for HVM > domains to > be a PCI frontend and receive a PCI device which is "hidden" > from Dom0? > Or is this what paravirtualized drivers for HVM domains are doing?

No, PV-on-HVM drivers are using the "standard" front/back-end driver structures. You could write a Virtualization-Aware driver for HVM.
That would require replacing any "virtual-to-physical" translations from the native OS driver version with a "virtualization aware" model that calls the hypervisor to translate the data. Aside from that, all you need is a way to inform the HVM domain of the existence of that PCI device, which shouldn't be a major deal. -- Mats> > Thanks, > Aravindh > > _______________________________________________ > Xen-devel mailing list > Xen-devel@lists.xensource.com > http://lists.xensource.com/xen-devel > > >_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Hi Mats Thanks a lot for your detailed reply! You wrote:

> For fully virtualized mode (hardware supported virtual machine, such as > AMD-V or Intel VT, aka HVM), there is a different model, where a "device > model" is involved to perform the hardware modelling. In Xen, this is > using a modified version of qemu (called qemu-dm), which has a fairly > complete set of "hardware" in its model. It's got, for example, an IDE > controller, several types of network devices, graphics and > mouse/keyboard models. The things you'd usually find in a PC, that is. > The way it works is that the hypervisor intercepts IOIO and memory > mapped IO regions that match the devices involved (such as the > A0000-BFFFF region for VGA frame buffer memory or the 0x1F0-0x1F7 IO > ports for the IDE controller), and forwards a request from the > hypervisor to qemu-dm, where the operation changes the current state, > and when necessary, the state-change will result in for example a > read-request to the "hard-disk" (which may be a real disk, a file on a > local disk, or a file on a network storage device, to give some > examples).

This is very interesting. So qemu models the low level device interface (I/O interface) in software and translates I/O actions to either model changes or to library or system calls (since QEMU runs as a normal process).

Is there any documentation about this or is the source the doc ;)

> Do you by ISA mean "Instruction Set Architecture" or something else (I > presume it's NOT meaning ISA-bus...)?

Yes, I mean instruction set architecture.

> Intercepting IOIO instructions or MMIO instructions is not that hard - > in HVM the two processor architectures have specific intercepts and > bitmaps to indicate which IO instructions should be intercepted.
MMIO > will require the page-tables to be set up such that the memory mapped > region is mapped "not present" so that any operation to this region > gives a page-fault, and then the page-fault is analyzed to see if it's > for an MMIO address or for a "real page fault". > > For para-virtualization, the model is similar, but the exact model of > how to intercept the IOIO or MMIO instruction is slightly different - > but in essence it's the same principle. Let me know if you really need > to know how Xen goes about doing this, as it's quite complicated (more > so than the HVM version, for sure).

Although it is interesting to see how interception works in detail, I am currently more interested in how device state is modelled and translated into system/library calls or sequences of I/O instructions. So, in fact, the operation after the interception has taken place.

> Not sure what you're asking for here. Since the devices are either > modeled after a REAL device (qemu-dm) and as such will resemble as > closely as possible the REAL hardware device that it's emulating, or in > the frontend/backend driver, there is an "idealized model", such that > the request contains just the basic data that the OS provides normally > to the driver, and it's placed in a queue with a message-signaling > system to tell the other side that it's got something in the queue.

I am basically asking about general/theoretical concepts behind device modelling as e.g. done by qemu. I think it's a good idea to understand how qemu actually does this.

> Certainly not - I would say that almost all devices are NOT time > partitionable, as the state in the device is dependent on the current > usage.
> The more complex the device is, the more likely it is to have > difficulties, but even such a simple device as a serial port would > struggle to work in a time-shared fashion (not to mention that serial > ports generally are used for multiple transactions to make a whole > "bigger picture transaction", so for example a web-server connected via > a serial modem would send a packet of several hundred bytes to the > serial port driver, which is then portioned out as and when the serial > port is ready to send another few bytes. If you switch from one guest to > another during this process, and the second guest also has something to > send on the serial port, you'd end up with a very scrambled message from > the first guest and quite likely the second guest's message completely > lost!).

Very nice example. Clearly, high level driver interfaces (e.g. send/receive, read/write) can be designed in a way that time-sharing is possible, e.g. using message/transaction queues. On the I/O level, it is likely to be harder to reconstruct the "full transaction". It might also be necessary to make assumptions about the actual guest, i.e. the way the device is being used.

> A particular problem is devices where you can't necessarily read back > the last mode-setting, which may well be the case in many different > devices. You can't, for example, read back all the registers on an IDE > device, because the read of a particular address may give the status > rather than the current command sent, or some such.

This could be stored in memory when you have a virtual (in-memory) device model.

Best wishes Thomas _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
> -----Original Message----- > From: xen-devel-bounces@lists.xensource.com > [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of > Thomas Heinz > Sent: 23 November 2006 16:23 > To: Petersson, Mats > Cc: xen-devel@lists.xensource.com > Subject: Re: [Xen-devel] Full virtualization and I/O > > Hi Mats > > Thanks a lot for your detailed reply! > > You wrote: > > For fully virtualized mode (hardware supported virtual > machine, such as > > AMD-V or Intel VT, aka HVM), there is a different model, > where a "device > > model" is involved to perform the hardware modelling. In > Xen, this is > > using a modified version of qemu (called qemu-dm), which > has a fairly > > complete set of "hardware" in its model. It's got, for example, an IDE > > controller, several types of network devices, graphics and > > mouse/keyboard models. The things you'd usually find in a > PC, that is. > > The way it works is that the hypervisor intercepts IOIO and memory > > mapped IO regions that match the devices involved (such as the > > A0000-BFFFF region for VGA frame buffer memory or the 0x1F0-0x1F7 IO > > ports for the IDE controller), and forwards a request from the > > hypervisor to qemu-dm, where the operation changes the > current state, > > and when necessary, the state-change will result in > for example a > > read-request to the "hard-disk" (which may be a real disk, > a file on a > > local disk, or a file on a network storage device, to give some > > examples). > > This is very interesting. So qemu models the low level device > interface > (I/O interface) in software and translates I/O actions to > either model > changes or to library or system calls (since QEMU runs as > normal process). > > Is there any documentation about this or is the source the doc ;)

I haven't looked for any documentation - for the work I've done using QEMU, I've just used the source as docs.
It's a fairly large project, so there may be some docs somewhere....

> > Do you by ISA mean "Instruction Set Architecture" or > something else (I > > presume it's NOT meaning ISA-bus...)? > > Yes, I mean instruction set architecture. > > > Intercepting IOIO instructions or MMIO instructions is not > that hard - > > in HVM the two processor architectures have specific intercepts and > > bitmaps to indicate which IO instructions should be > intercepted. MMIO > > will require the page-tables to be set up such that the > memory mapped > > region is mapped "not present" so that any operation to this region > > gives a page-fault, and then the page-fault is analyzed to > see if it's > > for an MMIO address or for a "real page fault". > > > > For para-virtualization, the model is similar, but the > exact model of > > how to intercept the IOIO or MMIO instruction is slightly > different - > > but in essence it's the same principle. Let me know if you > really need > > to know how Xen goes about doing this, as it's quite > complicated (more > > so than the HVM version, for sure). > > Although it is interesting to see how interception works in > detail, I am > currently more interested in how device state is modelled and > translated > into system/library calls or sequences of I/O instructions. > So, in fact > the operation after the interception has taken place.

Ok, so if we take as an example an IDE block read, it consists of several IO instructions. Assuming a helper void outb(uint16 port_no, uint8 value):

outb(0x1f2, sector_count);
outb(0x1f3, sector_number);
outb(0x1f4, cylinder_lsb);
outb(0x1f5, cylinder_msb);
outb(0x1f6, drive_head);
outb(0x1f7, command);

[In LBA mode, sector_number and the two cylinder numbers (and I think the head part of drive_head) convert into a "large" sector number, rather than a cylinder/head/sector combination]. In QEMU, the initial 5 writes will just change the internal state of the IDE controller (i.e.
the number of sectors, sector/cylinder numbers, etc, are just stored in some per-controller data structure). Note that disk0 and disk1 on one controller share the same register set - the drive bit of drive_head selects drive 0 or drive 1. The sixth out (in our sequence; it's based on the address, not the number of writes) will tell QEMU that the sequence is a complete transaction, and it will go ahead and perform the read of the storage that corresponds to the IDE device (such as a file or partition). The data read is stored in a "per device" buffer; when the operation is complete on the device side, the guest will be informed of this via a virtual interrupt. This will, assuming normal behaviour, then trigger a 512-byte read (in the form of 16-bit "in" or "ins" instructions with a port address of 0x1f0) where the data is read by the guest into whatever memory it wanted to use. The completion of a write operation, on the other hand, is of course complete first when the 512-byte sector has been written using the "out" or "outs" instruction to port 0x1f0.

> > Not sure what you're asking for here. Since the devices are either > > modeled after a REAL device (qemu-dm) and as such will resemble as > > closely as possible the REAL hardware device that it's > emulating, or in > > the frontend/backend driver, there is an "idealized model", > such that > > the request contains just the basic data that the OS > provides normally > > to the driver, and it's placed in a queue with a message-signaling > > system to tell the other side that it's got something in the queue. > > I am basically asking about general/theoretical concepts > behind device > modelling as e.g. done by qemu. I think it's a good idea to > understand how > qemu actually does this. > > > Certainly not - I would say that almost all devices are NOT time > > partitionable, as the state in the device is dependent on > the current > > usage.
> The more complex the device is, the more likely it is to have > > difficulties, but even such a simple device as a serial port would > > struggle to work in a time-shared fashion (not to mention > that serial > > ports generally are used for multiple transactions to make a whole > > "bigger picture transaction", so for example a web-server > connected via > > a serial modem would send a packet of several hundred bytes to the > > serial port driver, which is then portioned out as and when > the serial > > port is ready to send another few bytes. If you switch from > one guest to > > another during this process, and the second guest also has > something to > > send on the serial port, you'd end up with a very scrambled > message from > > the first guest and quite likely the second guest's message > completely > > lost!). > > Very nice example. Clearly, high level driver interfaces (e.g. > send/receive, read/write) can be designed in a way that > time-sharing is > possible, e.g. using message/transaction queues. On the I/O > level, it is > likely to be harder to reconstruct the "full transaction". It > might also > be necessary to make assumptions about the actual guest, i.e. > the way the > device is being used.

Yes, this is essentially how the frontend/backend drivers work. They send a complete "high level" message to, for example, send an ethernet packet or write a sector to the disk. As each message is "complete" (not dependent on other messages), it's entirely possible (and in fact I believe that's how Xen works) to use a single back-end driver (per device type) for multiple front-end drivers. Of course, if we're talking about disk access, there is another complication, which has nothing to do with the actual physical interface: the meta-data that is the "filesystem" will also need to be guaranteed to be "correct".
Most filesystems have a whole lot of different data structures (such as a list of free blocks, directory structures, a file-name-to-directory-entry binary tree, etc). If you have two guest operating systems writing to the same "disk", the filesystem will most certainly get corrupted... For example, imagine that both systems are creating a new file and pick the same block from the free block list... Or deleting files at the same time and putting two different free blocks into the same free block list entry... So, even if you could share the device-interface, the consistency of the actual device would not be good if two guests DID share the disk-interface to the same physical instance of a "disk" (whether it's ACTUALLY a real disk or a file-based disk-image that "pretends" to be a disk).

> > A particular problem is devices where you can't necessarily > read back > > the last mode-setting, which may well be the case in many different > > devices. You can't, for example, read back all the > registers on an IDE > > device, because the read of a particular address may give the status > > rather than the current command sent, or some such. > > This could be stored in memory when you have a virtual > (in-memory) device > model.

Sure, that's how it works in QEMU - but that requires that you intercept the actual operation and store the individual steps of a full transaction.

-- Mats

> > Best wishes > > Thomas > > _______________________________________________ > Xen-devel mailing list > Xen-devel@lists.xensource.com > http://lists.xensource.com/xen-devel > > >_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel