Hello,

I've had a look at PVH support, and I have a few questions:

- events are still dispatched the PV way through the callback, right?
- I guess FPU errors don't trigger an INT13, so I don't need to handle
  that?
- How about the console and store MFNs from the boot info? Are they
  still MFNs, or actually PFNs?
- How about PV network in non-copy mode? It used to be done
  with a page transfer, which the frontend would free, does
  XENMEM_decrease_reservation still use MFNs, or PFNs?

Samuel
I forgot:

Samuel Thibault wrote on Mon 28 Jan 2013 02:59:30 +0100:
> - events are still dispatched the PV way through the callback, right?
> - I guess FPU errors don't trigger an INT13, so I don't need to handle
>   that?
> - How about the console and store MFNs from the boot info? Are they
>   still MFNs, or actually PFNs?
> - How about PV network in non-copy mode? It used to be done
>   with a page transfer, which the frontend would free, does
>   XENMEM_decrease_reservation still use MFNs, or PFNs?

- What does hvm_callback_vector mean exactly?

Samuel
On Mon, 28 Jan 2013, Samuel Thibault wrote:
> Hello,
>
> I've had a look at PVH support, and I have a few questions:
>
> - events are still dispatched the PV way through the callback, right?

No, they are injected as an X86 vector (0xf3).

> - I guess FPU errors don't trigger an INT13, so I don't need to handle
>   that?

I think that's right, but Mukesh can confirm this.

> - How about the console and store MFNs from the boot info? Are they
>   still MFNs, or actually PFNs?

PFNs

> - How about PV network in non-copy mode? It used to be done
>   with a page transfer, which the frontend would free, does
>   XENMEM_decrease_reservation still use MFNs, or PFNs?

PFNs

Mukesh, did I get it right?
Would you be up for writing down these basic pieces of information
regarding the PVH interface on a wiki page?
So that other kernel hackers like Samuel can port their favourite open
source kernel to it?
Maybe add something about the shared_info page and the grant_table too.
On Mon, Jan 28, 2013 at 03:09:57PM +0000, Stefano Stabellini wrote:
> On Mon, 28 Jan 2013, Samuel Thibault wrote:
> > Hello,
> >
> > I've had a look at PVH support, and I have a few questions:
> >
> > - events are still dispatched the PV way through the callback, right?
>
> No, they are injected as an X86 vector (0xf3).

This is a non-PVH question, but the X86 vector injection only works
on CPU0, right? Which means that for HVM backends all of the events
are coalesced in one vector and, worse yet, they are not per-CPU, so
we end up IPI-ing the other CPUs. Stefano, you were the original
author of this - what would be needed to get this to work across
multiple CPUs and such?

> > - I guess FPU errors don't trigger an INT13, so I don't need to handle
> >   that?
>
> I think that's right, but Mukesh can confirm this.
>
> > - How about the console and store MFNs from the boot info? Are they
> >   still MFNs, or actually PFNs?
>
> PFNs
>
> > - How about PV network in non-copy mode? It used to be done
> >   with a page transfer, which the frontend would free, does
> >   XENMEM_decrease_reservation still use MFNs, or PFNs?
>
> PFNs
>
> Mukesh, did I get it right?
> Would you be up for writing down these basic pieces of information
> regarding the PVH interface on a wiki page?
> So that other kernel hackers like Samuel can port their favourite open
> source kernel to it?
> Maybe add something about the shared_info page and the grant_table too.
Stefano Stabellini wrote on Mon 28 Jan 2013 15:09:57 +0000:
> On Mon, 28 Jan 2013, Samuel Thibault wrote:
> > I've had a look at PVH support, and I have a few questions:
> >
> > - events are still dispatched the PV way through the callback, right?
>
> No, they are injected as an X86 vector (0xf3).

Without any error code being pushed on the stack, I guess? This is a
detail that had caught me in the past with vector 0x0f :)

Samuel
>>> On 28.01.13 at 17:06, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Mon, Jan 28, 2013 at 03:09:57PM +0000, Stefano Stabellini wrote:
>> On Mon, 28 Jan 2013, Samuel Thibault wrote:
>> > Hello,
>> >
>> > I've had a look at PVH support, and I have a few questions:
>> >
>> > - events are still dispatched the PV way through the callback, right?
>>
>> No, they are injected as an X86 vector (0xf3).
>
> This is a non-PVH question, but the X86 vector injection only works
> on CPU0, right? Which means that for HVM backends all of the events
> are coalesced in one vector and, worse yet, they are not per-CPU, so
> we end up IPI-ing the other CPUs. Stefano, you were the original
> author of this - what would be needed to get this to work across
> multiple CPUs and such?

IIRC the vector callback was added to overcome precisely that
limitation of the original PCI IRQ delivery method.

Jan
On Mon, 28 Jan 2013, Jan Beulich wrote:
> >>> On 28.01.13 at 17:06, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Mon, Jan 28, 2013 at 03:09:57PM +0000, Stefano Stabellini wrote:
> >> On Mon, 28 Jan 2013, Samuel Thibault wrote:
> >> > Hello,
> >> >
> >> > I've had a look at PVH support, and I have a few questions:
> >> >
> >> > - events are still dispatched the PV way through the callback, right?
> >>
> >> No, they are injected as an X86 vector (0xf3).
> >
> > This is a non-PVH question, but the X86 vector injection only works
> > on CPU0, right? Which means that for HVM backends all of the events
> > are coalesced in one vector and, worse yet, they are not per-CPU, so
> > we end up IPI-ing the other CPUs. Stefano, you were the original
> > author of this - what would be needed to get this to work across
> > multiple CPUs and such?
>
> IIRC the vector callback was added to overcome precisely that
> limitation of the original PCI IRQ delivery method.

That's right: the vector callback should already work on any guest CPU.
On Mon, 28 Jan 2013 15:09:57 +0000 Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 28 Jan 2013, Samuel Thibault wrote:
> > Hello,
> >
> > I've had a look at PVH support, and I have a few questions:
> >
> > - events are still dispatched the PV way through the callback,
> >   right?
>
> No, they are injected as an X86 vector (0xf3).
>
> > - I guess FPU errors don't trigger an INT13, so I don't need to
> >   handle that?
>
> I think that's right, but Mukesh can confirm this.
>
> > - How about the console and store MFNs from the boot info? Are they
> >   still MFNs, or actually PFNs?
>
> PFNs
>
> > - How about PV network in non-copy mode? It used to be done
> >   with a page transfer, which the frontend would free, does
> >   XENMEM_decrease_reservation still use MFNs, or PFNs?
>
> PFNs
>
> Mukesh, did I get it right?

Yes.

> Would you be up for writing down these basic pieces of information
> regarding the PVH interface on a wiki page?
> So that other kernel hackers like Samuel can port their favourite open
> source kernel to it?
> Maybe add something about the shared_info page and the grant_table
> too.

Sure. Let's collect questions and I'll do that after my version 2 patch
is out.

thanks,
mukesh
Hello,

A couple more questions:

- Does the hvm_callback_vector flag designate routing events through
  interrupt vector 0xf3?
- Is PIC/APIC I/O emulated by the hypervisor, as well as the PIT? And
  does hvm_callback_vector designate that fact?

Samuel
On Wed, 30 Jan 2013, Samuel Thibault wrote:
> Hello,
>
> A couple more questions:
>
> - Does the hvm_callback_vector flag designate routing events through
>   interrupt vector 0xf3?

XENFEAT_hvm_callback_vector determines the hypervisor capability, while
you need an HVMOP_set_param hypercall to set HVM_PARAM_CALLBACK_IRQ to
enable it. Actually the vector number can be chosen.

> - Is PIC/APIC I/O emulated by the hypervisor, as well as the PIT?

Yes (hvm_domain_initialise is called for PVH guests).

> And does hvm_callback_vector designate that fact?

PVH implies it.
Stefano Stabellini wrote on Wed 30 Jan 2013 14:28:39 +0000:
> On Wed, 30 Jan 2013, Samuel Thibault wrote:
> > A couple more questions:
> >
> > - Does the hvm_callback_vector flag designate routing events through
> >   interrupt vector 0xf3?
>
> XENFEAT_hvm_callback_vector determines the hypervisor capability, while
> you need an HVMOP_set_param hypercall to set HVM_PARAM_CALLBACK_IRQ to
> enable it.

Ok, but I mean: does XENFEAT_hvm_callback_vector mean that
HVM_PARAM_CALLBACK_IRQ can be set?

> Actually the vector number can be chosen.

Ok.

> > - Is PIC/APIC I/O emulated by the hypervisor, as well as the PIT?
>
> Yes (hvm_domain_initialise is called for PVH guests).

Ok, good!

> > And does hvm_callback_vector designate that fact?
>
> PVH implies it.

I know, but there's no "PVH" flag in the hypervisor capabilities,
that's why I'm asking which flag advertises the capability.

Samuel
> Stefano Stabellini wrote on Wed 30 Jan 2013 14:28:39 +0000:
> > On Wed, 30 Jan 2013, Samuel Thibault wrote:
> > > A couple more questions:
> > >
> > > - Does the hvm_callback_vector flag designate routing events through
> > >   interrupt vector 0xf3?
> >
> > XENFEAT_hvm_callback_vector determines the hypervisor capability, while
> > you need an HVMOP_set_param hypercall to set HVM_PARAM_CALLBACK_IRQ to
> > enable it.
>
> Ok, but I mean: does XENFEAT_hvm_callback_vector mean that
> HVM_PARAM_CALLBACK_IRQ can be set?

Yes.

> > > And does hvm_callback_vector designate that fact?
> >
> > PVH implies it.
>
> I know, but there's no "PVH" flag in the hypervisor capabilities,
> that's why I'm asking which flag advertises the capability.

Good point. It is safe to assume that XENFEAT_hvm_callback_vector means
a vector injection via the lapic, therefore Xen has to provide one.
However I wouldn't assume anything about the presence of an IO-APIC and
a PIT, even though the current patch series would make them available.
In fact you shouldn't have to use them at all: no devices or interrupts
should go through the IO-APIC and you can use the PV timer for timer
interrupts. I would argue that it might be a good idea not to emulate
them at all for PVH guests to avoid confusion.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Stefano Stabellini wrote on Wed 30 Jan 2013 16:08:18 +0000:
> you can use the PV timer for timer interrupts.

Ah, right, the PV timer is delivered via an event, so it's available;
no need for a PIT indeed.

Samuel
Stefano Stabellini wrote on Wed 30 Jan 2013 16:08:18 +0000:
> I would argue that it might be a good idea not to emulate them at all
> for PVH guests to avoid confusion.

Well, having a PIT is nice to avoid having to add support for the PV
timer, but admittedly that's not very difficult to implement.

Samuel
Stefano Stabellini wrote on Mon 28 Jan 2013 15:09:57 +0000:
> On Mon, 28 Jan 2013, Samuel Thibault wrote:
> > Hello,
> >
> > I've had a look at PVH support, and I have a few questions:
> >
> > - events are still dispatched the PV way through the callback, right?
>
> No, they are injected as an X86 vector (0xf3).

But is that mandatory? Can't we still call set_callbacks?

Samuel
Samuel Thibault wrote on Thu 31 Jan 2013 02:02:28 +0100:
> Stefano Stabellini wrote on Mon 28 Jan 2013 15:09:57 +0000:
> > On Mon, 28 Jan 2013, Samuel Thibault wrote:
> > > Hello,
> > >
> > > I've had a look at PVH support, and I have a few questions:
> > >
> > > - events are still dispatched the PV way through the callback, right?
> >
> > No, they are injected as an X86 vector (0xf3).
>
> But is that mandatory? Can't we still call set_callbacks?

(Mmm, I guess it's not so simple with VMENTER.)

Samuel
On Thu, 31 Jan 2013, Samuel Thibault wrote:
> Stefano Stabellini wrote on Mon 28 Jan 2013 15:09:57 +0000:
> > On Mon, 28 Jan 2013, Samuel Thibault wrote:
> > > Hello,
> > >
> > > I've had a look at PVH support, and I have a few questions:
> > >
> > > - events are still dispatched the PV way through the callback, right?
> >
> > No, they are injected as an X86 vector (0xf3).
>
> But is that mandatory? Can't we still call set_callbacks?

The vector injection is mandatory, but you can choose the vector number
using HVMOP_set_param HVM_PARAM_CALLBACK_IRQ, like I wrote in the other
email. HYPERVISOR_set_callbacks shouldn't be called for PVH guests.