James Harper
2008-Aug-18 10:53 UTC
[Xen-devel] HVM windows - PCI IRQ firing on both CPU's
I'm just doing some testing on the gplpv drivers with different ways of handling interrupts, and I'm trying a scheme where each xen device (eg vbd/vif) driver attaches to the same IRQ as the pci driver, and each handles it in sequence. In testing though, I noticed the following when logging what each ISR is doing:

60.32381439 - evtchn event on port 5
60.32384109 - port 5 handler (does some work)
60.32386780 - port 6 handler
60.32389069 - port 7 handler
 0.00616564 - evtchn nothing to do
 0.00619481 - port 5 handler
 0.00621962 - port 6 handler
 0.00624393 - port 7 handler

The first number is the timestamp (why is the TSC so far out of whack between CPUs??? Is that a hardware thing or a Xen thing? It causes huge problems with 'ping' too!!!), the second is the isr that is running.

Why is the ISR getting run again immediately on the other CPU? Is this an OS thing or am I not acking the interrupt correctly?

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2008-Aug-18 12:06 UTC
Re: [Xen-devel] HVM windows - PCI IRQ firing on both CPU's
On 18/8/08 11:53, "James Harper" <james.harper@bendigoit.com.au> wrote:

> The first number is the timestamp (why is the TSC so far out of whack
> between CPUs??? Is that a hardware thing or a Xen thing? It causes huge
> problems with 'ping' too!!!), the second is the isr that is running.
>
> Why is the ISR getting run again immediately on the other CPU? Is this
> an OS thing or am I not acking the interrupt correctly?

You should be checking and clearing only vcpu0's evtchn_upcall_pending and evtchn_pending_sel fields. Other vcpus' equivalent fields are currently unused for HVM guests. It is essential that you clear evtchn_upcall_pending -- that is the 'virtual interrupt wire' connected to the virtual PIC's level-triggered input pin. Apart from those caveats, all should work fine with no spurious interrupts.

 -- Keir
James Harper
2008-Aug-18 12:19 UTC
RE: [Xen-devel] HVM windows - PCI IRQ firing on both CPU's
> On 18/8/08 11:53, "James Harper" <james.harper@bendigoit.com.au> wrote:
>
> > The first number is the timestamp (why is the TSC so far out of whack
> > between CPUs??? Is that a hardware thing or a Xen thing? It causes huge
> > problems with 'ping' too!!!), the second is the isr that is running.
> >
> > Why is the ISR getting run again immediately on the other CPU? Is this
> > an OS thing or am I not acking the interrupt correctly?
>
> You should be checking and clearing only vcpu0's evtchn_upcall_pending and
> evtchn_pending_sel fields. Other vcpus' equivalent fields are currently
> unused for HVM guests. It is essential that you clear
> evtchn_upcall_pending
> -- that is the 'virtual interrupt wire' connected to the virtual PIC's
> level-triggered input pin.

Just so I understand, even if I see the IRQ on CPU1, I should always treat it as if it came in on CPU0? The lack of that would explain what I'm seeing.

Thanks

James
Keir Fraser
2008-Aug-18 12:26 UTC
Re: [Xen-devel] HVM windows - PCI IRQ firing on both CPU's
On 18/8/08 13:19, "James Harper" <james.harper@bendigoit.com.au> wrote:

> Just so I understand, even if I see the IRQ on CPU1, I should always
> treat it as if it came in on CPU0?

Yes. Only vcpu0's event-channel logic is wired into the virtual PIC/IOAPIC. Even if the IOAPIC then forwards the interrupt to a different VCPU, it's still vcpu0's event-channel status that initiated the interrupt. Other vcpus' event-channel statuses do not cause interrupts in HVM.

> The lack of that would explain what I'm seeing.

It sure would.

 -- Keir
James Harper
2008-Aug-18 12:32 UTC
RE: [Xen-devel] HVM windows - PCI IRQ firing on both CPU's
> On 18/8/08 13:19, "James Harper" <james.harper@bendigoit.com.au> wrote:
>
> > Just so I understand, even if I see the IRQ on CPU1, I should always
> > treat it as if it came in on CPU0?
>
> Yes. Only vcpu0's event-channel logic is wired into the virtual
> PIC/IOAPIC.
> Even if the IOAPIC then forwards the interrupt to a different VCPU, it's
> still vcpu0's event-channel status that initiated the interrupt. Other
> vcpus' event-channel statuses do not cause interrupts in HVM.

I'm not sure if this is a general or a windows specific question, but I can approach this in one of two ways...

1. Make sure the interrupt is only ever delivered to CPU0 by specifying the affinity when I call IoConnectInterrupt
2. Accept the interrupt on any CPU but always use vcpu_info[0] to check the flags etc.

Does the hypervisor make any scheduling assumptions upon delivering an event to a domain? (eg does it schedule CPU0 on the basis that that CPU is going to be handling the event?)

Thanks

James
Keir Fraser
2008-Aug-18 12:36 UTC
Re: [Xen-devel] HVM windows - PCI IRQ firing on both CPU's
On 18/8/08 13:32, "James Harper" <james.harper@bendigoit.com.au> wrote:

> I'm not sure if this is a general or a windows specific question, but I
> can approach this in one of two ways...
>
> 1. Make sure the interrupt is only ever delivered to CPU0 by specifying
> the affinity when I call IoConnectInterrupt
> 2. Accept the interrupt on any CPU but always use vcpu_info[0] to check
> the flags etc.

(2) will suffice. It's what we do in Linux PV-on-HVM drivers.

> Does the hypervisor make any scheduling assumptions upon delivering an
> event to a domain? (eg does it schedule CPU0 on the basis that that CPU
> is going to be handling the event?)

No, the HVM interrupt emulation will cause the correct vcpu to be scheduled (i.e., the one that the IOAPIC/PIC forwards the interrupt to). It's just that the interrupt pin is hardwired to vcpu0's event-pending flag.

 -- Keir