Agarwal, Lomesh
2007-Oct-26 00:36 UTC
[Xen-devel] problem in setting cpumask for physical interrupt
Why does pirq_guest_bind() (in arch/x86/irq.c) call set_affinity() with the cpumask of the current processor? If I understand correctly, pirq_guest_bind() is called in response to a guest calling request_irq(). So if, by chance, all guests call request_irq() on the same physical processor, Xen may end up setting interrupt affinity to one physical processor only.

I think Xen should set the affinity to all available processors. A VCPU is not guaranteed to run on the same physical processor on which it called request_irq() anyway.

I will send a patch if my understanding looks OK.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2007-Oct-26 06:42 UTC
Re: [Xen-devel] problem in setting cpumask for physical interrupt
An event channel can only be bound to one VCPU at a time. The IRQ should be bound to the CPU that that VCPU runs on.

 -- Keir

On 26/10/07 01:36, "Agarwal, Lomesh" <lomesh.agarwal@intel.com> wrote:

> Why does pirq_guest_bind() (in arch/x86/irq.c) call set_affinity() with the
> cpumask of the current processor? If I understand correctly, pirq_guest_bind()
> is called in response to a guest calling request_irq(). So if, by chance, all
> guests call request_irq() on the same physical processor, Xen may end up
> setting interrupt affinity to one physical processor only.
>
> I think Xen should set the affinity to all available processors. A VCPU is not
> guaranteed to run on the same physical processor on which it called
> request_irq() anyway.
>
> I will send a patch if my understanding looks OK.
Agarwal, Lomesh
2007-Oct-26 18:06 UTC
RE: [Xen-devel] problem in setting cpumask for physical interrupt
pirq_guest_bind() is called for a physical device IRQ, right? Even if the event channel is bound to one VCPU, why do we need to bind the physical IRQ to a particular physical CPU? The VCPU is not guaranteed to run on the same physical processor anyway. If Xen sets the interrupt affinity for the physical IRQ to all physical processors, the IOAPIC will deliver that IRQ to the physical processors in a round-robin manner. That should give better interrupt latency for physical IRQs.

________________________________
From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
Sent: Thursday, October 25, 2007 11:42 PM
To: Agarwal, Lomesh; xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] problem in setting cpumask for physical interrupt

An event channel can only be bound to one VCPU at a time. The IRQ should be bound to the CPU that that VCPU runs on.

 -- Keir