From: Agarwal, Lomesh
Sent: Wednesday, October 24, 2007 3:55 PM
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] interrupt affinity question

From looking at the code it looks like interrupt affinity will be set for all physical IRQs, and it will be set to the physical processor on which the VCPU that called request_irq was running. Can somebody confirm my understanding? pirq_guest_bind (in arch/x86/irq.c) calls set_affinity (which translates to the dma_msi_set_affinity function in arch/x86/hvm/vmx/vtd/intel-iommu.c for VT-d). So that means if request_irq for a NIC interrupt is called while a domain with a single VCPU is scheduled on physical CPU 1, then the NIC interrupt will be bound to physical CPU 1, and if the same domain is later scheduled onto physical CPU 0 it won't get the interrupt until it does a VMEXIT. So for lower interrupt latency we should also pin the domain's VCPU.
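For reference, a minimal sketch of the binding path being asked about. This is not the actual Xen source; the structure, field, and helper names are illustrative assumptions only.

    /*
     * Minimal sketch, NOT the actual Xen code: names, fields and error
     * handling are assumptions.  The behaviour in question is that the
     * affinity is taken from whichever physical CPU the binding VCPU
     * happens to be running on at bind time, and is not updated when
     * that VCPU later migrates.
     */
    int pirq_guest_bind(struct vcpu *v, int irq)
    {
        irq_desc_t *desc = &irq_desc[irq];
        cpumask_t   mask = CPU_MASK_NONE;

        cpu_set(v->processor, mask);                 /* pCPU running the VCPU now */

        if ( desc->handler->set_affinity != NULL )
            desc->handler->set_affinity(irq, mask);  /* route the physical IRQ there */

        desc->status |= IRQ_GUEST;                   /* deliver via the guest path */
        return 0;
    }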
________________________________
From: Kay, Allen M
Sent: Wednesday, October 24, 2007 5:22 PM
To: Agarwal, Lomesh; xen-devel@lists.xensource.com
Cc: Han, Weidong
Subject: RE: [Xen-devel] interrupt affinity question

The dma_msi_* stuff in intel-iommu.c is not related to this. It looks like an area that needs to be cleaned up a bit. The call to request_irq() there is for setting up the VT-d fault handler - linking a vector with iommu_page_fault(). It is only used when there is an IOMMU page fault, which should not happen if everything is set up correctly.

Passthru device interrupt handling is via the do_IRQ->do_IRQ_guest->hvm_do_IRQ_dpci path. The IOAPIC programming for the passthru device was originally set up by the dom0 PCI driver. The interrupt of the passthru device always gets handled by Xen first and then gets re-injected into the guest via the virtual IOAPIC/LAPIC models. There is an interrupt latency between the point where the physical interrupt occurs and the point where the virtual interrupt is injected into the guest - especially if the guest's VCPU is not running. We are still investigating how to lower this latency.

Allen
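A rough sketch of that delivery path, with the same caveat: the function bodies and helper names below are illustrative assumptions, not the real Xen code.

    /*
     * Rough sketch of the passthru interrupt path described above
     * (do_IRQ -> do_IRQ_guest -> hvm_do_IRQ_dpci).  All bodies and
     * helper names here are assumptions for illustration.
     */
    void do_IRQ(int irq)                          /* physical interrupt lands in Xen */
    {
        irq_desc_t *desc = &irq_desc[irq];

        if ( desc->status & IRQ_GUEST )
            do_IRQ_guest(irq);                    /* IRQ is bound to a guest */
        else
            desc->action->handler(irq);           /* Xen's own handler, e.g. the
                                                     VT-d fault handler above */
    }

    static void do_IRQ_guest(int irq)
    {
        struct domain *d = irq_desc[irq].guest_domain;   /* owning HVM guest (assumed field) */

        hvm_do_IRQ_dpci(d, irq);                  /* device-passthrough layer */
    }

    void hvm_do_IRQ_dpci(struct domain *d, int irq)
    {
        /*
         * Assert the corresponding pin on the guest's virtual IOAPIC/LAPIC
         * models.  The guest only observes the interrupt when its VCPU next
         * runs, which is the latency window mentioned above.
         */
        vioapic_set_irq(d, machine_irq_to_guest_irq(d, irq), 1);
    }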
________________________________
From: Agarwal, Lomesh
Sent: Wednesday, October 24, 2007 8:50 PM
To: Kay, Allen M; 'xen-devel@lists.xensource.com'
Cc: Han, Weidong
Subject: RE: [Xen-devel] interrupt affinity question

So there is no default interrupt affinity for any physical IRQ in Xen? Is the IOAPIC programmed to deliver interrupts in round-robin fashion, or do all interrupts go to one processor only?
________________________________

It should be the same as Linux. Dom0 Linux basically tells Xen what value to program into the IOAPIC RTE.
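To make that concrete: dom0's Linux IOAPIC code computes the RTE contents (vector, delivery mode, destination) just as it would on bare metal and hands the value to Xen to write, roughly along these lines. The hypercall and field names below are given from memory and should be treated as assumptions rather than checked against the tree.

    /*
     * Sketch of the dom0 -> Xen boundary described above.  Hypercall and
     * field names are assumptions; ioapic_phys_base, pin and rte_low are
     * placeholders for values dom0 Linux has already computed.  The point
     * is that Xen simply writes dom0's chosen RTE value into the physical
     * IOAPIC on dom0's behalf.
     */
    struct physdev_apic op = {
        .apic_physbase = ioapic_phys_base,     /* which physical IOAPIC        */
        .reg           = 0x10 + 2 * pin,       /* low dword of this pin's RTE  */
        .value         = rte_low,              /* vector / delivery mode /
                                                  destination chosen by dom0   */
    };

    HYPERVISOR_physdev_op(PHYSDEVOP_apic_write, &op);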