I was trying to figure out how hardware IRQ SMP affinity is set by the hypervisor. It looks like at the time of the bind request from dom-0 for a particular pirq, the processor that the vcpu happens to be running on is set to receive the hardware interrupts corresponding to that irq channel.

If dom-0 vcpu-to-pcpu affinity is not set (dom0_vcpus_pin not set), what happens when a dom-0 vcpu migrates - is the processor affinity of the irq channels changed by some means to reflect the migration, or do the hardware interrupts end up going to the old processor while the pirq is served by the dom-0 vcpu on a different processor?

Thanks,

- Pradeep Vincent
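A minimal sketch of the behaviour described above, in C; this is not the actual Xen binding code, and the type and field names are invented purely for illustration:

    /* Sketch only, not Xen source: illustrates the behaviour described
     * above.  At pirq-bind time the hardware interrupt is routed to the
     * physical CPU the binding VCPU happens to be running on, and nothing
     * re-routes it if that VCPU later migrates to another physical CPU. */
    struct vcpu { int processor; };   /* pcpu the vcpu currently runs on */
    struct pirq { int dest_pcpu; };   /* pcpu programmed to receive the hw irq */

    static void bind_pirq_sketch(struct pirq *p, const struct vcpu *v)
    {
        /* e.g. by writing an IOAPIC redirection entry for this irq */
        p->dest_pcpu = v->processor;
        /* ...a later vcpu migration does not touch p->dest_pcpu... */
    }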
On 14/4/07 04:33, "Pradeep Vincent" <pradeep.vincent@gmail.com> wrote:

> If dom-0 vcpu-to-pcpu affinity is not set (dom0_vcpus_pin not set),
> what happens when a dom-0 vcpu migrates - is the processor affinity of
> the irq channels changed by some means to reflect the migration, or do
> the hardware interrupts end up going to the old processor while the
> pirq is served by the dom-0 vcpu on a different processor?

This doesn't happen right now. What we may need to do is measure the cost of needing to forward the interrupt to the correct CPU, in the case that the VCPU is currently running on a different CPU, versus the cost of reprogramming an IOAPIC register. Also important is to know how rapidly the credit scheduler is moving VCPUs among CPUs, and hence the average number of interrupts between movements.

 -- Keir
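To make the trade-off concrete, here is a small stand-alone C calculation along the lines Keir suggests; the cycle costs and migration rate below are made-up placeholders, not measurements:

    /* Compares two policies per scheduler movement of the dom0 VCPU:
     *  - leave the IOAPIC alone and forward every interrupt (e.g. via IPI)
     *    to whichever CPU the VCPU is now running on, vs.
     *  - reprogram the IOAPIC redirection entry once per movement.
     * All numbers are hypothetical and would need to be measured. */
    #include <stdio.h>

    int main(void)
    {
        double cost_forward   = 2000.0;   /* cycles to forward one interrupt (assumed) */
        double cost_reprogram = 10000.0;  /* cycles to rewrite the IOAPIC entry (assumed) */
        double irqs_per_move  = 50.0;     /* avg interrupts between VCPU movements (assumed) */

        printf("forwarding   : %.0f cycles per movement interval\n",
               cost_forward * irqs_per_move);
        printf("reprogramming: %.0f cycles per movement interval\n", cost_reprogram);
        printf("reprogramming wins once interrupts-per-movement exceeds %.1f\n",
               cost_reprogram / cost_forward);
        return 0;
    }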
Wouldn't it be better to set dom0_vcpus_pin by default when dom0 is handling all the I/O? dom0 clearly should have a high priority (weight) to avoid excessive I/O latency, and hence none of the dom-0 vcpus would be left waiting in the runq for too long.

The only problem would be a relatively rare scheduling-latency hit due to the lack of ability to migrate, but I am wondering if this would be the better trade-off.

Thanks Keir,

- Pradeep Vincent

On 4/13/07, Keir Fraser <Keir.Fraser@cl.cam.ac.uk> wrote:
> On 14/4/07 04:33, "Pradeep Vincent" <pradeep.vincent@gmail.com> wrote:
>
> > If dom-0 vcpu-to-pcpu affinity is not set (dom0_vcpus_pin not set),
> > what happens when a dom-0 vcpu migrates - is the processor affinity of
> > the irq channels changed by some means to reflect the migration, or do
> > the hardware interrupts end up going to the old processor while the
> > pirq is served by the dom-0 vcpu on a different processor?
>
> This doesn't happen right now. What we may need to do is measure the cost
> of needing to forward the interrupt to the correct CPU, in the case that
> the VCPU is currently running on a different CPU, versus the cost of
> reprogramming an IOAPIC register. Also important is to know how rapidly
> the credit scheduler is moving VCPUs among CPUs, and hence the average
> number of interrupts between movements.
>
> -- Keir
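For reference, the two knobs Pradeep mentions already exist; a sketch of how they might be set on a Xen 3.x system with the credit scheduler (the boot-loader path and the weight value are examples, not recommendations):

    # Hypervisor command line in the boot loader entry (path is an example):
    #   kernel /boot/xen.gz dom0_vcpus_pin
    # pins each dom0 VCPU to its own physical CPU at boot.

    # Raise dom0's credit-scheduler weight (default is 256) so its VCPUs
    # are not left sitting in the runqueue behind domU VCPUs:
    xm sched-credit -d Domain-0 -w 512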
On 14/4/07 09:06, "Pradeep Vincent" <pradeep.vincent@gmail.com> wrote:

> Wouldn't it be better to set dom0_vcpus_pin by default when dom0 is
> handling all the I/O? dom0 clearly should have a high priority (weight)
> to avoid excessive I/O latency, and hence none of the dom-0 vcpus would
> be left waiting in the runq for too long.
>
> The only problem would be a relatively rare scheduling-latency hit due
> to the lack of ability to migrate, but I am wondering if this would be
> the better trade-off.

If you have a high I/O workload your best bet is probably to dedicate at least one core to dom0, pin it there, and schedule all the domUs onto the remaining cores in the system.

 -- Keir
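A rough sketch of the layout Keir suggests on a four-core box, using the standard xm tooling; the core numbering, grub path, and domain names are illustrative:

    # Limit dom0 to one VCPU and pin it to core 0 (boot-time options plus
    # run-time pinning; the grub path is an example):
    #   kernel /boot/xen.gz dom0_max_vcpus=1 dom0_vcpus_pin
    xm vcpu-pin Domain-0 0 0

    # In each domU's config file, keep its VCPUs off core 0 so dom0 has
    # that core to itself:
    cpus = "1-3"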