The "mask" covered all online cpus in the "domain". It should be used as destination later, instead of using "domain" directly. Signed-off-by: Sheng Yang <sheng@linux.intel.com> -- diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c --- a/xen/arch/x86/irq.c +++ b/xen/arch/x86/irq.c @@ -86,14 +86,14 @@ cpus_and(mask, domain, cpu_online_map); if (cpus_empty(mask)) return -EINVAL; - if ((cfg->vector == vector) && cpus_equal(cfg->domain, domain)) + if ((cfg->vector == vector) && cpus_equal(cfg->domain, mask)) return 0; if (cfg->vector != IRQ_VECTOR_UNASSIGNED) return -EBUSY; for_each_cpu_mask(cpu, mask) per_cpu(vector_irq, cpu)[vector] = irq; cfg->vector = vector; - cfg->domain = domain; + cfg->domain = mask; irq_status[irq] = IRQ_USED; if (IO_APIC_IRQ(irq)) irq_vector[irq] = vector; _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
By the way, could an IRQ's 'domain' be given a better name in Xen? We
already have a meaning for "domain", and it makes the code very
confusing! Can we call it cpu_affinity or cpu_binding, or something a
bit more meaningful and distinguishable?

 -- Keir

On 26/08/2010 10:14, "Sheng Yang" <sheng@linux.intel.com> wrote:

> The "mask" covers only the online CPUs in "domain". It should be used
> as the destination later, instead of using "domain" directly.
>
> Signed-off-by: Sheng Yang <sheng@linux.intel.com>
>
> [...]
On Thursday 26 August 2010 17:22:29 Keir Fraser wrote:
> By the way, could an IRQ's 'domain' be given a better name in Xen? We
> already have a meaning for "domain", and it makes the code very
> confusing! Can we call it cpu_affinity or cpu_binding, or something a
> bit more meaningful and distinguishable?

Or use cpu_mask directly? I would send a separate patch if you like,
for whatever name. :)

--
regards
Yang, Sheng

> [...]
On 26/08/2010 10:40, "Sheng Yang" <sheng@linux.intel.com> wrote:
> On Thursday 26 August 2010 17:22:29 Keir Fraser wrote:
>> By the way, could an IRQ's 'domain' be given a better name in Xen?
>> We already have a meaning for "domain", and it makes the code very
>> confusing! Can we call it cpu_affinity or cpu_binding, or something
>> a bit more meaningful and distinguishable?
>
> Or use cpu_mask directly? I would send a separate patch if you like,
> for whatever name. :)

Yes, cpu_mask would be fine. I have applied your other two patches
now, so send a patch against:

http://xenbits.xen.org/staging/xen-unstable.hg

 Thanks,
 Keir

> [...]
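For reference, the rename agreed on above might look roughly like the
following. This is a hypothetical sketch, not the committed change:
the real irq_cfg structure in Xen carries further fields not shown in
this thread, and cpumask_t is reduced here to a plain bitmask so the
snippet stands alone:

/* Hypothetical sketch of the agreed rename; not actual Xen code. */
typedef unsigned long cpumask_t;        /* stand-in for Xen's type */

struct irq_cfg {
    int       vector;
    cpumask_t cpu_mask;   /* formerly "domain": the set of CPUs this
                           * vector is programmed on -- nothing to do
                           * with a Xen guest domain */
};

The rename is purely for readability: every use of cfg->domain in
irq.c currently reads as if it referred to a guest domain.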