Hi, we found a bug related to the Xen spin unlock IPI. Looking forward to
brainstorming a clean fixup.

How the bug happens:
1. Dom0 powers off.
2. CPU0 takes down the other CPUs.
3. IRQs are unmasked in the function fixup_irqs() on the other CPUs.
4. The IPI IRQ for "lock_kicker_irq" is unmasked (which should never happen).
5. The other CPUs receive lock_kicker_irq, and dummy_handler (the handler for
   the XEN_SPIN_UNLOCK_VECTOR IPI) is invoked.
6. dummy_handler reports a bug and crashes Dom0.

Main cause:
The function fixup_irqs() masks and then unmasks each IRQ when taking CPUs
down, and the Xen irq_chip structure does not distinguish its disable op from
its mask op. So when lock_kicker_irq is unmasked, it is effectively
re-enabled.

A possible fixup:
Provide a dedicated disable op for the Xen irq_chip structure, and prevent
the unmask op from re-enabling IRQs that have been disabled.

-Fengzhe Zhang
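To make the proposed fixup concrete, here is a minimal sketch of what such a
change to drivers/xen/events.c might look like. It assumes the pre-irq_data
irq_chip layout of the pvops kernels of that time and the existing
enable_dynirq()/disable_dynirq() helpers; the xen_irq_disabled bitmap and the
xen_irq_set_disabled()/xen_irq_is_disabled() helpers are hypothetical names
introduced only for illustration, not existing functions.

/*
 * Sketch only: give the Xen dynamic irq_chip a .disable that is distinct
 * from .mask, and let .unmask refuse to re-enable an IRQ that has been
 * explicitly disabled.
 */
static DECLARE_BITMAP(xen_irq_disabled, NR_IRQS);  /* hypothetical bookkeeping */

static void xen_irq_set_disabled(unsigned int irq, bool disabled)
{
	if (disabled)
		set_bit(irq, xen_irq_disabled);
	else
		clear_bit(irq, xen_irq_disabled);
}

static bool xen_irq_is_disabled(unsigned int irq)
{
	return test_bit(irq, xen_irq_disabled);
}

static void xen_irq_disable(unsigned int irq)
{
	disable_dynirq(irq);              /* mask the backing event channel */
	xen_irq_set_disabled(irq, true);  /* and remember it is really off */
}

static void xen_irq_enable(unsigned int irq)
{
	xen_irq_set_disabled(irq, false);
	enable_dynirq(irq);
}

static void xen_irq_unmask(unsigned int irq)
{
	/* fixup_irqs() blindly unmasks; don't let that revive a disabled IRQ. */
	if (xen_irq_is_disabled(irq))
		return;
	enable_dynirq(irq);
}

static struct irq_chip xen_dynamic_chip __read_mostly = {
	.name		= "xen-dyn",
	.disable	= xen_irq_disable,   /* no longer an alias of .mask */
	.enable		= xen_irq_enable,
	.mask		= disable_dynirq,
	.unmask		= xen_irq_unmask,
	/* .ack, .set_affinity, .retrigger, ... unchanged */
};

With such a split, the mask/unmask pair done by fixup_irqs() becomes a no-op
for an IRQ that was disabled beforehand, while ordinary mask/unmask users are
unaffected.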
>>> On 15.02.11 at 07:28, "Zhang, Fengzhe" <fengzhe.zhang@intel.com> wrote:
> Hi, we found a bug related to the Xen spin unlock IPI. Looking forward to
> brainstorming a clean fixup.
>
> How the bug happens:
> 1. Dom0 powers off.
> 2. CPU0 takes down the other CPUs.
> 3. IRQs are unmasked in the function fixup_irqs() on the other CPUs.
> 4. The IPI IRQ for "lock_kicker_irq" is unmasked (which should never happen).
> 5. The other CPUs receive lock_kicker_irq, and dummy_handler (the handler for
>    the XEN_SPIN_UNLOCK_VECTOR IPI) is invoked.
> 6. dummy_handler reports a bug and crashes Dom0.
>
> Main cause:
> The function fixup_irqs() masks and then unmasks each IRQ when taking CPUs
> down, and the Xen irq_chip structure does not distinguish its disable op from
> its mask op. So when lock_kicker_irq is unmasked, it is effectively
> re-enabled.
>
> A possible fixup:
> Provide a dedicated disable op for the Xen irq_chip structure, and prevent
> the unmask op from re-enabling IRQs that have been disabled.

Other alternatives (based on what we do in non-pvops, where we don't have
this problem): Either mark the kicker IRQ properly as IRQ_PER_CPU
(IRQF_PERCPU is being passed, but this additionally requires
CONFIG_IRQ_PER_CPU to be set), and then exclude per-CPU IRQs from the fixup
(as they obviously should be).

Or don't use the kernel's IRQ subsystem at all, and instead map the kick
logic directly to event channels. (This is what we do, but we have the
per-CPU handling above in place nevertheless to cover IPIs and the timer
vIRQ.)

Jan
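To illustrate the first alternative: IRQF_PERCPU is already passed when the
kicker IRQ is bound, so the remaining pieces would be getting
CONFIG_IRQ_PER_CPU enabled (so that the descriptor is actually marked
IRQ_PER_CPU) and skipping such IRQs in the x86 CPU-offline fixup path. A
rough sketch of the latter, using the 2.6.3x-era descriptor and status-flag
names, which may differ in other trees:

/*
 * Fragment of arch/x86/kernel/irq.c:fixup_irqs(); only the IRQ_PER_CPU
 * check is the new part, the rest stands in for the existing loop body.
 */
void fixup_irqs(void)
{
	unsigned int irq;
	struct irq_desc *desc;

	for_each_irq_desc(irq, desc) {
		if (!desc)
			continue;

		/*
		 * Per-CPU IRQs (such as the spinlock kicker IPI) must not be
		 * migrated, masked or unmasked behind the back of the CPU
		 * that owns them, so leave them alone here.
		 */
		if (desc->status & IRQ_PER_CPU)
			continue;

		/* ... existing affinity fixup and mask/unmask handling ... */
	}
}

The second alternative avoids the problem structurally: a kick delivered
directly via an event channel, outside the kernel's IRQ subsystem, never
passes through fixup_irqs() in the first place.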
On 2011/2/15 18:28, Jan Beulich wrote:
> [...]
>
> Other alternatives (based on what we do in non-pvops, where we don't have
> this problem): Either mark the kicker IRQ properly as IRQ_PER_CPU
> (IRQF_PERCPU is being passed, but this additionally requires
> CONFIG_IRQ_PER_CPU to be set), and then exclude per-CPU IRQs from the fixup
> (as they obviously should be).
>
> Or don't use the kernel's IRQ subsystem at all, and instead map the kick
> logic directly to event channels. (This is what we do, but we have the
> per-CPU handling above in place nevertheless to cover IPIs and the timer
> vIRQ.)
>
> Jan

Can we safely set CONFIG_IRQ_PER_CPU in the current pvops kernel?

-Fengzhe
>>> On 16.02.11 at 05:12, Fengzhe Zhang <fengzhe.zhang@intel.com> wrote:
> [...]
>
> Can we safely set CONFIG_IRQ_PER_CPU in the current pvops kernel?

I think so, but you'll need to get this accepted by the x86 maintainers
anyway, so perhaps asking for their opinion would be useful.

Jan