
Re: [Xen-devel] xen irq unmask bug brainstorming



>>> On 15.02.11 at 07:28, "Zhang, Fengzhe" <fengzhe.zhang@xxxxxxxxx> wrote:
> Hi, we found a bug related to the Xen spin-unlock IPI. Looking forward to 
> brainstorming a clean fixup.
> 
> How the bug happens:
> 1. Dom0 poweroff.
> 2. CPU0 takes down other CPUs.
> 3. IRQs are unmasked in function fixup_irqs on other CPUs.
> 4. IPI IRQ for "lock_kicker_irq" is unmasked (which should never happen).
> 5. Other CPUs receive lock_kicker_irq, and dummy_handler (the handler for 
> the XEN_SPIN_UNLOCK_VECTOR IPI) is invoked.
> 6. dummy_handler reports a bug and crashes Dom0.
> 
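
For context, the handler bound to that vector in the pvops spinlock code
(arch/x86/xen/spinlock.c) is, if I remember it right, nothing more than a
trap:

    static irqreturn_t dummy_handler(int irq, void *dev_id)
    {
            /* The kick is meant to be consumed by the polling vCPU via
             * xen_poll_irq(); this handler should never actually run. */
            BUG();
            return IRQ_HANDLED;
    }

so any spurious delivery of the spinlock-kick IPI immediately takes the
domain down, which matches step 6 above.
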
> Main cause:
> The function fixup_irqs masks and then unmasks each IRQ when taking CPUs 
> down, and the Xen irq_chip structure does not distinguish its disable 
> operation from its mask operation. So when lock_kicker_irq is unmasked, it 
> is effectively re-enabled.
> 
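
As far as I recall, the dynamic event channel chip in drivers/xen/events.c
of that vintage is wired up roughly like this (paraphrased from memory, so
take the exact callback names with a grain of salt):

    static struct irq_chip xen_dynamic_chip __read_mostly = {
            .name    = "xen-dyn",
            .disable = disable_dynirq,   /* same function as .mask ...    */
            .mask    = disable_dynirq,   /* ... both end in mask_evtchn() */
            .unmask  = enable_dynirq,    /* -> unmask_evtchn()            */
            .ack     = ack_dynirq,
    };

so once fixup_irqs() runs the unmask path, the event channel is fully
re-opened, regardless of whether the IRQ had been explicitly disabled
before.
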
> A possible fixup:
> Provide a dedicated disable operation for the Xen irq_chip structure, and 
> prevent the unmask operation from re-enabling IRQs that have been disabled.
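
A minimal sketch of the dedicated disable callback you describe (all names,
the 'disabled' field, and the irq_data-style signatures are made up for
illustration; a matching enable callback clearing the flag, and the plain
mask-only helper that .mask would keep pointing at, are omitted):

    /* Remember an explicit disable in the per-IRQ bookkeeping. */
    static void xen_irq_disable(struct irq_data *data)
    {
            info_for_irq(data->irq)->disabled = true;  /* assumed new field */
            mask_evtchn(evtchn_from_irq(data->irq));
    }

    /* Refuse to let a plain unmask (e.g. from fixup_irqs) undo it. */
    static void xen_irq_unmask(struct irq_data *data)
    {
            if (info_for_irq(data->irq)->disabled)
                    return;
            unmask_evtchn(evtchn_from_irq(data->irq));
    }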

Other alternatives (based on what we do in non-pvops, where we
don't have this problem): Either mark the kicker IRQ properly as
IRQ_PER_CPU (IRQF_PERCPU is being passed, but this additionally
requires CONFIG_IRQ_PER_CPU to be set), and then exclude
per-CPU IRQs from being fixed up (as they obviously should be
excluded).
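
Something along these lines in the fixup_irqs() loop (hypothetical
sketch; the exact per-CPU test depends on the kernel version):

    for_each_irq_desc(irq, desc) {
            /* Leave per-CPU IRQs (e.g. the spinlock kicker) alone: they
             * must not be masked, unmasked, or re-targeted here. */
            if (desc->status & IRQ_PER_CPU)
                    continue;

            /* ... existing affinity fixup / mask / unmask logic ... */
    }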

Or don't use the kernel's IRQ subsystem at all, and instead map the
kick logic directly to event channels. (This is what we do, though we
still have the per-CPU handling above in place to cover IPIs and the
timer vIRQ.)
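
For the record, the direct mapping amounts to roughly the following
(illustrative names only, nothing here is actual in-tree code; the
waiting side would then poll the port via SCHEDOP_poll instead of
going through the IRQ layer):

    static DEFINE_PER_CPU(evtchn_port_t, spinlock_kick_port);

    /* Bind one IPI event channel per CPU at bring-up time. */
    static int bind_spinlock_kick(unsigned int cpu)
    {
            struct evtchn_bind_ipi bind = { .vcpu = cpu };
            int rc;

            rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_ipi, &bind);
            if (rc)
                    return rc;
            per_cpu(spinlock_kick_port, cpu) = bind.port;
            return 0;
    }

    /* Unlock-slow path: signal the waiting vCPU directly. */
    static void kick_waiting_cpu(unsigned int cpu)
    {
            struct evtchn_send send = {
                    .port = per_cpu(spinlock_kick_port, cpu)
            };

            HYPERVISOR_event_channel_op(EVTCHNOP_send, &send);
    }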

Jan

