
[Xen-devel] RE: odd IRQ behavior



> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On
> Behalf Of Cihula, Joseph
> Sent: Thursday, February 26, 2009 5:58 PM
>
> Sometime after c/s 19133 (I tried bisecting but there were some indeterminate 
> results), during
> a shutdown/reboot in the tboot routines when it creates the 1:1 mapping for 
> itself, the
> map_pages_to_xen() call ends up in alloc_domheap_pages() where it triggers 
> the assertion
> 'ASSERT(!in_irq());'.  Even stranger, when resuming from S3 it generates 
> another assertion, 'BUG_ON(unlikely(in_irq()));' in invalidate_shadow_ldt().  
> From some debugging, the first assertion fires because irq_count is 1 and the 
> second because it's -1.
>
> Adding an irq_exit() before map_pages_to_xen() fixes the first assertion and 
> causes the second, which is then fixed by an irq_enter() on resume.
>
> But why are these necessary?  Even if we say that something has caused 
> irq_count to go positive before shutdown (but what?  It wasn't like this 
> before pulling a more recent tree), the irq_exit() that gets rid of the 
> assertion means the count has gone to 0, so why is it negative on resume?

As an additional data point/issue, if I build with debug=y, the 
map_pages_to_xen() call (on a reboot) triggers the BUG_ON(seen == !irq_safe) in 
check_lock().  But prior to the map_pages_to_xen() call, we call 
local_irq_disable(), so the lock should be treated as irq_safe.  I'm not sure 
how to fix this.
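The classification that BUG_ON(seen == !irq_safe) enforces can be sketched as follows.  This is a simplified model with stand-in names (check_lock_model(), irqs_disabled_flag, etc.), not Xen's actual implementation: each lock is classified as irq-safe or irq-unsafe on its first acquisition, and the BUG_ON fires when a later acquisition disagrees with that classification.  That would explain the trap here: if the lock was first taken with interrupts enabled, taking it after local_irq_disable() trips the check even though the current call site looks safe.

```c
#include <stdbool.h>

#define SEEN_UNKNOWN (-1)

/* Stand-in for Xen's per-lock debug state: remembers whether the lock
 * was first acquired with IRQs disabled (irq-safe) or enabled. */
struct lock_debug { int irq_safe_seen; };   /* SEEN_UNKNOWN until first use */

static bool irqs_disabled_flag;             /* stand-in for the CPU's IRQ state */

static void local_irq_disable_model(void) { irqs_disabled_flag = true;  }
static void local_irq_enable_model(void)  { irqs_disabled_flag = false; }

/* Returns false where the real check_lock() would hit its BUG_ON(). */
static bool check_lock_model(struct lock_debug *d)
{
    int irq_safe = irqs_disabled_flag ? 1 : 0;

    if (d->irq_safe_seen == SEEN_UNKNOWN)
        d->irq_safe_seen = irq_safe;        /* first sighting classifies it */

    return d->irq_safe_seen == irq_safe;    /* mismatch => BUG_ON fires     */
}
```

Under this model, calling local_irq_disable() immediately before the acquisition does not help: what matters is how the lock was classified the first time it was ever taken, not the IRQ state at the current call site.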

Joe

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

