
Re: [Xen-devel] [BUG] mm locking order violation when HVM guest changes graphics mode on virtual graphics adapter.



At 12:05 -0400 on 01 Nov (1383303953), Andres Lagar-Cavilla wrote:
> Paging you in for a bit more insight. The bug is as follows
> 1. pod allocate zero page
> 2. an intermediate level in the p2m needs to be allocated
> 3. thus nested p2m needs to be flushed
> 4. attempts to grab p2m lock on the nested p2m
> 5. explode

Right.  I think the actual ordering is that PoD is reclaiming a zero
page (and therefore removing an existing p2m entry), so 
p2m_pod_zero_check()
 -> p2m_set_entry() (via set_p2m_entry())
 -> hap_write_p2m_entry()
 -> p2m_flush_nested_p2m() (tail call, so hap_write_p2m_entry() isn't on
    the stack)
 -> p2m_lock() :(
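
To make the ordering violation concrete, here's a toy, self-contained
model of an ordered-lock checker.  It's not the real mm-locks.h code --
the lock names and levels are made up -- but the failure mode is the
same: taking a lower-ranked lock while already holding a higher-ranked
one trips the assertion.

    #include <assert.h>
    #include <stdio.h>

    /* Highest lock level currently held (per-CPU in reality). */
    static int current_level = -1;

    /* Locks may only be acquired in increasing level order. */
    void ordered_lock(const char *name, int level)
    {
        printf("taking %s (level %d), currently at %d\n",
               name, level, current_level);
        assert(level > current_level);   /* fires on a back-edge */
        current_level = level;
    }

    int main(void)
    {
        /* Illustrative levels only: p2m ordered before PoD. */
        enum { P2M_LEVEL = 10, POD_LEVEL = 20 };

        ordered_lock("pod_lock", POD_LEVEL);        /* PoD zero-page reclaim  */
        ordered_lock("nested_p2m_lock", P2M_LEVEL); /* nested p2m flush: boom */
        return 0;
    }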

> This is because the lock we currently hold is the PoD lock, which
> ranks above the p2m lock in the ordering. Some solutions:
> 1. defer flushing of nested p2m until we are done with the fault and
> have unrolled enough stack 

That could be a bit intricate, and I think it's probably not safe.
What we're doing here is reclaiming a zero page so we can use it to
serve a PoD fault.  That means we're going to map that page,
guest-writeable, in a new GFN before we release the PoD lock, so we
_must_ be able to flush the old mapping without dropping the PoD lock
(unless we want to add an unlock-and-retry loop in the PoD handler...)

I guess we can avoid that by always keeping at least one PoD page free
per vcpu -- then, I think, we could defer this flush until after PoD
unlock, since all other changes that the PoD code makes are safe not
to flush (replacing not-present entries).  But even then we have a
race window after we pod-unlock and before we finish flushing where
another CPU could pod-lock and map the page.
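
If we did go the deferral route, the shape would be roughly the sketch
below.  All the names here (pod_lock()/pod_unlock(), flush_nested_p2m(),
the pending flag) are hypothetical stand-ins, not the real Xen
interfaces, and the gap between pod_unlock() and the flush is exactly
the race window I mean.

    #include <stdbool.h>

    /* Hypothetical stand-ins -- not the real Xen PoD/p2m code. */
    struct domain_state {
        bool nested_flush_pending;  /* recorded instead of flushing in place */
    };

    static void pod_lock(struct domain_state *d)         { (void)d; }
    static void pod_unlock(struct domain_state *d)       { (void)d; }
    static void flush_nested_p2m(struct domain_state *d) { (void)d; }

    void handle_pod_fault(struct domain_state *d)
    {
        pod_lock(d);
        /* ... reclaim a zero page and map it at the faulting GFN ... */

        /* Can't take the nested p2m lock here without violating the
         * ordering, so just note that a flush is owed. */
        d->nested_flush_pending = true;
        pod_unlock(d);

        /* Race window: another CPU can pod-lock and map the reclaimed
         * page before the deferred flush below actually happens. */
        if ( d->nested_flush_pending )
        {
            d->nested_flush_pending = false;
            flush_nested_p2m(d);
        }
    }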

> 2. have the nested p2m locks look "different" to the lock ordering machinery

Also unpalatable.  But yes, we could invent a special case to make
p2m_lock()ing a nested p2m into its own thing (ranked below PoD in
mm-locks.h, and safe because we never do PoD or paging ops on nested p2ms).
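
In the toy model from earlier, (2) amounts to giving nested-p2m locks
their own level, placed after the PoD lock in the ordering (i.e. further
down mm-locks.h), so that taking one while holding the PoD lock is
legal.  Again, the levels are illustrative, not the real mm-locks.h
ranks.

    /* Reusing ordered_lock() from the earlier sketch. */
    void ordered_lock(const char *name, int level);

    enum {
        HOST_P2M_LEVEL   = 10,  /* host p2m: still ordered before PoD       */
        POD_LEVEL        = 20,
        NESTED_P2M_LEVEL = 30,  /* new: may be taken while PoD lock is held */
    };

    void pod_reclaim_and_flush(void)
    {
        ordered_lock("pod_lock", POD_LEVEL);               /* 20 > -1, fine */
        ordered_lock("nested_p2m_lock", NESTED_P2M_LEVEL); /* 30 > 20, fine */
    }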

I think that (1) would be nice to have, since deferring/batching these
nested flushes is probably a good idea in other cases.  But (2) will be
much easier to implement. 

Cheers,

Tim.
