
Re: [Xen-devel] [BUG] mm locking order violation when HVM guest changes graphics mode on virtual graphics adapter.

> Reliably reproducible: occurs when an HVM guest changes the graphics mode
> on its virtual graphics adapter, on Xen 4.3.0 from Gentoo.
> To reproduce, using Xen 4.3.0 from the Gentoo Portage tree and the
> corresponding version of xl, both built with GCC 4.7.3 with HVM and
> qemu-dm support built in:
> 1. Boot using a Gentoo Linux Dom0 with kernel version 3.10.7-r1 built
> with the kernel config found at http://pastebin.com/GxDpPsk3.
> 2. Get a copy of the Fedora i686 network install CD.
> 3. Start an HVM domain with a configuration like the one found at
> http://pastebin.com/p0wxnaTg.
> 4. After connecting to the VNC console, start the install process.
> 5. When Anaconda tries to start the graphical environment, causing the
> kernel to change the graphics mode from the current setting, Xen will
> crash with a call to BUG() in mm.h at line 118.
> Xen log can be found at http://pastebin.com/zKCJsp21.
> xl info output can be found at http://pastebin.com/NqtksS18.
> lspci -vvv output can be found at http://pastebin.com/Ja97Cx42.
> xenstore contents can be found at http://pastebin.com/aL9vpxwu.
> I'll be happy to provide any other information you may need upon request.

Thanks for the report.

From what I can glean, you are using AMD NPT; can you confirm?

So the trigger is that you are using both PoD and nested virt. To elaborate:
- Setting maxmem to 2G and mem to 512M uses the PoD (populate-on-demand) 
subsystem to account for the 1.5GB of extra wiggle room. Please make sure the 
guest has a balloon driver that can cope with the guest trying to use more 
than 512M.
- You have nestedhvm=1. Do you really need this?

Changing either setting (memory == maxmem, or nestedhvm=0) will avoid the 
problem and allow you to make progress.
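To illustrate, either of the following changes to the domain config (a sketch 
based on the values in your config; adjust sizes to taste) avoids combining 
PoD with nested virt:

```
# Option 1: no PoD -- make the static allocation equal to maxmem
memory = 2048
maxmem = 2048
nestedhvm = 1

# Option 2: keep PoD (memory = 512, maxmem = 2048) but disable nested virt
# nestedhvm = 0
```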

There is a real bug, however, that needs to be fixed here. At some point in the 
4.3 cycle, flushing of the nested p2m tables was added, and that path ends up 
dropping and re-taking the p2m lock while the pod lock is still held:

__get_gfn_type_access     -> takes the p2m lock
  p2m_pod_demand_populate -> takes the pod lock
    p2m_next_level        -> still holding the p2m lock, then drops it
      p2m_flush_table     -> re-takes the p2m lock -> KAPOW

