
Re: [Xen-devel] dom0 pvops and rearranging memory layout

On 01/23/2015 04:09 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Jan 23, 2015 at 11:32:20AM +0100, Juergen Gross wrote:

while testing new patches to support dom0 with more than 512 GB of
memory I stumbled over an issue which, I think, has been present in
pvops for some time now.

On boot the kernel rearranges the memory layout to match the host
E820 map. This is done to be able to access all I/O areas with
identity mapped pfns (pfn == mfn). So basically some memory pages
change their pfns while the mfns stay the same.

There is no check whether the moved memory areas are actually in use
(e.g. via memblock_is_reserved()). This can lead to cases where
memory in use is moved to an area which is made available for new
memory allocations soon afterwards. The memory in question could be
the initrd, the p2m map presented to dom0 by the hypervisor, or
(hopefully in theory only) even the kernel itself or its initial
page tables built by the hypervisor.

In my test I had a p2m map of nearly 2 GB in size, and the area between

Oh my. That is huge. Could you compress it? That would of course
require a new type of p2m entry marking which MFNs are contiguous.

And then during boot you could scan for these special entries and do
the right decompression when creating the new p2m?

I don't think that's the correct solution. It would require a new
hypervisor as well, and we still wouldn't have a guarantee it will
always work.

That's "only" for 1 TB of memory. I think we want to support much
more than that.

And even if the p2m is okay, a huge initrd will still blow us up.


2 GB and 4 GB had no RAM. So parts of the p2m map and the complete
initrd were subject to being remapped, which led to an early PANIC.

I'll try to add some special handling for the initrd and the p2m
map. In case someone has a better idea: please tell me.

