Re: [Xen-devel] dom0 pvops and rearranging memory layout
On 01/23/2015 12:35 PM, Andrew Cooper wrote:
> On 23/01/15 10:32, Juergen Gross wrote:
>> Hi,
>>
>> While testing new patches to support dom0 with more than 512 GB I
>> stumbled over an issue which, I think, has been present in pvops for
>> some time now.
>>
>> On boot the kernel rearranges the memory layout to match the host
>> E820 map. This is done to be able to access all I/O areas with
>> identity-mapped pfns (pfn == mfn). So basically some memory pages
>> change their pfns while the mfns stay the same. There is no check
>> whether the moved memory areas are actually in use (e.g. via
>> memblock_is_reserved()). This can lead to cases where memory that is
>> in use ends up in an area which is made available for new memory
>> allocations soon afterwards.
>>
>> The memory in question could be the initrd, the p2m map presented to
>> dom0 by the hypervisor, or (hopefully only in theory) even the kernel
>> itself or its initial page tables built by the hypervisor.
>>
>> In my test I had a p2m map of nearly 2 GB and the area between 2 GB
>> and 4 GB had no RAM, so parts of the p2m map and the complete initrd
>> were subject to being remapped, which led to an early PANIC.
>>
>> I'll try to add some special handling for the initrd and the p2m map.
>> In case someone has a better idea: please tell me.
>
> The relocation is done based only on the e820, is it not?

Yes.

> I wonder whether it might be reasonable to extend construct_dom0/libelf
> to avoid constructing a p2m where the pfns of built data (kernel,
> initrd, p2m and initial page tables) alias with host I/O regions.

That was my first idea, too. OTOH this would require a rather new
hypervisor with this functionality in order to run a pvops dom0 on such
a machine. And can we be sure that an existing non-pvops dom0 (or even
an old pvops one) would work with such a change?

Juergen
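
For context, the relocation Juergen describes is done by the pvops
kernel's early Xen memory setup (xen_memory_setup() and its remap
helpers in arch/x86/xen/setup.c). Below is a minimal sketch, not an
actual patch, of the kind of in-use check he says is missing; the
helper name xen_pfn_range_in_use() is hypothetical, while
memblock_is_reserved() and PFN_PHYS() are the existing kernel APIs
mentioned in the thread.

/*
 * Sketch only: before a pfn range is vacated during the E820-driven
 * remap, ask memblock whether any page in it is still reserved.
 * Reserved regions cover the kernel image, the initrd and other data
 * registered early in boot, i.e. exactly the memory that must not be
 * handed back to the allocator.
 */
#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/pfn.h>
#include <linux/types.h>

static bool __init xen_pfn_range_in_use(unsigned long start_pfn,
					unsigned long end_pfn)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++)
		if (memblock_is_reserved(PFN_PHYS(pfn)))
			return true;

	return false;
}

Such a check would only detect a conflict; the remap code would still
have to relocate the affected data (for instance the initrd or the
hypervisor-provided p2m list) before releasing the old pfns, which is
the special handling Juergen says he will try to add.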