
Re: [Xen-devel] [PATCH 2/2] domain: use PGC_extra domheap page for shared_info



On Fri, 2020-03-06 at 13:25 +0100, Jan Beulich wrote:
> And likely interrupt remapping tables, device tables, etc. I don't
> have a clear picture on how you want to delineate ones in use in any
> such way from ones indeed free to re-use.

Right. The solution there is two-fold:

For pages in the general population (outside the reserved bootmem), the
responsibility lies with the new Xen. As it processes the live update
information that it receives from the old Xen, it must mark those pages
as in-use so that it doesn't attempt to allocate them.

That's what this bugfix paves the way for: it avoids putting *bad*
pages into the buddy allocator by setting the page state before the
page is seen by init_heap_pages(), and by making init_heap_pages()
skip pages marked as broken.

It's trivial, then, to make init_heap_pages() *also* skip pages which
get marked as "already in use" when we process the live update data.
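To make the skip logic concrete, here is a minimal toy model of that check. This is not Xen's actual code: the real implementation works on struct page_info and count_info bits such as PGC_broken, whereas the enum, struct, and state names below (PAGE_BROKEN, PAGE_PRESERVED) are purely illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Toy per-page state; Xen's real code tracks this in count_info
 * bits (e.g. PGC_broken) on struct page_info. */
enum page_state {
    PAGE_FREE,       /* free to hand to the buddy allocator */
    PAGE_BROKEN,     /* known-bad page, must never be allocated */
    PAGE_PRESERVED,  /* in use across live update, must be skipped */
};

struct page {
    enum page_state state;
};

/* Model of init_heap_pages(): only pages still marked free are
 * added to the heap; broken and preserved pages are skipped. */
static size_t init_heap_pages(struct page *pg, size_t nr,
                              size_t *heap_count)
{
    size_t added = 0;

    for ( size_t i = 0; i < nr; i++ )
    {
        if ( pg[i].state != PAGE_FREE )
            continue; /* broken or already in use: leave it alone */
        (*heap_count)++;
        added++;
    }

    return added;
}
```

The point is that a single early-exit test covers both cases: once live update data has marked a page PAGE_PRESERVED before init_heap_pages() runs, it gets exactly the same treatment as a broken page.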


The second part, as discussed, is that the old Xen must not put any of
those "needs to be preserved" pages into the reserved bootmem region.

That's what Paul is working on. Part of that is that we stop sharing
xenheap pages with domains, but we also need to use the right
allocator for any IOMMU page tables, IRQ remapping tables, etc. which
need to be preserved.

That partly falls out of Hongyan's secret hiding work anyway: since we
no longer get to assume that xenheap pages are always mapped, we might
as *well* be allocating those from domheap.
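The access pattern that implies can be sketched as a toy model. The map_page()/unmap_page() functions below are stand-ins for the idea behind Xen's map_domain_page()/unmap_domain_page(), not the real API: a domheap page has no permanent mapping, so every access is bracketed by an explicit transient map/unmap pair.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-ins for transient domheap mappings. nr_mapped tracks
 * outstanding mappings so a leak is detectable. */
static int nr_mapped;

static void *map_page(void *page)
{
    nr_mapped++;
    return page; /* identity "mapping" in this model */
}

static void unmap_page(void *va)
{
    (void)va;
    nr_mapped--;
}

/* Write an entry into an IOMMU-style table backed by a domheap
 * page: map transiently, write, unmap. No permanent mapping is
 * ever assumed. */
static void write_table_entry(void *page, size_t idx,
                              unsigned long val)
{
    unsigned long *tbl = map_page(page);

    tbl[idx] = val;
    unmap_page(tbl);
}
```

Code written this way works the same whether or not the page happens to have a permanent mapping, which is exactly why domheap becomes the natural allocator once the xenheap direct-map assumption is gone.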





