Re: [Xen-devel] bogus gfn - mfn - gfn - mfn checks in guest_physmap_add_entry
At 14:41 +0000 on 24 Nov (1290609698), Olaf Hering wrote:
> > Another option would be to audit all callers of p2m_is_ram() and check
> > whether they can handle paged-out PFNs (which I thought had already been
> > done but seems not to be). Keir's VCPU yield patch might be useful in
> > some of those places too.
>
> I think most if not all of them are caught by my changes already.
In that case, maybe removing the p2m_paging types (at least those where
using the mfn immediately isn't sensible) from p2m_is_ram() and chasing
the last few users would be the right thing to do.
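To make that concrete, the rough shape I have in mind is below.  This is
only a sketch: the type and macro names approximate the p2m_type_t
helpers rather than quoting any particular tree, so treat them as
assumptions.

    /* Sketch: count only immediately-usable frames as "ram", and give
     * the paging states their own predicate so callers must opt in. */
    typedef enum {
        p2m_ram_rw,          /* normal read/write guest RAM        */
        p2m_ram_logdirty,    /* RAM tracked for log-dirty          */
        p2m_ram_ro,          /* read-only RAM                      */
        p2m_ram_paging_out,  /* being written out by the pager     */
        p2m_ram_paged,       /* evicted: no mfn backs this gfn     */
        p2m_ram_paging_in,   /* being brought back in by the pager */
    } p2m_type_t;

    #define p2m_to_mask(t)   (1UL << (t))

    #define P2M_RAM_TYPES    (p2m_to_mask(p2m_ram_rw) | \
                              p2m_to_mask(p2m_ram_logdirty) | \
                              p2m_to_mask(p2m_ram_ro))
    #define P2M_PAGING_TYPES (p2m_to_mask(p2m_ram_paging_out) | \
                              p2m_to_mask(p2m_ram_paged) | \
                              p2m_to_mask(p2m_ram_paging_in))

    #define p2m_is_ram(t)    (!!(p2m_to_mask(t) & P2M_RAM_TYPES))
    #define p2m_is_paging(t) (!!(p2m_to_mask(t) & P2M_PAGING_TYPES))

Callers that really can cope with a paged-out frame would then test
p2m_is_paging() explicitly (and yield/retry as needed) instead of
getting those types for free from p2m_is_ram().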
> What is supposed to happen when building a guest?
>
> I think at some point all (or most) mfns are passed to dom0 and the
> machine_to_phys_mapping array is in a state to reflect that.
That's not necessarily the case - XenServer has a fixed-size dom0 and
leaves all other RAM in the free pools.
> Then the first guest is created.
> How is memory for this guest freed from dom0 and assigned to the guest?
> Why are these mfns not invalidated in machine_to_phys_mapping[]?
>
> For a guest shutdown, p2m_teardown() seems to be the place to
> invalidate the mfns in machine_to_phys_mapping[].
The problem is that PV guests set their own m2p entries and can't be
relied on to tear them down.
The guest_physmap_add_entry code, and the p2m audit code, would be made
more reliable if, say, alloc_domheap_pages and/or free_domheap_pages
zapped the m2p entries for MFNs they touched.
I think originally that wasn't done because the alloc is quickly
followed by another write of the m2p, but that's probably an over-keen
optimization.
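Something along these lines, for illustration only -- it leans on the
existing set_gpfn_from_mfn()/INVALID_M2P_ENTRY helpers as I remember
them, so double-check the names against the tree:

    #include <asm/mm.h>  /* assumed home of set_gpfn_from_mfn(),
                          * INVALID_M2P_ENTRY on x86 */

    /* Sketch: invalidate the m2p entries covering an order-'order'
     * allocation so that a stale, guest-written m2p entry can never
     * survive a free/realloc cycle and confuse guest_physmap_add_entry()
     * or the p2m audit code. */
    static void zap_m2p_entries(unsigned long first_mfn, unsigned int order)
    {
        unsigned long i;

        for ( i = 0; i < (1UL << order); i++ )
            set_gpfn_from_mfn(first_mfn + i, INVALID_M2P_ENTRY);
    }

alloc_domheap_pages() and free_domheap_pages() would call that on the
range they hand out or take back; the normal allocation path writes the
real m2p entry shortly afterwards anyway, so the extra store is cheap.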
Tim.
--
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)