
Re: [Xen-devel] bogus gfn - mfn - gfn - mfn checks in guest_physmap_add_entry



On Wed, Nov 24, Tim Deegan wrote:

> I think that adding the paging types (in particular p2m_ram_paged) to
> P2M_RAM_TYPES is a mistake, unless gfn_to_mfn() guarantees to get the
> pfn into a state where it's backed by an MFN before it returns (which it
> probably can't).

Do you mean p2m_mem_paging_evict() should invalidate the mfn, and
p2m_mem_paging_resume() should update the mfn to the current gfn?
My patches do that.

> Another option would be to audit all callers of p2m_is_ram() and check
> whether they can handle paged-out PFNs (which I though had already been
> done but seems not to be).  Keir's VCPU yield patch might be useful in
> some of those places too.

I think most if not all of them are caught by my changes already.

> > I would guess that if guest_physmap_add_entry() gets a page with type
> > p2m_ram_rw, nothing else can own that page. Is that right?
> > If so, this ASSERT or most of the loop can be removed.
> 
> The loop shouldn't be removed without some more thought about aliasing
> PFNs, and I think that removing the ASSERT plasters over a deeper
> problem.

What is supposed to happen when building a guest?

I think at some point all (or most) mfns are passed to dom0 and the
machine_to_phys_mapping array is in a state to reflect that.

Then the first guest is created.
How is memory for this guest freed from dom0 and assigned to the guest?
Why are these mfns not invalidated in machine_to_phys_mapping[]?

For a guest shutdown, p2m_teardown() seems to be the place to
invalidate the mfns in machine_to_phys_mapping[].

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

