
Re: [Xen-devel] [PATCH RFC] xen: allow guest_remove_page to remove p2m_mmio_direct pages



>>> On 18.06.14 at 13:38, <roger.pau@xxxxxxxxxx> wrote:
> If a guest tries to do a foreign/grant mapping in a memory region
> marked as p2m_mmio_direct, Xen will complain with the following
> message:
> 
> (XEN) memory.c:241:d0v0 Bad page free for domain 0
> 
> The mapping will succeed nevertheless. This is especially
> problematic for PVH Dom0, in which we map all the e820 holes and
> memory up to 4GB as p2m_mmio_direct.
> 
> In order to deal with it, add a special case to guest_remove_page
> for p2m_mmio_direct regions of the hardware domain, which calls
> clear_mmio_p2m_entry in order to remove the mapping.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Keir Fraser <keir@xxxxxxx>
> Cc: Tim Deegan <tim@xxxxxxx>
> ---
> I've spoken with Jan about this issue on IRC, and he felt that
> adding such a special case in guest_remove_page is not the right
> thing to do; I could instead look into at least two other
> approaches:
> 
>  - Make XENMEM_remove_from_physmap able to remove p2m_mmio_direct
>    ranges, right now it doesn't work with those ranges because
>    get_page_from_gfn fails when used on p2m_mmio_direct p2m entries.
> 
>  - Extend HVMOP_set_mem_type to be able to perform the conversion from
>    p2m_mmio_direct to p2m_invalid and vice versa on Dom0 request.
> 
> I have to admit I don't like either of these two options, because
> they would mean we need to perform another hypercall before
> mapping, which will make it even slower.

Looking at where and how you make the change you prefer makes me
even more certain this isn't the right thing.

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -204,6 +204,12 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>          p2m_mem_paging_drop_page(d, gmfn, p2mt);
>          return 1;
>      }
> +    if ( is_hardware_domain(d) && p2mt == p2m_mmio_direct )
> +    {
> +        clear_mmio_p2m_entry(d, gmfn);
> +        put_gfn(d, gmfn);
> +        return 1;
> +    }
>  #else
>      mfn = gmfn_to_mfn(d, gmfn);
>  #endif

First of all, the #ifdef CONFIG_X86 block you add this to is wrongly
tagged as x86 - it really is specific to whether the architecture
supports paging. As a consequence, this would have to go into a
separate, similar conditional, and I guess you then won't completely
disagree that such x86-specific code portions in a supposedly
generic function feel wrong.

And then you blindly call clear_mmio_p2m_entry() for all
p2m_mmio_direct entries - I'd suggest restricting this at least to
those entries where the MFN is INVALID_MFN, so that real MMIO ranges
would be kept alive. But wait - I think on IRC you said that all
these holes look like real ranges, in which case we'd have no choice
but to allow Dom0 to shoot itself in the foot.
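
Roughly, against the hunk quoted above, the restriction I mean would
look like this (untested, and probably moot if the holes really do
carry valid MFNs):

    if ( is_hardware_domain(d) && p2mt == p2m_mmio_direct &&
         mfn == INVALID_MFN )
    {
        /* Only tear down direct-MMIO entries which don't reference a
         * real MFN, so that genuine MMIO ranges stay mapped. */
        clear_mmio_p2m_entry(d, gmfn);
        put_gfn(d, gmfn);
        return 1;
    }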

Finally, how is this same situation going to be handled for PVH
driver domains or disaggregated PVH control domains? The
is_hardware_domain() check here seems too much of a special case,
too...

All in all I continue to think that rather than hacking our way
around the given situation it would be much better to avoid getting
into that situation in the first place - perhaps by having Dom0
declare one or more address ranges as non-MMIO (after all, taking
into account your concern about needing an extra hypercall, there's
no need for Dom0 to do this on a page-by-page basis; it ought to
know where the address ranges suitable for putting foreign or grant
mappings are).
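
Purely to illustrate the kind of interface I mean (the subop name
and structure below are made up for this example, they are not an
existing ABI), something along these lines would let Dom0 convert a
whole range with a single hypercall:

    /* Hypothetical memory op: convert a GFN range of the caller's
     * p2m from p2m_mmio_direct to p2m_invalid, making it usable for
     * foreign/grant mappings without a per-page call. */
    #define XENMEM_declare_non_mmio 99   /* hypothetical subop number */
    struct xen_declare_non_mmio {
        domid_t  domid;      /* IN: normally DOMID_SELF for Dom0  */
        uint16_t pad[3];
        uint64_t first_gfn;  /* IN: start of the range            */
        uint64_t nr_gfns;    /* IN: number of frames in the range */
    };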

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

