
Re: [Xen-devel] [V1 PATCH] dom0 pvh: map foreign pfns in our p2m for toolstack



On 24/05/14 03:33, Mukesh Rathor wrote:
> When running as dom0 in pvh mode, foreign pfns that are accessed must be
> added to our p2m, which is managed by xen. This is done via the
> XENMEM_add_to_physmap_range hypercall. It is needed for the toolstack
> building guests and mapping guest memory, for xentrace mapping xen pages,
> etc.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> ---
>  arch/x86/xen/mmu.c | 115 +++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 112 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 86e02ea..8efc066 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2510,6 +2510,93 @@ void __init xen_hvm_init_mmu_ops(void)
>  }
>  #endif
>  
> +#ifdef CONFIG_XEN_PVH
> +/*
> + * Map a foreign gmfn, fgmfn, to a local pfn, lpfn. This is for user space
> + * on pvh dom0 creating a new guest and needing to map domU pages.
> + */
> +static int xlate_add_to_p2m(unsigned long lpfn, unsigned long fgmfn,
> +                         unsigned int domid)
> +{
> +     int rc, err = 0;
> +     xen_pfn_t gpfn = lpfn;
> +     xen_ulong_t idx = fgmfn;
> +
> +     struct xen_add_to_physmap_range xatp = {
> +             .domid = DOMID_SELF,
> +             .foreign_domid = domid,
> +             .size = 1,
> +             .space = XENMAPSPACE_gmfn_foreign,
> +     };
> +     set_xen_guest_handle(xatp.idxs, &idx);
> +     set_xen_guest_handle(xatp.gpfns, &gpfn);
> +     set_xen_guest_handle(xatp.errs, &err);
> +
> +     rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
> +     return rc;

Thanks for the patches. I see two problems with this approach. The first is
that you are completely ignoring the error returned in the "err" variable,
which means you can end up with a pfn that Linux thinks is valid but that is
not mapped to any mfn, so the first access to it will trigger the vioapic
crash.
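
Something along these lines (just an untested sketch, reusing the rc, err and
xatp variables from your xlate_add_to_p2m() above) would at least propagate
the per-page error instead of silently dropping it:

	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
	if (rc < 0)
		return rc;
	/* The hypercall can succeed overall while this single page failed. */
	if (err < 0)
		return err;
	return 0;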

The second is that this seems extremely inefficient: you are issuing one
hypercall for each memory page, when you could instead batch all the pages
into a single hypercall and map them in one shot.
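
Roughly what I have in mind (again just a sketch, not the actual
implementation, and the function name is made up): it assumes the caller has
already collected the foreign gmfns and the local pfns into idxs[] and
gpfns[] arrays of length "count", with a matching errs[] array for the
per-page results:

static int xlate_add_range_to_p2m(xen_ulong_t *idxs, xen_pfn_t *gpfns,
				  int *errs, unsigned int count,
				  unsigned int domid)
{
	unsigned int i;
	int rc;
	struct xen_add_to_physmap_range xatp = {
		.domid = DOMID_SELF,
		.foreign_domid = domid,
		.size = count,
		.space = XENMAPSPACE_gmfn_foreign,
	};

	set_xen_guest_handle(xatp.idxs, idxs);
	set_xen_guest_handle(xatp.gpfns, gpfns);
	set_xen_guest_handle(xatp.errs, errs);

	/* One hypercall maps the whole batch. */
	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
	if (rc < 0)
		return rc;

	/* Check the per-page results as well. */
	for (i = 0; i < count; i++) {
		if (errs[i] < 0)
			return errs[i];
	}
	return 0;
}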

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
