Re: [Xen-devel] [PATCH] libxc: introduce a per architecture scratch pfn for temporary grant mapping
Hi,

Would this patch be suitable as a temporary solution (i.e. until a better
approach is taken) in the tree? I plan to resend a v2 with the change Ian
requested.

Regards,

On 13/01/15 20:10, Julien Grall wrote:
> The code to initialize the grant table in libxc uses
> xc_domain_maximum_gpfn() + 1 to get a guest pfn for mapping the grant
> frame and to initialize it.
>
> This solution has two major issues:
>  - The check on the return value of xc_domain_maximum_gpfn() is buggy
>    because xen_pfn_t is unsigned and, in case of an error, -ERRNO is
>    returned, which is never caught by ( pfn <= 0 ).
>  - The guest memory layout may be filled up to the end, i.e.
>    xc_domain_maximum_gpfn() + 1 gives either 0 or a PFN that is invalid
>    due to hardware limitations.
>
> Furthermore, on ARM, xc_domain_maximum_gpfn() is not implemented and
> returns -ENOSYS. This makes libxc always use the same PFN, which may
> collide with an already mapped region (see xen/include/public/arch-arm.h
> for the layout).
>
> This patch only addresses the problem for ARM; the x86 version keeps the
> same behavior (i.e. xc_domain_maximum_gpfn() + 1), as I'm not familiar
> with Xen x86.
>
> A new function, xc_core_arch_get_scratch_gpfn, is introduced to be able
> to choose the gpfn per architecture.
>
> For the ARM version, we use GUEST_GNTTAB_BASE, which is the base of the
> region used by the guest to map the grant table. At build time, nothing
> is mapped there.
>
> At the same time, correctly check the return value of
> xc_domain_maximum_gpfn for x86.
>
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
>
> ---
> I chose to take this approach after the discussion on implementing
> XENMEM_maximum_gpfn on ARM (https://patches.linaro.org/32894/).
>
> This patch has only been build-tested on x86, and the same behavior has
> been kept (i.e. xc_domain_maximum_gpfn() + 1). I would be happy if
> someone from the x86 world could look into a possible solution.
> ---
>  tools/libxc/xc_core.h     |  3 +++
>  tools/libxc/xc_core_arm.c | 17 +++++++++++++++++
>  tools/libxc/xc_core_x86.c | 17 +++++++++++++++++
>  tools/libxc/xc_dom_boot.c | 18 ++++++++++++------
>  4 files changed, 49 insertions(+), 6 deletions(-)
>
> diff --git a/tools/libxc/xc_core.h b/tools/libxc/xc_core.h
> index 10cbfca..5867030 100644
> --- a/tools/libxc/xc_core.h
> +++ b/tools/libxc/xc_core.h
> @@ -148,6 +148,9 @@ int xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width,
>                                    shared_info_any_t *live_shinfo,
>                                    xen_pfn_t **live_p2m, unsigned long *pfnp);
>
> +int xc_core_arch_get_scratch_gpfn(xc_interface *xch, domid_t domid,
> +                                  xen_pfn_t *gpfn);
> +
>
>  #if defined (__i386__) || defined (__x86_64__)
>  # include "xc_core_x86.h"
> diff --git a/tools/libxc/xc_core_arm.c b/tools/libxc/xc_core_arm.c
> index 2fbcf3f..4c34191 100644
> --- a/tools/libxc/xc_core_arm.c
> +++ b/tools/libxc/xc_core_arm.c
> @@ -96,6 +96,23 @@ xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width, xc_do
>      return xc_core_arch_map_p2m_rw(xch, dinfo, info,
>                                     live_shinfo, live_p2m, pfnp, 1);
>  }
> +
> +int
> +xc_core_arch_get_scratch_gpfn(xc_interface *xch, domid_t domid,
> +                              xen_pfn_t *gpfn)
> +{
> +    /*
> +     * The Grant Table region space is not used until the guest is
> +     * booting. Use the first page for the scratch pfn.
> +     */
> +    XC_BUILD_BUG_ON(GUEST_GNTTAB_SIZE < XC_PAGE_SIZE);
> +
> +    *gpfn = GUEST_GNTTAB_BASE >> XC_PAGE_SHIFT;
> +
> +    return 0;
> +}
> +
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxc/xc_core_x86.c b/tools/libxc/xc_core_x86.c
> index f05060a..b157d85 100644
> --- a/tools/libxc/xc_core_x86.c
> +++ b/tools/libxc/xc_core_x86.c
> @@ -205,6 +205,23 @@ xc_core_arch_map_p2m_writable(xc_interface *xch, unsigned int guest_width, xc_do
>      return xc_core_arch_map_p2m_rw(xch, dinfo, info,
>                                     live_shinfo, live_p2m, pfnp, 1);
>  }
> +
> +int
> +xc_core_arch_get_scratch_gpfn(xc_interface *xch, domid_t domid,
> +                              xen_pfn_t *gpfn)
> +{
> +    int rc;
> +
> +    rc = xc_domain_maximum_gpfn(xch, domid);
> +
> +    if ( rc <= 0 )
> +        return rc;
> +
> +    *gpfn = rc;
> +
> +    return 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
> index f0a1c64..a141eb5 100644
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -33,6 +33,7 @@
>
>  #include "xg_private.h"
>  #include "xc_dom.h"
> +#include "xc_core.h"
>  #include <xen/hvm/params.h>
>  #include <xen/grant_table.h>
>
> @@ -365,7 +366,7 @@ int xc_dom_gnttab_hvm_seed(xc_interface *xch, domid_t domid,
>                             domid_t xenstore_domid)
>  {
>      int rc;
> -    xen_pfn_t max_gfn;
> +    xen_pfn_t scratch_gpfn;
>      struct xen_add_to_physmap xatp = {
>          .domid = domid,
>          .space = XENMAPSPACE_grant_table,
> @@ -375,16 +376,21 @@ int xc_dom_gnttab_hvm_seed(xc_interface *xch, domid_t domid,
>          .domid = domid,
>      };
>
> -    max_gfn = xc_domain_maximum_gpfn(xch, domid);
> -    if ( max_gfn <= 0 ) {
> +    rc = xc_core_arch_get_scratch_gpfn(xch, domid, &scratch_gpfn);
> +    if ( rc < 0 )
> +    {
>          xc_dom_panic(xch, XC_INTERNAL_ERROR,
> -                     "%s: failed to get max gfn "
> +                     "%s: failed to get a scratch gfn "
>                       "[errno=%d]\n",
>                       __FUNCTION__, errno);
>          return -1;
>      }
> -    xatp.gpfn = max_gfn + 1;
> -    xrfp.gpfn = max_gfn + 1;
> +    xatp.gpfn = scratch_gpfn;
> +    xrfp.gpfn = scratch_gpfn;
> +
> +    xc_dom_printf(xch, "%s: called, pfn=0x%"PRI_xen_pfn, __FUNCTION__,
> +                  scratch_gpfn);
> +
>
>      rc = do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp));
>      if ( rc != 0 )

--
Julien Grall
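To make the first issue in the commit message concrete: because xen_pfn_t is an unsigned type, a negative -ERRNO return wraps around to a huge positive value, so a ( pfn <= 0 ) test degenerates to ( pfn == 0 ) and never fires on error. A minimal standalone sketch; the uint64_t stand-in for xen_pfn_t and the stub function are illustrative, not libxc code:

    #include <errno.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t xen_pfn_t;   /* stand-in; the real typedef is per-arch */

    /* Hypothetical stub mimicking xc_domain_maximum_gpfn() failing on ARM. */
    static int stub_maximum_gpfn(void)
    {
        return -ENOSYS;
    }

    int main(void)
    {
        /* Old pattern: -ENOSYS (-38) wraps to 0xffffffffffffffda. */
        xen_pfn_t max_gfn = stub_maximum_gpfn();

        if ( max_gfn <= 0 )   /* for an unsigned type this is just == 0 */
            printf("error caught\n");
        else
            printf("error missed: max_gfn=0x%" PRIx64 "\n", max_gfn);

        /* Patched pattern: keep the result in a signed int first. */
        int rc = stub_maximum_gpfn();

        if ( rc <= 0 )
            printf("error caught: rc=%d\n", rc);

        return 0;
    }

This is why the patch's x86 helper stores the hypercall result in an int and only assigns it to *gpfn after the sign check has passed.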
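The ARM side needs no hypercall at all: it derives the scratch pfn from the guest's fixed grant-table region. A self-contained illustration of that arithmetic, using placeholder values for the layout constants (the real GUEST_GNTTAB_BASE/GUEST_GNTTAB_SIZE are defined in xen/include/public/arch-arm.h and may differ), together with a rough equivalent of the XC_BUILD_BUG_ON compile-time check:

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder layout constants, illustrative values only. */
    #define GUEST_GNTTAB_BASE  0x38000000ULL
    #define GUEST_GNTTAB_SIZE  0x00020000ULL

    #define XC_PAGE_SHIFT      12
    #define XC_PAGE_SIZE       (1ULL << XC_PAGE_SHIFT)

    /* Rough equivalent of XC_BUILD_BUG_ON: compilation fails (negative
     * array size) if the region cannot hold even one page. */
    typedef char gnttab_region_too_small
        [(GUEST_GNTTAB_SIZE >= XC_PAGE_SIZE) ? 1 : -1];

    int main(void)
    {
        /* First page of the region, as the patch's ARM helper computes it. */
        uint64_t gpfn = GUEST_GNTTAB_BASE >> XC_PAGE_SHIFT;

        printf("scratch gpfn = 0x%llx\n", (unsigned long long)gpfn);
        return 0;
    }

The shift simply converts a byte address into a page frame number; since nothing is mapped in that region at domain-build time, its first page is free to serve as scratch space.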