
Re: [Xen-devel] [PATCH v1 3/8]: PVH startup changes (enlighten.c)



On Wed, 2012-09-26 at 12:33 +0100, Stefano Stabellini wrote:
> On Wed, 26 Sep 2012, Mukesh Rathor wrote:
> > On Tue, 25 Sep 2012 11:03:13 +0100
> > Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > 
> > > On Mon, 24 Sep 2012, Mukesh Rathor wrote:
> > > > On Mon, 24 Sep 2012 13:07:19 +0100
> > > > Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > > > 
> > > > > On Fri, 21 Sep 2012, Mukesh Rathor wrote:
> > > > > > +           return;
> > > > > > +
> > > > > >     if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> > > > > >             set_fixmap(FIX_PARAVIRT_BOOTMAP,
> > > > > >                        xen_start_info->shared_info);
> > > > > >
> > > > > > @@ -1044,6 +1058,10 @@ void xen_setup_shared_info(void)
> > > > > >             HYPERVISOR_shared_info =
> > > > > >                     (struct shared_info
> > > > > > *)__va(xen_start_info->shared_info); 
> > > > > > +   /* PVH TBD/FIXME: vcpu info placement in phase 2 */
> > > > > > +   if (xen_pvh_domain())
> > > > > > +           return;
> > > > > 
> > > > > It seems that if xen_initial_domain() we always skip the
> > > > > initialization, while if !xen_initial_domain() we only initialize
> > > > > HYPERVISOR_shared_info. I don't understand why we have this
> > > > > difference.
> > > > 
> > > > The comment in xen_pvh_guest_init() explains it. For domU the
> > > > library maps the pfn at shared_info, i.e., shared_info is a pfn. For
> > > > dom0, it's the mfn. Dom0 then allocates a pfn via extend_brk and
> > > > maps the mfn to it. This happens in the common hvm code,
> > > > xen_hvm_init_shared_info().
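
For concreteness, the common HVM-style remap being described boils down to:
pick a local pfn, then ask Xen to place the shared info frame there via
XENMEM_add_to_physmap. A simplified sketch of that flow (not the exact
enlighten.c code; the function name here is illustrative, and error handling
and PVH-specific checks are omitted):

    /* Simplified sketch of the xen_hvm_init_shared_info()-style path.
     * Needs e.g. <xen/interface/memory.h> and <asm/xen/hypercall.h>. */
    static struct shared_info *shared_info_page;

    static void init_shared_info_from_mfn(void)
    {
            struct xen_add_to_physmap xatp;

            if (!shared_info_page)
                    /* Grab a page-aligned pfn from the kernel brk area. */
                    shared_info_page = extend_brk(PAGE_SIZE, PAGE_SIZE);

            /* Ask Xen to map the domain's shared info frame at that pfn. */
            xatp.domid = DOMID_SELF;
            xatp.idx   = 0;
            xatp.space = XENMAPSPACE_shared_info;
            xatp.gpfn  = __pa(shared_info_page) >> PAGE_SHIFT;
            if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
                    BUG();

            HYPERVISOR_shared_info = shared_info_page;
    }
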
> > > 
> > > This difference is really subtle, it would be nice to get rid of it.
> > > Could Xen allocate a pfn for dom0?
> > 
> > Not easily.
> > 
> > > Otherwise could we have the tools allocate an mfn instead of a pfn?
> > > In fact, looking at xc_dom_x86.c, alloc_magic_pages explicitly
> > > behaves differently for xc_dom_feature_translated guests and
> > > allocates a pfn instead of an mfn. Maybe we could get rid of that
> > > special case: less code in libxc, a common way of allocating the
> > > shared_info page for domU and dom0 => win.
> > 
> > I wish it were that simple. For PV and PVH domU, the shared page is
> > already set up; all we need to do is __va(shared_info). But for HVM
> > domUs and PVH dom0, we need to make a hypercall with a pfn to get it
> > remapped.
> 
> For PVH domU it's already set up as a pfn only because in
> tools/libxc/xc_dom_x86.c:alloc_magic_pages we have this code:
> 
>     if ( xc_dom_feature_translated(dom) )
>         dom->shared_info_pfn = xc_dom_alloc_page(dom, "shared info");
> 
> and in tools/libxc/xc_dom_x86.c:start_info_x86_64 we have:
> 
>     xen_pfn_t shinfo = xc_dom_feature_translated(dom) ?
>         dom->shared_info_pfn : dom->shared_info_mfn;
> 
> If we simply got rid of the two "if xc_dom_feature_translated(dom)"
> checks, wouldn't we get an mfn for PVH domU too? AFAICT all the other
> cases would remain unmodified, but PVH domU would start getting an mfn
> for shared_info.
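
With those two conditionals gone, the libxc side would end up looking
something like this (illustrative only, not a tested patch):

    /* In alloc_magic_pages(): drop the translated-guest special case, so
     * no separate shared_info_pfn is allocated for PVH domUs. */

    /* In start_info_x86_64(): always hand the guest the mfn. */
    xen_pfn_t shinfo = dom->shared_info_mfn;
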

The other option would be to have the hypervisor-side builder for a PVH
dom0 add an entry to the P2M for the shared info as well. Arguably that
would be more PV-like and therefore the correct thing for a PVH guest,
since it makes the shared info immediately available to dom0.
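
A very rough sketch of what that might look like in the dom0 builder
follows; the predicate, the gfn choice (shared_info_gfn) and the error
handling are placeholders, and the guest_physmap_add_page() usage is from
memory, so treat it as illustrative rather than a real patch:

    /* Hypothetical addition to the x86 dom0 builder: insert the domain's
     * shared_info frame into dom0's P2M at some builder-chosen gfn, so a
     * PVH dom0 sees shared_info as a pfn just like a PVH domU does. */
    if ( paging_mode_translate(d) )     /* i.e. a PVH-style dom0 */
    {
        unsigned long gfn = shared_info_gfn;   /* chosen by the builder */

        if ( guest_physmap_add_page(d, gfn, virt_to_mfn(d->shared_info), 0) )
            panic("Cannot map shared_info into dom0 P2M");
    }
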

> > Changing the
> > tool to map a pfn would result in an unnecessary hypercall for all PV and
> > PVH domUs. It's only two lines of code, so let's just leave it. I'll
> > improve the comment.
> 
> Yes, there would be one more unnecessary hypercall, but we would get rid
> of 4 "if"s. I think it's a good trade-off.

Certainly minimising the number of hypercalls is not the most important
thing in a code path which is run once at start of day.

