[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v2 4/4] libxc/arm: allocate xenstore and console pages



On Fri, 2012-07-13 at 11:32 -0400, Stefano Stabellini wrote:
> On Thu, 12 Jul 2012, Tim Deegan wrote:
> > At 11:55 +0100 on 04 Jul (1341402949), Stefano Stabellini wrote:
> > >  static int alloc_magic_pages(struct xc_dom_image *dom)
> > >  {
> > > +    int rc, i, allocsz;
> > > +    xen_pfn_t store_pfn, console_pfn, p2m[NR_MAGIC_PAGES];
> > > +
> > >      DOMPRINTF_CALLED(dom->xch);
> > > -    /* XXX
> > > -     *   dom->p2m_guest
> > > -     *   dom->start_info_pfn
> > > -     *   dom->xenstore_pfn
> > > -     *   dom->console_pfn
> > > -     */
> > > +
> > > +    for (i = 0; i < NR_MAGIC_PAGES; i++)
> > > +        p2m[i] = dom->rambase_pfn + dom->total_pages + i;
> > > +
> > > +    for ( i = rc = allocsz = 0;
> > > +          (i < NR_MAGIC_PAGES) && !rc;
> > > +          i += allocsz) {
> > > +        allocsz = NR_MAGIC_PAGES - i;
> > > +        rc = xc_domain_populate_physmap_exact(
> > > +                dom->xch, dom->guest_domid, allocsz,
> > > +                0, 0, &p2m[i]);
> > > +    }
> > 
> > What does this loop do?  It seems like it can only ever execute once.
> 
> I think that you are right.
> In that case also the same loop in arch_setup_meminit in Ian's patch
> should probably be removed:
> 
> http://marc.info/?l=xen-devel&m=134089793916569

In that loop it can make only partial progress in an iteration, due to
the "if (alloc_size > 1024*1024)" check clamping the allocation size. In
that case we will need to go round the loop multiple times.

In your case you always allocate exactly enough (NR_MAGIC_PAGES - i
pages) on the first time round the loop, so it terminates after one
iteration. Since NR_MAGIC_PAGES is small you don't need to do the
allocations in chunks.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
