
Re: [Xen-devel] [PATCH 4/4] xen: arm: support up to (almost) 1TB of guest RAM



On Tue, 2014-04-08 at 16:35 +0100, Julien Grall wrote:
> Hi Ian,
> 
> On 04/08/2014 03:19 PM, Ian Campbell wrote:
> > This creates a second bank of RAM starting at 8GB and potentially extending
> > to the 1TB boundary, which is the limit imposed by our current use of a
> > 3 level p2m with 2 pages at level 0 (a 40-bit address space).
> > 
> > I've deliberately left a gap between the two banks just to exercise those
> > code paths.
> > 
> > The second bank is 1016GB in size, which plus the 3GB below 4GB gives a
> > maximum of 1019GB of guest RAM. At the point where the fact that this is
> > slightly less than a full TB starts to become an issue for people we can
> > switch to a 4 level p2m, which would be needed to support guests larger
> > than 1TB anyhow.
> > 
> > Tested on 32-bit with 1, 4 and 6GB guests. Anything more than ~3GB requires
> > an LPAE enabled kernel, or a 64-bit guest.
> 
> What happens if you try to boot a non-LPAE guest with more than ~3GB?
> Does it boot or crash?

It logs a message in dmesg about either truncating or ignoring banks and
then boots with the lesser amount of RAM. Logically it has to, so that you
can install natively using a distro kernel even on a big box and have the
installer choose the LPAE kernel for you.
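
For reference, the bank arithmetic in the description above works out as
follows; a quick, untested sketch (the 3GB/8GB/1TB figures are taken from
the description, everything else is just illustration):

    #include <assert.h>
    #include <stdint.h>

    #define GB(_x) ((uint64_t)(_x) << 30)

    int main(void)
    {
        /* Bank 0: the 3GB of RAM below the 4GB boundary. */
        const uint64_t ram0_size = GB(3);

        /* Bank 1: starts at 8GB and runs up to the 1TB p2m limit. */
        const uint64_t ram1_base = GB(8);
        const uint64_t ram1_size = GB(1024) - ram1_base;

        assert(ram1_size == GB(1016));             /* size of the second bank */
        assert(ram0_size + ram1_size == GB(1019)); /* maximum guest RAM */

        return 0;
    }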

> > +        return 0;
> > +
> > +    DOMPRINTF("%s: populating RAM @ %016"PRIx64"-%016"PRIx64" (%"PRId64"MB)",
> > +              __FUNCTION__,
> > +              (uint64_t)base_pfn << XC_PAGE_SHIFT,
> > +              (uint64_t)(base_pfn + nr_pfns) << XC_PAGE_SHIFT,
> > +              (uint64_t)nr_pfns >> (20-XC_PAGE_SHIFT));
> > +
> > +    for ( pfn = 0; pfn < nr_pfns; pfn++ )
> > +        dom->p2m_host[pfn] = base_pfn + pfn;
> > +
> > +    for ( pfn = rc = allocsz = 0; (pfn < nr_pfns) && !rc; pfn += allocsz )
> 
> It's very confusing to see pfn = rc = allocsz = 0 when the three are not
> related. Any reason not to initialize rc/allocsz earlier?

This was mainly code motion; the original loop was the same. I can try to
clean it up a bit though.

> > +    {
> > +        allocsz = nr_pfns - pfn;
> 
> In fact you are using allocsz here... so you don't need to initialize it.

I'll see if the compiler complains about the use in the for loop
increment.
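
Something along these lines is probably tidier (untested sketch;
populate_chunk() is a hypothetical stand-in for the real per-chunk
allocation, e.g. xc_domain_populate_physmap_exact() over the relevant
slice of p2m_host):

    int rc = 0;
    xen_pfn_t pfn, allocsz;

    for ( pfn = 0; pfn < nr_pfns && !rc; pfn += allocsz )
    {
        /* Size of this chunk; a cap could be applied here if needed. */
        allocsz = nr_pfns - pfn;

        /* Hypothetical placeholder for the real allocation call. */
        rc = populate_chunk(dom, base_pfn + pfn, allocsz);
    }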

> > +    assert(dom->rambase_pfn << XC_PAGE_SHIFT == GUEST_RAM0_BASE);
> 
> Can you add parentheses here for readability?

OK.
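
i.e.:

    assert((dom->rambase_pfn << XC_PAGE_SHIFT) == GUEST_RAM0_BASE);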

> 
> [..]
> 
> > diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> > index 4f0f0e2..6170454 100644
> > --- a/tools/libxl/libxl_arm.c
> > +++ b/tools/libxl/libxl_arm.c
> > @@ -255,9 +255,9 @@ static int make_psci_node(libxl__gc *gc, void *fdt)
> >      return 0;
> >  }
> >  
> > -static int make_memory_node(libxl__gc *gc, void *fdt,
> > -                            unsigned long long base,
> > -                            unsigned long long size)
> > +static int make_one_memory_node(libxl__gc *gc, void *fdt,
> > +                                unsigned long long base,
> > +                                unsigned long long size)
> 
> I would use uint64_t rather than unsigned long long. It makes it clearer
> that base/size are 64 bits.

I will add a patch to change all of the unsigned long longs in this file;
there are several others.
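
So that this prototype eventually becomes something like:

    static int make_one_memory_node(libxl__gc *gc, void *fdt,
                                    uint64_t base, uint64_t size);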

> > +{
> > +    int res;
> > +    /* This had better match libxc's arch_setup_meminit... */
> > +    const uint64_t size0 = size > GUEST_RAM0_SIZE ? GUEST_RAM0_SIZE : size;
> > +    const uint64_t size1 = size > GUEST_RAM0_SIZE ? size - size0 : 0;
> > +
> > +    res = make_one_memory_node(gc, fdt, GUEST_RAM0_BASE, size0);
> > +    if (res) return res;
> > +    if (size1) {
> > +        res = make_one_memory_node(gc, fdt, GUEST_RAM1_BASE, size1);
> > +        if (res) return res;
> > +    }
> 
> Any reason not to gather both banks into one memory node?

This is simpler to code and is equally valid DT.
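
For reference, a combined node would look roughly like the untested sketch
below, using the raw libfdt sequential-write calls and assuming
#address-cells = #size-cells = 2 (with the second <base size> pair dropped
when size1 is 0):

    /* One memory node whose reg property covers both banks. */
    uint64_t regs[] = {
        cpu_to_fdt64(GUEST_RAM0_BASE), cpu_to_fdt64(size0),
        cpu_to_fdt64(GUEST_RAM1_BASE), cpu_to_fdt64(size1),
    };
    int res;

    res = fdt_begin_node(fdt, "memory");
    if (res) return res;
    res = fdt_property_string(fdt, "device_type", "memory");
    if (res) return res;
    res = fdt_property(fdt, "reg", regs, sizeof(regs));
    if (res) return res;
    res = fdt_end_node(fdt);
    if (res) return res;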

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

