
RE: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>
  • From: Penny Zheng <Penny.Zheng@xxxxxxx>
  • Date: Thu, 8 Jul 2021 09:09:49 +0000
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>
  • Delivery-date: Thu, 08 Jul 2021 09:10:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Tuesday, July 6, 2021 2:54 PM
> To: Penny Zheng <Penny.Zheng@xxxxxxx>
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; Wei Chen
> <Wei.Chen@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> sstabellini@xxxxxxxxxx; Julien Grall <julien@xxxxxxx>
> Subject: Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and
> alloc_domstatic_pages
> 
> On 06.07.2021 07:58, Penny Zheng wrote:
> >> From: Jan Beulich <jbeulich@xxxxxxxx>
> >> Sent: Thursday, June 10, 2021 6:23 PM
> >>
> >> On 07.06.2021 04:43, Penny Zheng wrote:
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
> >>>      return pg;
> >>>  }
> >>>
> >>> +#ifdef CONFIG_STATIC_ALLOCATION
> >>> +/*
> >>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory.
> >>> + * It is the equivalent of alloc_heap_pages for static memory  */
> >>> +static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
> >>> +                                               mfn_t smfn,
> >>> +                                               unsigned int memflags)
> >>> +{
> >>> +    bool need_tlbflush = false;
> >>> +    uint32_t tlbflush_timestamp = 0;
> >>> +    unsigned long i;
> >>> +    struct page_info *pg;
> >>> +
> >>> +    /* For now, it only supports allocating at specified address. */
> >>> +    if ( !mfn_valid(smfn) || !nr_mfns )
> >>> +    {
> >>> +        printk(XENLOG_ERR
> >>> +               "Invalid %lu static memory starting at
> >>> + %"PRI_mfn"\n",
> >>
> >> Reading a log containing e.g. "Invalid 0 static memory starting at
> >> ..." I don't think I would recognize that the "0" is the count of pages.
> >
> > Sure. How about "try to allocate out of range page %"PRI_mfn"\n"?
> 
> This still doesn't convey _both_ parts of the if() that would cause the log
> message to be issued.
> 

Sorry. How about
"
        printk(XENLOG_ERR
               "Either out-of-range static memory starting at %"PRI_mfn" "
               "or invalid number of pages: %lu.\n",
               mfn_x(smfn), nr_mfns);
"

> >>> +               nr_mfns, mfn_x(smfn));
> >>> +        return NULL;
> >>> +    }
> >>> +    pg = mfn_to_page(smfn);
> >>> +
> >>> +    for ( i = 0; i < nr_mfns; i++ )
> >>> +    {
> >>> +        /*
> >>> +         * Reference count must continuously be zero for free pages
> >>> +         * of static memory(PGC_reserved).
> >>> +         */
> >>> +        ASSERT(pg[i].count_info & PGC_reserved);
> >>
> >> What logic elsewhere guarantees that this will hold? ASSERT()s are to
> >> verify that assumptions are met. But I don't think you can sensibly
> >> assume the caller knows the range is reserved (and free), or else you
> >> could get away without any allocation function.
> >
> > The caller shall only call alloc_staticmem_pages when it knows the range
> > is reserved; e.g. alloc_staticmem_pages is only called in the context
> > of alloc_domstatic_pages for now.
> 
> If the caller knows the static ranges, this isn't really "allocation".
> I.e. I then question the need for "allocating" in the first place.
>
> >>> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> >>> +        {
> >>> +            printk(XENLOG_ERR
> >>> +                   "Reference count must continuously be zero for free 
> >>> pages"
> >>> +                   "pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> >>> +                   i, mfn_x(page_to_mfn(pg + i)),
> >>> +                   pg[i].count_info, pg[i].tlbflush_timestamp);
> >>> +            BUG();
> >>> +        }
> >>
> >> The same applies here at least until proper locking gets added, which
> >> I guess is happening in the next patch when really it would need to
> >> happen right here.
> >>
> >
> > Ok, I will combine the two commits together and add locking here.
> > I thought the content of this commit was already a little bit too much,
> > so I made adding the proper lock a separate patch. 😉
> >
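
A rough sketch of what I have in mind for the locking (assuming it is fine
to reuse the existing heap_lock here):
"
    spin_lock(&heap_lock);

    for ( i = 0; i < nr_mfns; i++ )
    {
        /* Check and claim each page while holding the lock. */
        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
        {
            printk(XENLOG_ERR
                   "Static pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x\n",
                   i, mfn_x(page_to_mfn(pg + i)),
                   pg[i].count_info, pg[i].tlbflush_timestamp);
            BUG();
        }

        if ( !(memflags & MEMF_no_tlbflush) )
            accumulate_tlbflush(&need_tlbflush, &pg[i],
                                &tlbflush_timestamp);

        pg[i].count_info = PGC_state_inuse | PGC_reserved;
    }

    spin_unlock(&heap_lock);
"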
> >> Furthermore I don't see why you don't fold ASSERT() and if into
> >>
> >>         if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
> >>
> >> After all PGC_reserved is not similar to PGC_need_scrub, which
> >> alloc_heap_pages() masks out the way you also have it here.
> >>
> >
> > I understand that you prefer the if condition phrased as follows:
> >     if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
> > Agreed that PGC_reserved shall have the same position as PGC_state_free.
> > Hmmm, as for why I don't fold the ASSERT(), do you mean
> > ASSERT(pg[i].count_info == (PGC_state_free | PGC_reserved))?
> 
> No. By converting to the suggested construct the ASSERT() disappears by way
> of folding _into_ the if().
> 
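
If I understand correctly, the check would then simply become:
"
        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
        {
            printk(XENLOG_ERR /* message - see below */);
            BUG();
        }
"
with no separate ASSERT() left.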
> >> As to the printk() - the extra verbosity compared to the original
> >> isn't helpful or necessary imo. The message needs to be
> >> distinguishable from the other one, yes, so it would better mention
> >> "static" in some way. But the prefix you have is too long for my taste (and
> lacks a separating blank anyway).
> >>
> >
> > If you don't like the extra verbosity, maybe just " Static pg[%lu] MFN
> > %"PRI_mfn" c=%#lx t=%#x.\n"?
> 
> Something along these lines, yes, but I wonder how difficult it is to take the
> original message and insert "static" at a suitable place.
> Any part you omit would again want justifying. Personally I'd go with "pg[%u]
> static MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n" unless any of the parts
> are provably pointless to log for static pages.
> 
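
Taking your suggested wording, that would give something like the below
(assuming the same fields are worth logging as in alloc_heap_pages(), and
using pg[%lu] since i is unsigned long here):
"
            printk(XENLOG_ERR
                   "pg[%lu] static MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
                   i, mfn_x(page_to_mfn(pg + i)),
                   pg[i].count_info, pg[i].v.free.order,
                   pg[i].u.inuse.type_info, pg[i].tlbflush_timestamp);
"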
> >>> @@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
> >>>      return pg;
> >>>  }
> >>>
> >>> +#ifdef CONFIG_STATIC_ALLOCATION
> >>> +/*
> >>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory,
> >>> + * then assign them to one specific domain #d.
> >>> + * It is the equivalent of alloc_domheap_pages for static memory.
> >>> + */
> >>> +struct page_info *alloc_domstatic_pages(
> >>> +        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
> >>> +        unsigned int memflags)
> >>> +{
> >>> +    struct page_info *pg = NULL;
> >>> +    unsigned long dma_size;
> >>> +
> >>> +    ASSERT(!in_irq());
> >>> +
> >>> +    if ( !dma_bitsize )
> >>> +        memflags &= ~MEMF_no_dma;
> >>> +    else
> >>> +    {
> >>> +        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
> >>> +        {
> >>> +            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
> >>> +            /* Starting address shall meet the DMA limitation. */
> >>> +            if ( mfn_x(smfn) < dma_size )
> >>> +                return NULL;
> >>
> >> I think I did ask this on v1 already: Why the first page? Static
> >> memory regions, unlike buddy allocator zones, can cross power-of-2
> >> address boundaries. Hence it ought to be the last page that gets
> >> checked for fitting address width restriction requirements.
> >>
> >> And then - is this necessary at all? Shouldn't "pre-defined by
> >> configuration using physical address ranges" imply the memory
> >> designated for a guest fits its DMA needs?
> >>
> >
> > Hmmm, in my understanding, here is the DMA restriction when using the
> > buddy allocator:
> >     else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
> >         pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
> > dma_zone is restricting the starting buddy allocator zone, so I am
> > thinking that here, it shall restrict the first page.
> >
> > imo, if we let the user define it, the DMA restriction could also be
> > missed?
> 
> Did you read my earlier reply? Again: The difference is that ordinary
> allocations (buddies) can't cross zone boundaries. Hence it is irrelevant
> if you check DMA properties on the first or last page - both will have the
> same number of significant bits. The same is - afaict - not true for
> static allocation ranges.
> 

True.

Ordinary allocations (buddies) can't cross zone boundaries. So I understand
that, following the logic in "alloc_heap_pages(dma_zone + 1, zone_hi, order,
memflags, d);", the pages with the smallest addresses are allocated from
zone "dma_zone + 1". Like you said, it is irrelevant whether you check DMA
properties on the first or the last page; but imo, no matter first or last
page, both shall be larger than 2^(dma_zone + 1).

Taking dma_bitsize = 32 as an example, the memory allocated under this DMA
restriction by "alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);"
shall all be above 4G.
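
To spell out the arithmetic (a sketch, assuming 4K pages, i.e. PAGE_SHIFT = 12):
"
    /*
     * dma_bitsize = 32, PAGE_SHIFT = 12:
     * dma_size = 1ul << (32 - 12) = 0x100000 page frames,
     * and MFN 0x100000 corresponds to physical address 1ull << 32, i.e. 4G.
     */
    dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
    if ( mfn_x(smfn) < dma_size )   /* starting MFN below 4G => refused */
        return NULL;
"
So a static range such as [0xFFFF0, 0x100010) starts below 4G but ends above
it; its first and last pages then differ in their significant bits, which is
exactly the cross-boundary case you describe.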

That's why I keep checking the first page of the static allocation: I am
following the "more than" logic here.

But you're right. I did a little investigation into the DMA limitation on
ARM. Still taking dma_bitsize = 32 as an example: we actually want the
allocated memory to be below 4G, not above it. So I think the logic behind
the line "alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);" is
not right for ARM, and it shall be changed to
"alloc_heap_pages(zone_lo, dma_zone + 1, order, memflags, d);" as a
correction.

Later, Wei will raise the DMA limitation on ARM as a separate issue for
discussion in the community.

For here, I'll take your suggestion and remove the DMA limitation for static
allocation.

> Of course, as expressed before, a question is whether DMA suitability needs
> checking in the first place for static allocations: I'd view it as 
> mis-configuration
> if a domain was provided memory it can't really use properly.
> 
> Jan

Cheers

Penny Zheng

 

