
Re: [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page


  • To: Penny Zheng <penny.zheng@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 10 Sep 2021 13:08:16 +0200
  • Cc: Bertrand.Marquis@xxxxxxx, Wei.Chen@xxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, sstabellini@xxxxxxxxxx, julien@xxxxxxx
  • Delivery-date: Fri, 10 Sep 2021 11:08:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 10.09.2021 04:52, Penny Zheng wrote:
> In order to deal with the trouble of count-to-order conversion when the
> page count is not a power of two, this commit re-defines assign_pages to
> take a number (nr) of pages, and introduces assign_page for the original
> case of a single order.
> 
> To help avoid backporting confusion, the order of assign_pages's
> parameters is also changed, such that the compiler will point out call
> sites needing adjustment.
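
For illustration of the count-to-order trouble the commit message refers
to: with the order-only interface, a caller holding a non-power-of-two
page count had to decompose it into power-of-two runs. A minimal sketch
(not part of the series), assuming the pre-patch assign_pages() signature
and Xen's flsl():

    /* Hypothetical helper: assign nr pages through the old, order-based
     * interface by splitting the count into power-of-two runs. */
    static int assign_nr_pages(struct domain *d, struct page_info *pg,
                               unsigned long nr, unsigned int memflags)
    {
        while ( nr )
        {
            /* Largest order with 1UL << order <= nr. */
            unsigned int order = flsl(nr) - 1;
            int rc = assign_pages(d, pg, order, memflags);

            if ( rc )
                return rc;

            pg += 1UL << order;
            nr -= 1UL << order;
        }

        return 0;
    }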

Thanks, this now takes care of my primary concern. However, I now have
another one (which I thought I had mentioned before):

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
>  
>  
>  int assign_pages(
> -    struct domain *d,
>      struct page_info *pg,
> -    unsigned int order,
> +    unsigned long nr,

If this really is to be "unsigned long" (and not "unsigned int"), then...

> @@ -2281,7 +2281,7 @@ int assign_pages(
>      {
>          unsigned int extra_pages = 0;
>  
> -        for ( i = 0; i < (1ul << order); i++ )
> +        for ( i = 0; i < nr; i++ )

... you will want to (a) show that there is a need for this in the
remaining patches of the series, and (b) prove that, even though the
remaining patches would use this only at boot/init time, there is no
problem from the loops of 4 billion (or more) iterations that could then
result. On x86 at least I'm sure this would be an issue.

Otherwise I would request that in the subsequent patches requests be
suitably broken up, with process_pending_softirqs() invoked in between
(see the sketch below). That would bring me back to my feeling that the
original assign_pages() is quite good enough: your new code would then
need to split requests anyway, and avoiding the need for that splitting
was the primary argument in favor of the change here.
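
A minimal sketch of such splitting (not from the series; CHUNK_PAGES is a
hypothetical batch size, and the new assign_pages() signature from this
patch is assumed):

    /* Hypothetical bound on how many pages to assign between softirq
     * processing; the value is illustrative only. */
    #define CHUNK_PAGES (1UL << 20)

    static int __init assign_pages_chunked(struct page_info *pg,
                                           unsigned long nr,
                                           struct domain *d,
                                           unsigned int memflags)
    {
        while ( nr )
        {
            unsigned long chunk = min(nr, CHUNK_PAGES);
            int rc = assign_pages(pg, chunk, d, memflags);

            if ( rc )
                return rc;

            pg += chunk;
            nr -= chunk;

            /* Keep the system responsive during long boot-time loops. */
            process_pending_softirqs();
        }

        return 0;
    }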

> @@ -2290,18 +2290,18 @@ int assign_pages(
>  
>          ASSERT(!extra_pages ||
>                 ((memflags & MEMF_no_refcount) &&
> -                extra_pages == 1u << order));
> +                extra_pages == nr));
>      }
>  #endif
>  
>      if ( pg[0].count_info & PGC_extra )
>      {
> -        d->extra_pages += 1u << order;
> +        d->extra_pages += nr;
>          memflags &= ~MEMF_no_refcount;
>      }
>      else if ( !(memflags & MEMF_no_refcount) )
>      {
> -        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
> +        unsigned int tot_pages = domain_tot_pages(d) + nr;
>  
>          if ( unlikely(tot_pages > d->max_pages) )
>          {
> @@ -2313,10 +2313,10 @@ int assign_pages(
>      }
>  
>      if ( !(memflags & MEMF_no_refcount) &&
> -         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
> +         unlikely(domain_adjust_tot_pages(d, nr) == nr) )
>          get_knownalive_domain(d);
>  
> -    for ( i = 0; i < (1 << order); i++ )
> +    for ( i = 0; i < nr; i++ )
>      {
>          ASSERT(page_get_owner(&pg[i]) == NULL);
>          page_set_owner(&pg[i], d);
> @@ -2331,6 +2331,11 @@ int assign_pages(
>      return rc;
>  }
>  
> +int assign_page(struct page_info *pg, unsigned int order, struct domain *d,
> +                unsigned int memflags)
> +{
> +    return assign_pages(pg, 1UL << order, d, memflags);
> +}
>  
>  struct page_info *alloc_domheap_pages(
>      struct domain *d, unsigned int order, unsigned int memflags)
> @@ -2373,7 +2378,7 @@ struct page_info *alloc_domheap_pages(
>                  pg[i].count_info = PGC_extra;
>              }
>          }
> -        if ( assign_pages(d, pg, order, memflags) )
> +        if ( assign_page(pg, order, d, memflags) )

Note that here, for example, order is necessarily no larger than
MAX_ORDER.
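
In other words, for call sites like this one the iteration count is
bounded by 1UL << MAX_ORDER and fits comfortably in unsigned int. A
one-line illustration (assuming Xen's BUILD_BUG_ON() and the default
MAX_ORDER of 20):

    /* order <= MAX_ORDER, so 1UL << order fits in unsigned int. */
    BUILD_BUG_ON(MAX_ORDER >= sizeof(unsigned int) * 8);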

Jan
