
Re: [PATCH v4 2/3] xen/mm: allow deferred scrub of physmap populate allocated pages


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 28 Jan 2026 20:06:34 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 28 Jan 2026 19:07:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jan 28, 2026 at 03:46:04PM +0100, Jan Beulich wrote:
> On 28.01.2026 13:03, Roger Pau Monne wrote:
> > @@ -275,7 +339,18 @@ static void populate_physmap(struct memop_args *a)
> >              }
> >              else
> >              {
> > -                page = alloc_domheap_pages(d, a->extent_order, a->memflags);
> > +                unsigned int scrub_start = 0;
> > +                nodeid_t node =
> > +                    (a->memflags & MEMF_exact_node) ? MEMF_get_node(a->memflags)
> > +                                                    : NUMA_NO_NODE;
> > +
> > +                page = get_stashed_allocation(d, a->extent_order, node,
> > +                                              &scrub_start);
> > +
> > +                if ( !page )
> > +                    page = alloc_domheap_pages(d, a->extent_order,
> > +                        a->memflags | (d->creation_finished ? 0
> > +                                                            : MEMF_no_scrub));
> 
> I fear there's a more basic issue here that so far we didn't pay attention to:
> alloc_domheap_pages() is what invokes assign_page(), which in turn resets
> ->count_info for each of the pages. This reset includes setting PGC_allocated,
> which ...
> 
> > @@ -286,6 +361,30 @@ static void populate_physmap(struct memop_args *a)
> >                      goto out;
> >                  }
> >  
> > +                if ( !d->creation_finished )
> > +                {
> > +                    unsigned int dirty_cnt = 0;
> > +
> > +                    /* Check if there's anything to scrub. */
> > +                    for ( j = scrub_start; j < (1U << a->extent_order); j++ )
> > +                    {
> > +                        if ( !test_and_clear_bit(_PGC_need_scrub,
> > +                                                 &page[j].count_info) )
> > +                            continue;
> 
> ... means we will now scrub every page in the block, not just those which
> weren't scrubbed yet, and we end up clearing PGC_allocated. All because of
> PGC_need_scrub aliasing PGC_allocated. I wonder how this didn't end up
> screwing any testing you surely will have done. Or maybe I'm completely off
> here?

Thanks for spotting this!  No, I didn't see any issues.  I don't see
any check for PGC_allocated in free_domheap_pages(), which could
explain why no failures surfaced in testing.

I will have to allocate with MEMF_no_owner and then do the
assign_pages() call from populate_physmap() after the scrubbing is
done.  Maybe that would work.  Memory allocated using MEMF_no_owner
still consumes the claim pool if a domain parameter is passed to
alloc_heap_pages().

Roger.
