
Re: [PATCH v4 2/3] xen/mm: allow deferred scrub of physmap populate allocated pages


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 29 Jan 2026 11:52:48 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 29 Jan 2026 10:53:01 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Jan 29, 2026 at 08:53:05AM +0100, Jan Beulich wrote:
> On 28.01.2026 20:06, Roger Pau Monné wrote:
> > On Wed, Jan 28, 2026 at 03:46:04PM +0100, Jan Beulich wrote:
> >> On 28.01.2026 13:03, Roger Pau Monne wrote:
> >>> @@ -275,7 +339,18 @@ static void populate_physmap(struct memop_args *a)
> >>>              }
> >>>              else
> >>>              {
> >>> -                page = alloc_domheap_pages(d, a->extent_order, a->memflags);
> >>> +                unsigned int scrub_start = 0;
> >>> +                nodeid_t node =
> >>> +                    (a->memflags & MEMF_exact_node) ? MEMF_get_node(a->memflags)
> >>> +                                                    : NUMA_NO_NODE;
> >>> +
> >>> +                page = get_stashed_allocation(d, a->extent_order, node,
> >>> +                                              &scrub_start);
> >>> +
> >>> +                if ( !page )
> >>> +                    page = alloc_domheap_pages(d, a->extent_order,
> >>> +                        a->memflags | (d->creation_finished ? 0
> >>> +                                                            : MEMF_no_scrub));
> >>
> >> I fear there's a more basic issue here that so far we didn't pay
> >> attention to: alloc_domheap_pages() is what invokes assign_page(),
> >> which in turn resets ->count_info for each of the pages. This reset
> >> includes setting PGC_allocated, which ...
> >>
> >>> @@ -286,6 +361,30 @@ static void populate_physmap(struct memop_args *a)
> >>>                      goto out;
> >>>                  }
> >>>  
> >>> +                if ( !d->creation_finished )
> >>> +                {
> >>> +                    unsigned int dirty_cnt = 0;
> >>> +
> >>> +                    /* Check if there's anything to scrub. */
> >>> +                    for ( j = scrub_start; j < (1U << a->extent_order); j++ )
> >>> +                    {
> >>> +                        if ( !test_and_clear_bit(_PGC_need_scrub,
> >>> +                                                 &page[j].count_info) )
> >>> +                            continue;
> >>
> >> ... means we will now scrub every page in the block, not just those
> >> which weren't scrubbed yet, and we end up clearing PGC_allocated.
> >> All because of PGC_need_scrub aliasing PGC_allocated. I wonder how
> >> this didn't end up screwing any testing you surely will have done.
> >> Or maybe I'm completely off here?
> > 
> > Thanks for spotting this!  No, I didn't see any issues.  I don't see
> > any check for PGC_allocated in free_domheap_pages(), which could
> > explain the lack of failures?
> 
> Maybe. PGC_allocated consumes a page ref, so I would have expected accounting
> issues.
> 
> > I will have to allocate with MEMF_no_owner and then do the
> > assign_pages() call from populate_physmap() after the scrubbing is
> > done.  Maybe that would work.  Memory allocated using MEMF_no_owner
> > still consumes the claim pool if a domain parameter is passed to
> > alloc_heap_pages().
> 
> Technically this looks like it might work, but it's feeling as if this was
> getting increasingly fragile. I'm also not quite sure whether MEMF_no_owner
> allocations should consume claimed pages. Imo there are arguments both in
> favor and against such behavior.
> 
> We may want to explore the alternative of un-aliasing the two PGC_*.

I expected the PGC_ bits to be fully consumed, but I see there's a
range that's still unused, so it might indeed be easier to remove the
alias.  Let me give that a try.

Thanks, Roger.
