
Re: [PATCH v2 2/3] xen/mm: allow deferred scrub of physmap populate allocated pages


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 22 Jan 2026 14:00:24 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 22 Jan 2026 13:00:37 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 22.01.2026 13:48, Roger Pau Monné wrote:
> On Mon, Jan 19, 2026 at 02:00:49PM +0100, Jan Beulich wrote:
>> On 15.01.2026 12:18, Roger Pau Monne wrote:
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -722,6 +722,15 @@ static void _domain_destroy(struct domain *d)
>>>  
>>>      XVFREE(d->console);
>>>  
>>> +    if ( d->pending_scrub )
>>> +    {
>>> +        BUG_ON(d->creation_finished);
>>> +        free_domheap_pages(d->pending_scrub, d->pending_scrub_order);
>>> +        d->pending_scrub = NULL;
>>> +        d->pending_scrub_order = 0;
>>> +        d->pending_scrub_index = 0;
>>> +    }
>>
>> Because of the other zeroing wanted (it's not strictly needed, is it?),
>> it may be a little awkward to use FREE_DOMHEAP_PAGES() here. Yet I would
>> still have recommended avoiding its open-coding, if only we had such a
>> wrapper already.
> 
> I don't mind introducing a FREE_DOMHEAP_PAGES() wrapper in this same
> patch, if you are OK with it.

I'd be fine with that.
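
For reference, a minimal sketch of such a wrapper, modeled on the existing
FREE_XENHEAP_PAGES() / FREE_XENHEAP_PAGE() pair (name and placement merely
illustrative):

    /*
     * Hypothetical wrapper, mirroring FREE_XENHEAP_PAGES(): free the
     * pages and NULL out the pointer in one go.
     */
    #define FREE_DOMHEAP_PAGES(p, o) do { \
        free_domheap_pages(p, o);         \
        (p) = NULL;                       \
    } while ( false )
    #define FREE_DOMHEAP_PAGE(p) FREE_DOMHEAP_PAGES(p, 0)

The hunk above could then shrink to a FREE_DOMHEAP_PAGES() invocation plus
the (strictly optional) zeroing of the order and index fields.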

>> Would this better be done earlier, in domain_kill(), to avoid needlessly
>> holding back memory that isn't going to be used by this domain anymore?
>> That would require the spinlock to be acquired, to guard against a racing
>> stash_allocation(), I suppose. In fact, freeing right in
>> domain_unpause_by_systemcontroller() might be better still (albeit without
>> eliminating the need to do it here or in domain_kill()).
> 
> Even with a lock taken, moving this to domain_kill() would be racy.  A
> rogue toolstack could keep trying to issue populate_physmap hypercalls,
> which would fail in the assign_pages() call but could still leave
> pending pages in d->pending_scrub, as the assign_pages() call happens
> strictly after the scrubbing is done.

As indicated, the freeing here may need to stay. But making an attempt far
earlier may help the system overall.
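
As a sketch of what such an earlier attempt might look like (the helper
name is hypothetical; it assumes, as in the patch itself, that
d->page_alloc_lock guards the pending_scrub fields):

    /*
     * Hypothetical helper: drop a stashed allocation early, e.g. from
     * domain_kill(), so the memory goes back to the heap sooner.  The
     * lock guards against a racing stash_allocation().
     */
    static void free_pending_scrub(struct domain *d)
    {
        rspin_lock(&d->page_alloc_lock);

        if ( d->pending_scrub )
        {
            free_domheap_pages(d->pending_scrub, d->pending_scrub_order);
            d->pending_scrub = NULL;
            d->pending_scrub_order = 0;
            d->pending_scrub_index = 0;
        }

        rspin_unlock(&d->page_alloc_lock);
    }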

>>> +    /*
>>> +     * If there's a pending page to scrub, check that it satisfies the
>>> +     * current request.  If it doesn't, keep it stashed and return NULL.
>>> +     */
>>> +    if ( !d->pending_scrub || d->pending_scrub_order != order ||
>>> +         (node != NUMA_NO_NODE && node != page_to_nid(d->pending_scrub)) )
>>
>> Ah, and MEMF_exact_node is handled in the caller.
>>
>>> +        goto done;
>>> +    else
>>> +    {
>>> +        page = d->pending_scrub;
>>> +        *scrub_index = d->pending_scrub_index;
>>> +    }
>>> +
>>> +    /*
>>> +     * The caller now owns the page, clear stashed information.  Prevent
>>> +     * concurrent usages of get_stashed_allocation() from returning the same
>>> +     * page to different contexts.
>>> +     */
>>> +    d->pending_scrub_index = 0;
>>> +    d->pending_scrub_order = 0;
>>> +    d->pending_scrub = NULL;
>>> +
>>> + done:
>>> +    rspin_unlock(&d->page_alloc_lock);
>>> +
>>> +    return page;
>>> +}
>>
>> Hmm, you free the earlier allocation only in stash_allocation(), i.e. that
>> memory isn't available to fulfill the present request. (I do understand
>> that the freeing there can't be dropped, to deal with possible races
>> caused by the toolstack.)
> 
> Since we expect populate_physmap() to be executed sequentially by the
> toolstack, I would argue it's fine to hold onto that memory.

Here you say "sequentially", just to ...

>  Otherwise
> I could possibly free in get_stashed_allocation() when the request
> doesn't match what's stashed.  I opted for freeing later, in
> stash_allocation(), to perhaps give the other, parallel caller time to
> finish the scrubbing.

... assume non-sequential behavior here. I guess I'm a little confused.
(Yes, freeing right in get_stashed_allocation() is what I'd expect.)
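
I.e., in the mismatch path, something along these lines (a sketch only,
re-using the field names from the patch):

    if ( d->pending_scrub &&
         (d->pending_scrub_order != order ||
          (node != NUMA_NO_NODE && node != page_to_nid(d->pending_scrub))) )
    {
        /*
         * The stash can't satisfy this request: free it right away rather
         * than holding it until the next stash_allocation().
         */
        free_domheap_pages(d->pending_scrub, d->pending_scrub_order);
        d->pending_scrub = NULL;
        d->pending_scrub_order = 0;
        d->pending_scrub_index = 0;
    }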

>> The use of "goto" here also looks a little odd, as it would be easy to get
>> away without. Or else I'd like to ask that the "else" be dropped.
> 
> Hm, OK, let me use an unlock + return and also drop the else then.  I
> think that's clearer.

I think an if() with the condition inverted and a single unlock+return at
the end would be easiest to follow.
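
I.e., roughly (a sketch, assuming page starts out NULL as in the patch's
context):

    rspin_lock(&d->page_alloc_lock);

    /*
     * Only consume the stash if it satisfies the current request;
     * otherwise leave it in place and return NULL.
     */
    if ( d->pending_scrub && d->pending_scrub_order == order &&
         (node == NUMA_NO_NODE || node == page_to_nid(d->pending_scrub)) )
    {
        page = d->pending_scrub;
        *scrub_index = d->pending_scrub_index;

        /*
         * The caller now owns the page; clear the stashed information so
         * concurrent calls can't return the same page to different
         * contexts.
         */
        d->pending_scrub_index = 0;
        d->pending_scrub_order = 0;
        d->pending_scrub = NULL;
    }

    rspin_unlock(&d->page_alloc_lock);

    return page;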

Jan



 

