
Re: [PATCH v3 2/3] xen/mm: allow deferred scrub of physmap populate allocated pages


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 27 Jan 2026 16:49:10 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 27 Jan 2026 15:49:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 27.01.2026 16:01, Roger Pau Monné wrote:
> On Tue, Jan 27, 2026 at 12:06:32PM +0100, Jan Beulich wrote:
>> On 27.01.2026 11:40, Roger Pau Monné wrote:
>>> On Mon, Jan 26, 2026 at 12:14:35PM +0100, Jan Beulich wrote:
>>>> On 22.01.2026 18:38, Roger Pau Monne wrote:
>>>>> --- a/xen/common/memory.c
>>>>> +++ b/xen/common/memory.c
>>>>> @@ -159,6 +159,66 @@ static void increase_reservation(struct memop_args *a)
>>>>>      a->nr_done = i;
>>>>>  }
>>>>>  
>>>>> +/*
>>>>> + * Temporary storage for a domain assigned page that's not been fully scrubbed.
>>>>> + * Stored pages must be domheap ones.
>>>>> + *
>>>>> + * The stashed page can be freed at any time by Xen, the caller must pass the
>>>>> + * order and NUMA node requirement to the fetch function to ensure the
>>>>> + * currently stashed page matches its requirements.
>>>>> + */
>>>>> +static void stash_allocation(struct domain *d, struct page_info *page,
>>>>> +                             unsigned int order, unsigned int scrub_index)
>>>>> +{
>>>>> +    rspin_lock(&d->page_alloc_lock);
>>>>> +
>>>>> +    /*
>>>>> +     * Drop any stashed allocation to accommodate the current one.  This
>>>>> +     * interface is designed to be used for single-threaded domain creation.
>>>>> +     */
>>>>> +    if ( d->pending_scrub )
>>>>> +        free_domheap_pages(d->pending_scrub, d->pending_scrub_order);
>>>>
>>>> Didn't you indicate you'd move the freeing ...
>>>>
>>>>> +    d->pending_scrub_index = scrub_index;
>>>>> +    d->pending_scrub_order = order;
>>>>> +    d->pending_scrub = page;
>>>>> +
>>>>> +    rspin_unlock(&d->page_alloc_lock);
>>>>> +}
>>>>> +
>>>>> +static struct page_info *get_stashed_allocation(struct domain *d,
>>>>> +                                                unsigned int order,
>>>>> +                                                nodeid_t node,
>>>>> +                                                unsigned int *scrub_index)
>>>>> +{
>>>>
>>>> ... into this function?
>>>
>>> I could add freeing to get_stashed_allocation(), but it seems
>>> pointless, because the freeing in stash_allocation() will have to stay
>>> to deal with concurrent callers.  Even if a context frees the stashed
>>> page in get_stashed_allocation() there's no guarantee the field will
>>> still be free when stash_allocation() is called, as another concurrent
>>> thread might have stashed a page in the meantime.
>>
>> Hmm, yes, yet still ...
>>
>>> I think it's best to consistently do it only in stash_allocation(), as
>>> that's clearer.
>>
>> ... no, as (to me) "clearer" is only a secondary criterion here. What I'm
>> worried about is potentially holding back a 1GB page when the new request
>> is, say, a 2MB one, and then not having enough memory available just
>> because of that detained huge page.
> 
> If that's really the case then either the caller is using a broken
> toolstack that's making bogus populate physmap calls, or the caller is
> attempting to populate the physmap in parallel and hasn't properly
> checked whether there's enough free memory in the system.  In the
> latter case the physmap population would end up failing anyway.
> 
>> In fact, if stash_allocation() finds the field re-populated despite
>> get_stashed_allocation() having cleared it, it's not quite clear which
>> of the two allocations should actually be undone. The other vCPU may be
>> quicker in retrying, and, to avoid ping-pong, freeing the new (local)
>> allocation rather than stashing it might possibly be better. Thoughts?
> 
> TBH I didn't give it much thought, as in any case progress when
> attempting to populate the physmap in parallel will be far from
> optimal.  If you prefer I can switch to the approach where the freeing
> of the stashed page is done in get_stashed_allocation() and
> stash_allocation() instead frees the current one if it finds the field
> is already in use.

I'd prefer that, yes. Of course if others were to agree with your take ...
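
Just to make sure we mean the same thing, roughly something like the below
is what I have in mind (entirely untested sketch, merely re-using the field
and function names from your patch; the NUMA node check is only hinted at,
and how the caller learns of a freed page is left aside):

static struct page_info *get_stashed_allocation(struct domain *d,
                                                unsigned int order,
                                                nodeid_t node,
                                                unsigned int *scrub_index)
{
    struct page_info *page = NULL;

    rspin_lock(&d->page_alloc_lock);

    if ( d->pending_scrub )
    {
        if ( d->pending_scrub_order != order /* || <node mismatch> */ )
        {
            /*
             * Release a stashed page not matching the present request, so
             * a (possibly large) page can't be detained across unrelated
             * allocations.
             */
            free_domheap_pages(d->pending_scrub, d->pending_scrub_order);
            d->pending_scrub = NULL;
        }
        else
        {
            page = d->pending_scrub;
            *scrub_index = d->pending_scrub_index;
            d->pending_scrub = NULL;
        }
    }

    rspin_unlock(&d->page_alloc_lock);

    return page;
}

static void stash_allocation(struct domain *d, struct page_info *page,
                             unsigned int order, unsigned int scrub_index)
{
    rspin_lock(&d->page_alloc_lock);

    if ( d->pending_scrub )
        /*
         * Another vCPU stashed a page in the meantime - free the local
         * allocation rather than the previously stashed one, to avoid
         * ping-pong freeing.
         */
        free_domheap_pages(page, order);
    else
    {
        d->pending_scrub_index = scrub_index;
        d->pending_scrub_order = order;
        d->pending_scrub = page;
    }

    rspin_unlock(&d->page_alloc_lock);
}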

Jan



 

