
Re: [PATCH 5/9] xen: introduce assign_pages_nr


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 1 Jul 2021 12:13:24 +0200
  • Cc: Bertrand.Marquis@xxxxxxx, Wei.Chen@xxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, sstabellini@xxxxxxxxxx, Penny Zheng <penny.zheng@xxxxxxx>
  • Delivery-date: Thu, 01 Jul 2021 10:13:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 01.07.2021 11:24, Julien Grall wrote:
> On 01/07/2021 09:26, Jan Beulich wrote:
>> On 30.06.2021 20:29, Julien Grall wrote:
>>> On 10/06/2021 10:49, Jan Beulich wrote:
>>>> On 07.06.2021 04:43, Penny Zheng wrote:
>>>>> Introduce new interface assign_pages_nr to deal with when page number is
>>>>> not in a power-of-two, which will save the trouble each time user needs
>>>>> to split the size in a power of 2 to use assign_pages.
>>>>
>>>> First of all I still don't see why in this one special case it is a
>>>> meaningful burden to do the count-to-order conversion in the caller you
>>>> mean to add,
>>>
>>> This sort of works for one caller. However, I would expect some more
>>> user in the future (we use it for Live-Update).
>>>
>>>> and hence why we really need this new function (to keep it
>>>> simple, you could even have the caller not break down to arbitrary
>>>> power-of-2 chunks, but simply iterate over all individual [order-0]
>>>> pages).
>>>
>>> The function assign_pages() will always use 1U << order (and sadly 1 <<
>>> order). So we would end up converting the count into multiple orders,
>>> only to then directly convert it back to a number. To me, this sounds
>>> rather pointless...
>>>
>>> There is also a slight benefit to calling assign_pages() a single time
>>> during boot, because it reduces the number of times we need to
>>> lock/unlock d->page_alloc_lock.
>>
>> Well, all of this is why I did add ...
>>
>>>> The more that I'm not happy with the chosen name, despite it
>>>> having been suggested during v1 review. _If_ we needed two functions,
>>>> imo they ought to be named assign_page() (dealing with a single page of
>>>> the given order) and assign_pages(). Backporting confusion could be
>>>> helped by altering the order of parameters, such that the compiler
>>>> would point out that adjustments at call sites are needed.
>>
>> ... this. 
> 
> Oh, it wasn't entirely clear whether you were objecting to offering the 
> possibility to pass a number of pages rather than an order.

Easily understood: I was indeed trying to express my preference to stick
to what we have, while trying to suggest a variant that I think I could
live with.

Jan
