
Re: [PATCH RFC] x86+libxl: correct p2m (shadow) memory pool size calculation


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 22 Apr 2022 13:56:17 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony Perard <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Fri, 22 Apr 2022 11:56:31 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 22.04.2022 13:14, Roger Pau Monné wrote:
> On Fri, Apr 22, 2022 at 12:57:03PM +0200, Jan Beulich wrote:
>> The reference "to shadow the resident processes" is applicable to
>> domains (potentially) running in shadow mode only. Adjust the
>> calculations accordingly.
>>
>> In dom0_paging_pages() also take the opportunity and stop open-coding
>> DIV_ROUND_UP().
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> RFC: I'm pretty sure I can't change a public libxl function (deprecated
>>      or not) like this, but I also don't know how I should go about
>>      doing so (short of introducing a brand new function and leaving the
>>      existing one broken).
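
(Aside on the DIV_ROUND_UP() remark in the patch description above: the
macro, defined in Xen's xen/include/xen/lib.h as ((n) + (d) - 1) / (d),
replaces the open-coded round-up idiom. A minimal illustration with
hypothetical variables, not the actual dom0_paging_pages() code:

    /* Open-coded rounding up: */
    pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;

    /* The same, via the macro: */
    pages = DIV_ROUND_UP(bytes, PAGE_SIZE);
)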
> 
> You have to play with LIBXL_API_VERSION, see for example:
> 
> 1e3304005e libxl: Make libxl_retrieve_domain_configuration async
> 
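
(For reference, the LIBXL_API_VERSION mechanism lets libxl.h keep the old
prototype visible to callers built against an older API while exposing the
new one by default. A minimal sketch, with an illustrative version number
and simplified guard rather than the actual libxl.h contents:

    #if defined(LIBXL_API_VERSION) && LIBXL_API_VERSION < 0x041700
    /* Old signature, preserved for existing callers: */
    unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb,
                                                   unsigned int smp_cpus);
    #else
    /* New signature, the default for new consumers: */
    unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb,
                                                   unsigned int smp_cpus,
                                                   libxl_domain_type type,
                                                   bool hap);
    #endif
)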
>>
>> --- a/tools/include/libxl_utils.h
>> +++ b/tools/include/libxl_utils.h
>> @@ -23,7 +23,10 @@ const
>>  #endif
>>  char *libxl_basename(const char *name); /* returns string from strdup */
>>  
>> -unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus);
>> +unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb,
>> +                                               unsigned int smp_cpus,
>> +                                               libxl_domain_type type,
>> +                                               bool hap);
> 
> Iff we are to change this anyway, we might as well rename the
> function and introduce a proper
> libxl_get_required_{paging,p2m}_memory?
> 
> It seems wrong to have a function explicitly named 'shadow' that takes
> a 'hap' parameter.
> 
> If you introduce a new function there's no need to play with the
> LIBXL_API_VERSION and you can just add a new LIBXL_HAVE_FOO define.
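
(A minimal sketch of that alternative, with a hypothetical define name and
the function name following Roger's suggested rename:

    /* libxl.h: advertise the new call so consumers can test for it
     * at compile time: */
    #define LIBXL_HAVE_GET_REQUIRED_PAGING_MEMORY 1

    unsigned long libxl_get_required_paging_memory(unsigned long maxmem_kb,
                                                   unsigned int smp_cpus,
                                                   libxl_domain_type type,
                                                   bool hap);
)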

With the original function deprecated, I don't see why I'd need to
make a new public function; my fallback plan was (as also suggested
by Jürgen) to make a new internal function.

Jan