
Re: [PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus


  • To: Ian Jackson <iwj@xxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 25 Feb 2021 17:30:18 +0000
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 25 Feb 2021 17:30:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 25/02/2021 17:21, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus"):
>> Curiously absent from the stable API/ABIs is an ability to query the
>> number of vcpus which a domain has.  Emulators need to know this
>> information in particular to know how many struct ioreq's live in the
>> ioreq server mappings.
>>
>> In practice, this forces all userspace to link against libxenctrl to
>> use xc_domain_getinfo(), which rather defeats the purpose of the
>> stable libraries.
> Wat

Yeah...  My reaction was similar.
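For anyone who hasn't suffered it, this is roughly the dance every
emulator has to do today just to learn one integer.  A sketch only,
using libxenctrl's unstable xc_domain_getinfo() (exact fields are from
memory and liable to change between versions, which is rather the
point):

#include <xenctrl.h>

/* Sketch: the unstable-API workaround.  xc_domain_getinfo() returns the
 * number of domains it filled in, starting from 'domid', so we must
 * check we actually got the domain we asked for. */
static int get_nr_vcpus(uint32_t domid, unsigned int *nr)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    xc_dominfo_t info;
    int rc = -1;

    if ( !xch )
        return -1;

    if ( xc_domain_getinfo(xch, domid, 1, &info) == 1 &&
         info.domid == domid )
    {
        *nr = info.max_vcpu_id + 1;  /* maximum id, not online count */
        rc = 0;
    }

    xc_interface_close(xch);
    return rc;
}

All of that, plus a libxenctrl dependency, for what ought to be a single
stable hypercall.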

>
>> For 4.15.  This was a surprise discovery in the massive ABI untangling effort
>> I'm currently doing for XenServer's new build system.
> Given that this is a new feature at a late stage I am going to say
> this:
>
> I will R-A it subject to it getting *two* independent Reviewed-by.
>
> I will try to do one of them myself :-).
>
> ...
>
>> +/*
>> + * XEN_DMOP_nr_vcpus: Query the number of vCPUs a domain has.
>> + *
>> + * The number of vcpus a domain has is fixed from creation time.  This bound
>> + * is applicable to e.g. the vcpuid parameter of XEN_DMOP_inject_event, or
>> + * the number of struct ioreq objects mapped via XENMEM_acquire_resource.
> AIUI from the code, the value is the maximum number of vcpus, in the
> sense that they are not necessarily all online.  In which case I think
> maybe you want to mention that here ?

Yeah - there is no guarantee that they're all online, or running.

Emulators tend to attach before the domain starts executing anyway.  The
important thing they need to do is loop over each struct ioreq in the
ioreq_server mapping, read the event channel port out of it, and bind
that per-vcpu event channel for notification of work to do.
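In code, the attach loop is roughly the following.  A sketch, not lifted
from any particular emulator: it assumes the shared iopage was already
mapped via xenforeignmemory_map_resource() against
XENMEM_resource_ioreq_server, and that nr_vcpus came back from the new
DMOP.

#include <xenevtchn.h>
#include <xen/hvm/ioreq.h>    /* struct shared_iopage, struct ioreq */

/* Bind one interdomain event channel per vcpu slot in the iopage. */
static int bind_ioreq_evtchns(xenevtchn_handle *xce, uint32_t domid,
                              struct shared_iopage *iopage,
                              unsigned int nr_vcpus,
                              evtchn_port_t *local_ports)
{
    for ( unsigned int i = 0; i < nr_vcpus; i++ )
    {
        /* Each vcpu's slot advertises the remote port to bind against. */
        xenevtchn_port_or_error_t port =
            xenevtchn_bind_interdomain(xce, domid,
                                       iopage->vcpu_ioreq[i].vp_eport);

        if ( port < 0 )
            return -1;

        local_ports[i] = port;
    }

    return 0;
}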

The totally gross way of not needing this API is to scan through the
mapping and identify the first struct ioreq which has 0 listed for an
event channel, which is not a construct I wish to promote.
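For completeness, the gross version looks like this (sketch only, and
deliberately not something to copy):

/* Infer the vcpu count by scanning for the first unused slot.  One page
 * of struct ioreq's bounds the scan; a zero vp_eport marks a slot with
 * no vcpu behind it.  Fragile by construction. */
static unsigned int guess_nr_vcpus(const struct shared_iopage *iopage)
{
    unsigned int i, max = 4096 / sizeof(struct ioreq); /* one page */

    for ( i = 0; i < max; i++ )
        if ( iopage->vcpu_ioreq[i].vp_eport == 0 )
            break;

    return i;
}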

>
>> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
>> index 398993d5f4..cbbd20c958 100644
>> --- a/xen/include/xlat.lst
>> +++ b/xen/include/xlat.lst
>> @@ -107,6 +107,7 @@
>>  ?   dm_op_set_pci_intx_level        hvm/dm_op.h
>>  ?   dm_op_set_pci_link_route        hvm/dm_op.h
>>  ?   dm_op_track_dirty_vram          hvm/dm_op.h
>> +?   dm_op_nr_vcpus                  hvm/dm_op.h
>>  !   hvm_altp2m_set_mem_access_multi hvm/hvm_op.h
>>  ?   vcpu_hvm_context                hvm/hvm_vcpu.h
>>  ?   vcpu_hvm_x86_32                 hvm/hvm_vcpu.h
>> -- 
> I have no idea what even.  I read the comment at the top of the file.
>
> So for *everything except the xlat.lst change*
> Reviewed-by: Ian Jackson <iwj@xxxxxxxxxxxxxx>

Thanks.

This is the magic to make this hunk:

@@ -641,6 +651,7 @@ CHECK_dm_op_map_mem_type_to_ioreq_server;
 CHECK_dm_op_remote_shutdown;
 CHECK_dm_op_relocate_memory;
 CHECK_dm_op_pin_memory_cacheattr;
+CHECK_dm_op_nr_vcpus;
 
 int compat_dm_op(domid_t domid,
                  unsigned int nr_bufs,

work, performing a build-time check that the structure layout is
identical between 32-bit and 64-bit builds.
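For anyone who hasn't stared at the compat machinery before: the '?'
line in xlat.lst causes the build to generate a compat_dm_op_nr_vcpus
type with the 32-bit layout, along with the CHECK_dm_op_nr_vcpus macro.
Illustratively (this is the shape of the idea, not the actual generated
code), instantiating the macro boils down to assertions like:

/* Illustrative only.  If the native and compat layouts ever diverge,
 * the build fails rather than the ABI breaking silently. */
static void __maybe_unused check_dm_op_nr_vcpus(void)
{
    BUILD_BUG_ON(sizeof(struct xen_dm_op_nr_vcpus) !=
                 sizeof(struct compat_dm_op_nr_vcpus));
    BUILD_BUG_ON(offsetof(struct xen_dm_op_nr_vcpus, vcpus) !=
                 offsetof(struct compat_dm_op_nr_vcpus, vcpus));
}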

~Andrew
