
Re: [Xen-devel] [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus



On 11/23/2016 03:09 AM, Jan Beulich wrote:
>>>> On 23.11.16 at 00:47, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with 
>> XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to 
>> perform two (or more, if >32 VCPUs) hypercalls and the logic on the 
>> hypervisor side is almost the same as the ioreq handling that this 
>> series added in patch 8.
> Why would there be multiple hypercalls needed? (I guess I may need
> to see the prototype to understand.)

The interface is

#define XEN_DOMCTL_acpi_access 81
struct xen_domctl_acpi_access {
    uint8_t rw;       /* read or write */
    uint8_t bytes;    /* access width in bytes */
    uint16_t port;    /* ACPI register (IO port) address */
    uint32_t val;     /* value written / read back */
};

And so as an example, to add VCPU1 to already existing VCPU0:

/* Update the VCPU map */
val = 3;        /* bits 0 and 1 set: VCPU0 and VCPU1 available */
xc_acpi_access(ctx->xch, domid, WRITE, 0xaf00, 1 /* bytes */, &val);

/* Set event status in GPE block */
val = 1 << 2;
xc_acpi_access(ctx->xch, domid, WRITE, 0xafe0, 1 /* bytes */, &val);
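
That is also where the multiple hypercalls come from: with a 32-bit val,
a guest with more than 32 VCPUs needs one map write per 32 VCPUs, plus
the GPE write. As a rough toolstack-side sketch (update_avail_vcpu_map()
is a made-up helper, the contiguous register layout starting at port
0xaf00 is only an assumption, and xc_acpi_access() is the prototype
wrapper used above):

/*
 * Sketch only: push an availability bitmap of nr_vcpus bits in 32-bit
 * chunks, one XEN_DOMCTL_acpi_access per chunk.  Assumes the map
 * registers sit contiguously starting at port 0xaf00.
 */
static int update_avail_vcpu_map(xc_interface *xch, uint32_t domid,
                                 const uint32_t *map, unsigned int nr_vcpus)
{
    unsigned int i, nr_words = (nr_vcpus + 31) / 32;
    int rc = 0;

    for ( i = 0; i < nr_words && !rc; i++ )
    {
        uint32_t val = map[i];

        rc = xc_acpi_access(xch, domid, WRITE, 0xaf00 + i * 4,
                            4 /* bytes */, &val);
    }

    return rc;
}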


If we want to support ACPI registers in memory space then we need to add
a 'uint8_t space' field and extend val to uint64_t; a 64-bit val would
also mean fewer hypercalls for VCPU map updates. (We could, in fact, pass
a pointer to the map, but I think a scalar is cleaner.)
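
For illustration only, the extended layout could look something like the
below (field ordering and padding are just a guess on my part, not a
worked-out interface):

struct xen_domctl_acpi_access {
    uint8_t  rw;
    uint8_t  space;     /* new: IO port vs. memory space */
    uint8_t  bytes;
    uint8_t  pad;
    uint16_t port;      /* would presumably need widening for memory space */
    uint16_t pad2;
    uint64_t val;       /* widened: a 64-bit chunk of the map per call */
};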


>
>> However, I now realized that this interface will not be available to PV 
>> guests (and it will only become available to HVM guests when we move 
>> hotplug from qemu to hypervisor). And it's x86-specific.
> As you make clear below, the PV aspect is likely a non-issue. But
> why is this x86-specific? It's generic ACPI, isn't it?

Mostly because I don't know how ARM handles hotplug. I was told that ARM
does not use PRST; it uses PSCI, which I am not familiar with.

The interface is generic enough to be used by any architecture.

-boris

>
>
>> This means that PV guests will not know what the number of available 
>> VCPUs is and therefore we will not be able to enforce it. OTOH we don't 
>> know how to do that anyway since PV guests bring up all VCPUs and then 
>> offline them.
>>
>> -boris
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

