
Re: [Xen-devel] [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus

On 11/23/2016 09:16 AM, Boris Ostrovsky wrote:
On 11/23/2016 08:58 AM, Jan Beulich wrote:
On 23.11.16 at 14:33, <boris.ostrovsky@xxxxxxxxxx> wrote:
On 11/23/2016 03:09 AM, Jan Beulich wrote:
On 23.11.16 at 00:47, <boris.ostrovsky@xxxxxxxxxx> wrote:
I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with
XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to
perform two (or more, if >32 VCPUs) hypercalls and the logic on the
hypervisor side is almost the same as the ioreq handling that this
series added in patch 8.
Why would there be multiple hypercalls needed? (I guess I may need
to see the prototype to understand.)
The interface is

#define XEN_DOMCTL_acpi_access 81
struct xen_domctl_acpi_access {
    uint8_t rw;
    uint8_t bytes;
    uint16_t port;
    uint32_t val;
};

And so as an example, to add VCPU1 to already existing VCPU0:

/* Update the VCPU map */
val = 3;
xc_acpi_access(ctx->xch, domid, WRITE, 0xaf00, 1 /* bytes */, &val);

/* Set event status in GPE block */
val = 1 << 2;
xc_acpi_access(ctx->xch, domid, WRITE, 0xafe0, 1 /* bytes */, &val);
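On the hypervisor side, handling such a write might look roughly like the sketch below. The names (`acpi_state`, `acpi_write_byte`) and the port constants' layout are illustrative assumptions, not the actual implementation in the series:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ACPI_CPU_MAP_PORT   0xaf00
#define ACPI_GPE0_STS_PORT  0xafe0

/* Illustrative per-domain ACPI state; the real layout would live in
 * hvm_domain as discussed below. */
struct acpi_state {
    uint8_t cpu_map[16];   /* one bit per VCPU, enough for 128 VCPUs */
    uint8_t gpe0_sts;
};

/* Sketch of a byte-wide ACPI register write as the domctl handler
 * might dispatch it. */
static void acpi_write_byte(struct acpi_state *s, uint16_t port, uint8_t val)
{
    if (port >= ACPI_CPU_MAP_PORT &&
        port < ACPI_CPU_MAP_PORT + sizeof(s->cpu_map))
        s->cpu_map[port - ACPI_CPU_MAP_PORT] = val;
    else if (port == ACPI_GPE0_STS_PORT)
        s->gpe0_sts |= val;   /* status bits accumulate; guest clears them */
}
```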


If we want to support ACPI registers in memory space then we need to add
'uint8_t space' and extend val to uint64_t.

This obviously was meant to be 'port', not 'val'.
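For illustration, an extended interface along those lines might look like this. This is a sketch only; the field names and the `space` encoding are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: XEN_DOMCTL_acpi_access extended to cover memory-mapped ACPI
 * registers.  'space' selects I/O port vs. system memory, and the
 * register address is widened to 64 bits as discussed above. */
struct xen_domctl_acpi_access {
    uint8_t  rw;
    uint8_t  space;    /* e.g. 0 = system I/O, 1 = system memory */
    uint8_t  bytes;
    uint8_t  pad;
    uint32_t val;
    uint64_t address;  /* was 'uint16_t port' */
};
```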

It's a domctl, so we can change it down the road, but of course it
would be nice if this supported a proper GAS (without access
width I guess) from the beginning.

We also may want to make val
a uint64_t to have fewer hypercalls for VCPU map updates. (We could, in
fact, pass a pointer to the map but I think a scalar is cleaner.)
Well, mostly re-using GAS layout, we'd allow for up to 255 bits
per operation. That might be fine for now, but the need for the
caller to break things up seems awkward to me, so I'd in fact
lean towards passing a handle.

I haven't thought about representing this as proper GAS. Then it looks
like we can have up to 256 bits worth of data and then pass the handle.
We may still need to make more than one call to read or write full VCPU
map but that should be an exceedingly rare event, e.g. when we add
more than 256 VCPUs at once. And since we can only have 128 VCPUs now we
don't need to support this in the toolstack right away.
Of course, this still requires two hypercalls --- one for map update and
one for GPE.
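Put together, a GAS-flavoured variant passing a handle might look roughly like the following. This is a sketch under the assumptions above: `XEN_GUEST_HANDLE_64` is stubbed here as a plain 64-bit value, and the field names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for Xen's XEN_GUEST_HANDLE_64(uint8) guest pointer type. */
typedef uint64_t guest_handle_t;

/* Sketch: access description modelled on the ACPI Generic Address
 * Structure (GAS), with the data passed via a handle rather than an
 * inline scalar, so one call can cover up to 255 bits. */
struct xen_domctl_acpi_access {
    uint8_t  space_id;     /* system I/O vs. system memory */
    uint8_t  rw;
    uint8_t  bit_width;    /* up to 255 bits per operation */
    uint8_t  bit_offset;
    uint32_t pad;
    uint64_t address;
    guest_handle_t val;    /* guest buffer holding the data */
};
```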


BTW, another thing I am afraid I will have to do is extract hvm_hw_timer_pm from PMTState and move it up to hvm_domain.

It already has pm1a_{sts|en} and this series adds its own version of that register, which doesn't make sense. Instead, both will refer to
hvm_domain.acpi (which I will add, including tmr_val).
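As a rough sketch of that consolidation (the field names are assumptions based on the registers mentioned above, not the actual patch):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: ACPI register state hoisted out of PMTState into hvm_domain,
 * so the PM1a event block and the new GPE/VCPU-map registers share one
 * home.  Names are illustrative. */
struct hvm_acpi {
    uint16_t pm1a_sts;   /* previously in hvm_hw_timer_pm / PMTState */
    uint16_t pm1a_en;
    uint32_t tmr_val;    /* PM timer value, also moved here */
    uint8_t  gpe0_sts;
    uint8_t  gpe0_en;
};
```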

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

