
Re: [Xen-devel] PVH CPU hotplug design document



>>> On 17.01.17 at 16:27, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 01/17/2017 09:44 AM, Jan Beulich wrote:
>>>>> On 17.01.17 at 15:13, <roger.pau@xxxxxxxxxx> wrote:
>>> There's only one kind of PVHv2 guest that doesn't require ACPI, and
>>> that guest type also doesn't have emulated local APICs. We agreed that
>>> this model was interesting for things like unikernel DomUs, but that's
>>> the only reason why we are providing it. Not that full OSes couldn't
>>> use it, but it seems pointless.
>> You writing things this way makes me notice another possible design
>> issue here: Requiring ACPI is a bad thing imo, with even bare hardware
>> going in different directions for at least some use cases (SFI being one
>> example). Hence I think ACPI should - like on bare hardware - remain
>> an optional thing. Which in turn requires _all_ information obtained from
>> ACPI (if available) to also be available another way. And this other
>> way might be hypercalls in our case.
> 
> 
> At the risk of derailing this thread: why do we need vCPU hotplug for
> dom0 in the first place? What do we gain over "echo {1|0} >
> /sys/devices/system/cpu/cpuX/online" ?
> 
> I can see why this may be needed for domUs, where Xen can enforce the
> number of vCPUs that are allowed to run (which we don't enforce now
> anyway), but why for dom0?

Good that you now ask this too - that's the PV hotplug mechanism,
and I've been saying all along that this should be just fine for PVH
(Dom0 and DomU).

Jan
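For reference, the sysfs path Boris quotes is the standard Linux CPU
hotplug interface, driven entirely by the guest kernel with no ACPI or
hypervisor event involved. A minimal sketch of that path follows; the
`cpu_set_online` helper and the `SYSFS_ROOT` override are illustrative
conveniences, not Xen or kernel code:

```shell
#!/bin/sh
# Sketch of the PV-style hotplug path: the guest kernel takes a vCPU
# on/offline purely through sysfs writes. SYSFS_ROOT is a hypothetical
# override so the sketch can be exercised outside a real guest.
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

cpu_set_online() {
    cpu="$1"; state="$2"   # state: 1 = online, 0 = offline
    ctl="$SYSFS_ROOT/devices/system/cpu/cpu$cpu/online"
    if [ -w "$ctl" ]; then
        # This is the "echo {1|0} > .../cpuX/online" from the thread.
        echo "$state" > "$ctl"
        echo "cpu$cpu set to online=$state"
    else
        # CPU0 typically has no 'online' file, and writes need root.
        echo "cpu$cpu: $ctl not writable (CPU0 or no hotplug support)"
    fi
}

# Example: take CPU 1 offline, then bring it back.
cpu_set_online 1 0
cpu_set_online 1 1
```

Note that this only changes the online state of vCPUs the guest already
knows about, which is exactly why it needs no ACPI support in the guest.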


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

