
Re: [Xen-devel] PVH CPU hotplug design document

On Fri, Jan 13, 2017 at 08:27:30AM -0700, Jan Beulich wrote:
> >>> On 12.01.17 at 20:00, <andrew.cooper3@xxxxxxxxxx> wrote:
> > On 12/01/17 12:13, Roger Pau Monné wrote:
> >> Extra entries are going to be added for each vCPU available to the
> >> hardware domain, up to the maximum number of supported vCPUs. Note that
> >> supported vCPUs might be different than enabled vCPUs, so it's possible
> >> that some of these entries are also going to be marked as disabled. The
> >> entries for vCPUs on the MADT are going to use a processor local x2APIC
> >> structure, and the ACPI processor ID of the first vCPU is going to be
> >> UINT32_MAX - HVM_MAX_VCPUS, in order to avoid clashes with IDs of
> >> pCPUs.
> > 
> > This is slightly problematic.  There is no restriction (so far as I am
> > aware) on which ACPI IDs the firmware picks for its objects.  They need
> > not be consecutive, logical, or start from 0.
> > 
> > If STAO is being extended to list the IDs of the physical processor
> > objects, we should go one step further and explicitly list the IDs of
> > the virtual processor objects.  This leaves us flexibility if we have to
> > avoid awkward firmware ID layouts.
> 
> I don't think we should do this - vCPU IDs are already in MADT. I do,
> however, think that we shouldn't name any specific IDs we mean to
> use for the vCPU-s, but rather merely guarantee that there won't be
> any overlap with the pCPU ones.

I also don't see the point in listing both pCPUs and vCPUs in the STAO. If a
processor ACPI ID is not listed as a pCPU, then it's a vCPU. I don't see the
case where a processor object won't be listed as either a pCPU or a vCPU,
which renders one of the lists moot, since each can be derived from the other.

> >> In order to be able to perform vCPU hotplug, the vCPUs must have an ACPI
> >> processor object in the ACPI namespace, so that the OSPM can request
> >> notifications and get the value of the \_STA and \_MAT methods. This can be
> >> problematic because Xen doesn't know the ACPI name of the other processor
> >> objects, so blindly adding new ones can create namespace clashes.
> >>
> >> This can be solved by using a different ACPI name in order to describe
> >> vCPUs in the ACPI namespace. Most hardware vendors tend to use CPU or
> >> PR prefixes for the processor objects, so using a 'VP' (ie: Virtual
> >> Processor) prefix should prevent clashes.
> > 
> > One system I have to hand (with more than 255 pcpus) uses Cxxx.
> > 
> > To avoid namespace collisions, I can't see any option but to parse the
> > DSDT/SSDTs to at least confirm that VPxx is available to use.
> 
> And additionally using a two character name prefix would significantly
> limit the number of vCPU-s we would be able to support going forward.
> Just like above, I don't think we should specify the name here at all,
> allowing dynamic picking of suitable ones.

See my suggestion in another reply about introducing a _SB.XEN bus.
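
For illustration, here is a minimal ASL sketch of what such a container
could look like. This is only a sketch of the idea: the XEN device name,
the VPxx naming and every field value below are assumptions on my part,
not settled parts of the design, and the names would likely need to be
picked dynamically, as Jan suggests above.

    DefinitionBlock ("ssdt.aml", "SSDT", 2, "Xen", "HVM", 0)
    {
        Scope (\_SB)
        {
            Device (XEN)                     // hypothetical vCPU container
            {
                Name (_HID, "ACPI0004")      // generic container device
                Device (VP00)                // first vCPU object
                {
                    Name (_HID, "ACPI0007")  // processor device
                    Name (_UID, Zero)        // placeholder ACPI processor ID
                    Method (_STA, 0, NotSerialized)
                    {
                        // Placeholder: a real method would consult the Xen
                        // data memory area to report enabled/disabled.
                        Return (0x0F)
                    }
                    Method (_MAT, 0, NotSerialized)
                    {
                        // Processor local x2APIC MADT entry: type 9,
                        // length 16; remaining fields zeroed for brevity.
                        Return (Buffer (16) {0x09, 0x10})
                    }
                }
            }
        }
    }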

> [...]
> >> Since the position of the XEN data memory area is not known, the
> >> hypervisor will have to replace the address 0xdeadbeef with the actual
> >> memory address where this structure has been copied. This will involve
> >> a memory search of the AML code resulting from the compilation of the
> >> above ASL snippet.
> > 
> > This is also slightly risky.  If we need to do this, can we get a
> > relocation list from the compiled table from iasl?
> 
> I expect iasl can't do that, especially since there's not actually any
> relocation involved here. I guess we'd need a double compilation
> approach, where a different address is specified for each of the two
> builds. The diff of the two would then allow us to create a relocation
> list.

That sounds sensible, thanks for the suggestion.
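
To make that concrete, the snippet in question would be of roughly this
shape (the field names and sizes here are placeholders of my own, not
the actual layout):

    DefinitionBlock ("xen.aml", "SSDT", 2, "Xen", "HVM", 0)
    {
        Scope (\_SB)
        {
            // Compiled once with 0xDEADBEEF and once with a second dummy
            // address (say 0xFEEDC0DE); the byte offsets at which the two
            // resulting AML blobs differ form the relocation list for the
            // address constant.
            OperationRegion (XEND, SystemMemory, 0xDEADBEEF, 0x1000)
            Field (XEND, ByteAcc, NoLock, Preserve)
            {
                NCPU, 16,    // hypothetical: number of online vCPUs
                FLAG, 16     // hypothetical: hotplug event flags
            }
        }
    }

The toolstack would then patch those offsets with the real address of the
data area when the table is loaded, instead of blindly searching the AML
for the magic value.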

Roger.

