
[Xen-devel] [PATCH v3 00/11] PVH VCPU hotplug support



This series adds support for ACPI-based VCPU hotplug for unprivileged
PVH guests.

A new domctl, XEN_DOMCTL_set_avail_vcpus, is introduced; it is called
during guest creation and in response to the 'xl vcpu-set' command.
The domctl updates GPE0's status and enable registers and sends an SCI
to the guest using the (newly added) VIRQ_SCI.

I decided not to implement enforcement of avail_vcpus in this series
after realizing that HVM guests (just like PV) also start with all
max_vcpus online initially and then offline them
(firmware/hvmloader/smp.c:smp_initialise()). Given that, and the fact
that HVM hotplug only works in one direction (VCPUs can be added but
not removed), it is clear that hotplug needs fixing in general, and
adding avail_vcpus enforcement should be part of that fix.

I also didn't extend getdomaininfo to report the number of available
VCPUs, mostly because I haven't needed it so far. For live migration
(where Andrew thought it might be needed) we rely on xenstore's
"cpu/available" value to keep the hypervisor up to date
(see libxl__update_avail_vcpus_xenstore()). I should note that I
haven't tested live migration, only save/restore, but I believe it is
the same codepath.

Boris Ostrovsky (11):
  x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  acpi: Define ACPI IO registers for PVH guests
  pvh: Set online VCPU map to avail_vcpus
  acpi: Make pmtimer optional in FADT
  acpi: Power and Sleep ACPI buttons are not emulated for PVH guests
  acpi: PVH guests need _E02 method
  pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  pvh/acpi: Handle ACPI accesses for PVH guests
  events/x86: Define SCI virtual interrupt
  pvh: Send an SCI on VCPU hotplug event
  docs: Describe PVHv2's VCPU hotplug procedure

 docs/misc/hvmlite.markdown            |  12 ++++
 tools/firmware/hvmloader/util.c       |   4 +-
 tools/flask/policy/modules/dom0.te    |   2 +-
 tools/flask/policy/modules/xen.if     |   4 +-
 tools/libacpi/build.c                 |   7 +++
 tools/libacpi/libacpi.h               |   2 +
 tools/libacpi/mk_dsdt.c               |  17 +++---
 tools/libacpi/static_tables.c         |  20 ++-----
 tools/libxc/include/xenctrl.h         |   5 ++
 tools/libxc/xc_domain.c               |  12 ++++
 tools/libxl/libxl.c                   |   7 +++
 tools/libxl/libxl_dom.c               |   7 +++
 tools/libxl/libxl_x86_acpi.c          |   6 +-
 xen/arch/arm/domain.c                 |   5 ++
 xen/arch/x86/domain.c                 |  16 ++++++
 xen/arch/x86/hvm/ioreq.c              | 103 ++++++++++++++++++++++++++++++++++
 xen/common/domctl.c                   |  26 +++++++++
 xen/common/event_channel.c            |   7 ++-
 xen/include/asm-x86/domain.h          |   1 +
 xen/include/asm-x86/hvm/domain.h      |   6 ++
 xen/include/public/arch-x86/xen-mca.h |   2 -
 xen/include/public/arch-x86/xen.h     |   7 ++-
 xen/include/public/domctl.h           |   9 +++
 xen/include/public/hvm/ioreq.h        |  25 +++++++++
 xen/include/xen/domain.h              |   1 +
 xen/include/xen/event.h               |   8 +++
 xen/include/xen/sched.h               |   6 ++
 xen/xsm/flask/hooks.c                 |   3 +
 xen/xsm/flask/policy/access_vectors   |   2 +
 29 files changed, 299 insertions(+), 33 deletions(-)

-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

