
[Xen-devel] [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3



From: Chen Baozi <baozich@xxxxxxxxx>

Currently the number of vCPUs on arm64 with GICv3 is limited to 8, due
to the fixed size of the redistributor MMIO region. Increasing that size
only raises the limit to 16, because of the AFF0 restriction on GICv3.
To create a guest with up to 128 vCPUs, the maximum number that GIC-500
can support, this patchset uses the AFF1 information to build a mapping
between vCPUID and vMPIDR and deals with the related issues.
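
As a rough illustration of that mapping, a minimal sketch written for
this cover letter (not the patches' actual code; the helper names and
the 16-vCPUs-per-AFF1 grouping follow the description above and may
differ from the real implementation):

/* Sketch: pack a logical vCPU ID into the AFF1/AFF0 fields of a vMPIDR.
 * On GICv3 an SGI target list only addresses 16 CPUs per AFF1 group, so
 * AFF0 is limited to 0..15 and AFF1 carries the rest. */
#define MPIDR_AFF0_SHIFT   0
#define MPIDR_AFF1_SHIFT   8
#define VCPUS_PER_AFF1     16   /* AFF0 restriction on GICv3 */

static inline unsigned long vcpuid_to_vaffinity(unsigned int vcpuid)
{
    return ((unsigned long)(vcpuid / VCPUS_PER_AFF1) << MPIDR_AFF1_SHIFT) |
           ((unsigned long)(vcpuid % VCPUS_PER_AFF1) << MPIDR_AFF0_SHIFT);
}

static inline unsigned int vaffinity_to_vcpuid(unsigned long vaff)
{
    unsigned int aff0 = (vaff >> MPIDR_AFF0_SHIFT) & 0xff;
    unsigned int aff1 = (vaff >> MPIDR_AFF1_SHIFT) & 0xff;

    return aff1 * VCPUS_PER_AFF1 + aff0;
}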

These patches are based on Julien's "GICv2 on GICv3" series
and the IROUTER emulation cleanup patch.

Changes from V5:
* Rework gicv3_sgir_to_cpumask in #5.
* Rework #8 to split arch_domain_create into two parts (a sketch of the
  resulting ordering follows this list):
  - arch_domain_preinit to initialise vgic_ops before evtchn_init is
    called
  - the rest of the logic remains in arch_domain_create
* Use a field value in struct vgic_ops instead of a function pointer
  for max_vcpus.
* Minor changes according to previous reviews.
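
To make that ordering concrete, a simplified sketch of the intended call
order in the common domain-creation path (error handling and function
signatures are simplified; only arch_domain_preinit, evtchn_init and
arch_domain_create are names used by the series, the rest is
illustrative):

/* Simplified sketch of the creation order the split aims for; this is
 * not the actual common/domain.c code. */
int domain_create_order_sketch(struct domain *d, unsigned int flags)
{
    int rc;

    /* New hook: set up the vGIC ops (and hence the vCPU limit) early. */
    if ( (rc = arch_domain_preinit(d)) != 0 )
        return rc;

    /* evtchn_init can now rely on the vGIC information being present. */
    if ( (rc = evtchn_init(d)) != 0 )
        return rc;

    /* The rest of the architecture-specific initialisation stays here. */
    return arch_domain_create(d, flags);
}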

Changes from V4:
* Split patch 4/8 of V3 into two parts:
  - Use the cpumask_t type for vcpu_mask in vgic_to_sgi.
  - Use AFF1 when translating ICC_SGI1R_EL1 to a cpumask (a sketch of
    the translation follows this list).
* Use a more efficient algorithm when calculating the cpumask.
* Add a patch to call arch_domain_create before evtchn_init, because
  evtchn_init needs vGIC info which is initialised during
  arch_domain_create.
* Get the max vcpu info from vgic_ops.
* Minor changes according to previous reviews.
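
The AFF1-aware translation mentioned above could look roughly like the
following sketch (the register layout follows the GICv3 spec, with
TargetList in bits [15:0] and AFF1 in bits [23:16]; the function and
constant names are illustrative, not the patch's):

#define ICC_SGI1R_TARGETLIST_MASK  0xffffULL
#define ICC_SGI1R_AFF1_SHIFT       16
#define VCPUS_PER_AFF1             16

/* Sketch: build the destination vCPU mask from AFF1 plus the 16-bit
 * TargetList, instead of looking at the TargetList alone. */
static void sgi1r_to_vcpumask(cpumask_t *mask, uint64_t sgi1r)
{
    unsigned int aff1 = (sgi1r >> ICC_SGI1R_AFF1_SHIFT) & 0xff;
    unsigned int base = aff1 * VCPUS_PER_AFF1;
    uint16_t target_list = sgi1r & ICC_SGI1R_TARGETLIST_MASK;
    unsigned int i;

    cpumask_clear(mask);
    for ( i = 0; i < VCPUS_PER_AFF1; i++ )
        if ( target_list & (1u << i) )
            cpumask_set_cpu(base + i, mask);
}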

Changes from V3:
* Drop the incorrect patch that turned domain_max_vcpus into a macro.
* Change domain_max_vcpus to return a value according to the version
  of the vGIC in use (see the sketch below).
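
Combined with the V5 change that stores max_vcpus as a field in struct
vgic_ops, the end result is roughly the following (the field names are
assumptions based on the description, not necessarily the exact code):

/* Sketch: the per-domain vCPU limit comes from the vGIC in use,
 * e.g. 8 for vGICv2 and up to 128 for vGICv3 with AFF1 support. */
static inline unsigned int domain_max_vcpus(const struct domain *d)
{
    return d->arch.vgic.handler->max_vcpus;
}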

Changes from V2:
* Reorder the patch which increases MAX_VIRT_CPUS to the end to keep
  this series bisectable.
* Drop the dynamic re-distributor region allocation patch in tools.
* Use the cpumask_t type instead of unsigned long in vgic_to_sgi and do
  the translation from GICD_SGIR to vcpu_mask in both vGICv2 and vGICv3.
* Make domain_max_vcpus an alias of max_vcpus in struct domain.

Changes from V1:
* Expand the GICR address space in the guest memory layout to support
  up to 128 redistributors, rather than using dynamic allocation (a
  rough size estimate follows this list).
* Add support for including AFF1 information in the vMPIDR/logical
  vCPUID.
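
For reference, a back-of-the-envelope estimate of the expanded region
(the arithmetic follows the GICv3 architecture, two 64KB redistributor
frames per vCPU; the constant names below are illustrative, the real
guest layout constants live in xen/include/public/arch-arm.h):

/* Each GICv3 redistributor exposes two 64KB frames (RD_base and
 * SGI_base), so 128 vCPUs need 128 * 2 * 64KB = 16MB of guest GICR
 * address space. */
#define GUEST_GICR_FRAME_SIZE   0x10000UL                    /* 64KB */
#define GUEST_GICR_STRIDE       (2 * GUEST_GICR_FRAME_SIZE)  /* 128KB per vCPU */
#define GUEST_GICR_NR_VCPUS     128
#define GUEST_GICR_SIZE         (GUEST_GICR_NR_VCPUS * GUEST_GICR_STRIDE)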

Chen Baozi (10):
  xen/arm: gic-v3: Increase the size of GICR in address space for guest
  xen/arm: Add functions of mapping between vCPUID and virtual affinity
  xen/arm: Use the new functions for vCPUID/vaffinity transformation
  xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi
  xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
  tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU
  xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity
  xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  xen/arm: make domain_max_vcpus return value from vgic_ops
  xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64

 tools/libxl/libxl_arm.c           | 14 ++++++-
 xen/arch/arm/domain.c             | 85 ++++++++++++++++++++++++++-------------
 xen/arch/arm/domain_build.c       | 14 +++++--
 xen/arch/arm/vgic-v2.c            | 19 +++++++--
 xen/arch/arm/vgic-v3.c            | 50 ++++++++++++++++++++---
 xen/arch/arm/vgic.c               | 45 +++++++++------------
 xen/arch/arm/vpsci.c              |  5 +--
 xen/arch/x86/domain.c             |  6 +++
 xen/common/domain.c               |  3 ++
 xen/include/asm-arm/config.h      |  4 ++
 xen/include/asm-arm/domain.h      | 42 ++++++++++++++++++-
 xen/include/asm-arm/gic.h         |  1 +
 xen/include/asm-arm/gic_v3_defs.h |  4 ++
 xen/include/asm-arm/vgic.h        |  4 +-
 xen/include/public/arch-arm.h     |  4 +-
 xen/include/xen/domain.h          |  2 +
 16 files changed, 226 insertions(+), 76 deletions(-)

-- 
2.1.4

