
[Xen-devel] [RFC Patch v4 0/8] Extend resources to support more vcpus in single VM



This series is based on Paul Durrant's "x86: guest resource mapping"
(https://lists.xenproject.org/archives/html/xen-devel/2017-11/msg01735.html)
and "add vIOMMU support with irq remapping function of virtual VT-d"
(https://lists.xenproject.org/archives/html/xen-devel/2017-11/msg01063.html).

In order to support more vCPUs in an HVM guest, this series removes the vCPU
count constraints imposed by several components:
1. IOREQ server: currently only one IOREQ page is used, which limits
   the maximum number of vCPUs to 128.
2. libacpi: no x2APIC entries are built in the MADT and SRAT.
3. The size of the pre-allocated shadow memory.
4. The way we boot up APs.

This series is an RFC because:
1. I am not sure whether the changes in patch 2 are acceptable.
2. It depends on our vIOMMU patches, which are still under review.

Changes since v3:
        - Addressed Wei's and Roger's comments.
        - Support multiple IOREQ pages. See patches 1 and 2.
        - Boot APs through broadcast. See patch 4.
        - Unified the computation of lapic_id.
        - Added x2APIC entries in the SRAT.
        - Increased shadow memory according to the maximum number of HVM vCPUs.

Changes since v2:
    1) Increase the page pool size when setting the maximum number of vCPUs
    2) Allocate the MADT size according to the APIC ID of each vCPU
    3) Fix some coding style issues.

Changes since v1:
    1) Increase the HAP page pool according to the vCPU count
    2) Use "Processor" syntax to define vCPUs with APIC ID < 255
and "Device" syntax for the other vCPUs in the ACPI DSDT table.
    3) Use XAPIC structures for vCPUs with APIC ID < 255
and x2APIC structures for the other vCPUs in the ACPI MADT table.

This patchset extends some resources (i.e., event channels, HAP memory,
and so on) to support more vCPUs in a single VM.

Chao Gao (6):
  ioreq: remove most 'buf' parameter from static functions
  ioreq: bump the number of IOREQ page to 4 pages
  xl/acpi: unify the computation of lapic_id
  hvmloader: boot cpu through broadcast
  x86/hvm: bump the number of pages of shadow memory
  x86/hvm: bump the maximum number of vcpus to 512

Lan Tianyu (2):
  Tool/ACPI: DSDT extension to support more vcpus
  hvmload: Add x2apic entry support in the MADT and SRAT build

 tools/firmware/hvmloader/apic_regs.h    |   4 +
 tools/firmware/hvmloader/config.h       |   3 +-
 tools/firmware/hvmloader/smp.c          |  64 ++++++++++++--
 tools/libacpi/acpi2_0.h                 |  25 +++++-
 tools/libacpi/build.c                   |  57 +++++++++---
 tools/libacpi/libacpi.h                 |   9 ++
 tools/libacpi/mk_dsdt.c                 |  40 +++++++--
 tools/libxc/include/xc_dom.h            |   2 +-
 tools/libxc/xc_dom_x86.c                |   6 +-
 tools/libxl/libxl_x86_acpi.c            |   2 +-
 xen/arch/x86/hvm/hvm.c                  |   1 +
 xen/arch/x86/hvm/ioreq.c                | 150 ++++++++++++++++++++++----------
 xen/arch/x86/mm/hap/hap.c               |   2 +-
 xen/arch/x86/mm/shadow/common.c         |   2 +-
 xen/include/asm-x86/hvm/domain.h        |   6 +-
 xen/include/public/hvm/hvm_info_table.h |   2 +-
 xen/include/public/hvm/ioreq.h          |   2 +
 xen/include/public/hvm/params.h         |   8 +-
 18 files changed, 303 insertions(+), 82 deletions(-)

-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
