
Re: [Xen-devel] [RFC PATCH 00/23] xen/vIOMMU: Add vIOMMU support with irq remapping function on Intel platform



On Fri, Mar 17, 2017 at 07:27:00PM +0800, Lan Tianyu wrote:
> This patchset introduces a vIOMMU framework and adds virtual VT-d
> interrupt remapping support according to the "Xen virtual IOMMU high
> level design doc V3"
> (https://lists.xenproject.org/archives/html/xen-devel/2016-11/msg01391.html).
> 
> - vIOMMU framework
> The new framework provides viommu_ops and helper functions to abstract
> vIOMMU operations (e.g. create, destroy, handle irq remapping requests
> and so on). Vendors (Intel, ARM, AMD and so on) can implement their own
> vIOMMU callbacks.
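
For other readers following along: if I understand the design doc correctly,
the ops table would look roughly like the sketch below. The names here are
my guesses, not the series' actual definitions (those live in the new
xen/include/xen/viommu.h):

    /* Illustrative sketch only; struct and callback names are assumptions. */
    struct domain;
    struct viommu;
    struct irq_remapping_request;

    struct viommu_ops {
        /* Instantiate the vendor vIOMMU for a domain. */
        int (*create)(struct domain *d, struct viommu *viommu);
        /* Tear the vIOMMU down again. */
        int (*destroy)(struct viommu *viommu);
        /* Translate a remappable interrupt request (MSI or IOAPIC RTE). */
        int (*handle_irq_request)(struct domain *d,
                                  struct irq_remapping_request *request);
    };

Each vendor implementation (the virtual VT-d here, ARM or AMD later) would
then register its own instance of such a table with the common viommu.c
layer.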
> 
> - Xen vIOMMU device model in QEMU
> It is in charge of creating/destroying the vIOMMU in the hypervisor via
> the new vIOMMU DMOP hypercalls. It will also be required to pass virtual
> devices' DMA requests to the hypervisor once the IOVA (DMA request
> without PASID) function is enabled.
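
As a rough illustration of that control path: the authoritative payload
layout is in the series' xen/include/public/hvm/dm_op.h changes; the field
names below are only assumptions on my part:

    /* Hedged sketch of a create-vIOMMU DMOP payload; names/layout assumed. */
    #include <stdint.h>

    struct xen_dm_op_create_viommu {
        uint64_t base_address;  /* guest-physical base of the vIOMMU MMIO */
        uint64_t capabilities;  /* e.g. interrupt remapping support */
        uint32_t viommu_id;     /* out: handle returned by the hypervisor */
    };

QEMU would presumably issue this through the new libxendevicemodel entry
points, with teardown going through a matching destroy op carrying the
returned viommu_id.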
> 
> - Virtual VTD
> This patchset enables the irq remapping function, covering both MSI and
> IOAPIC interrupts. Posted-interrupt mode emulation, and running virtual
> VT-d with posted-interrupt mode enabled on the host, are not supported
> yet; they will be added later.
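
Since the series decodes interrupt attributes from the IRTE (patch
"X86/vvtd: decode interrupt attribute from IRTE"), a condensed sketch of
that decoding, following the VT-d spec's remapped-format entry layout, might
look like this (the real code is in the series' vvtd.c):

    #include <stdint.h>

    /* Attributes carried in the low 64 bits of a remapped-format IRTE. */
    struct irte_attrs {
        uint8_t  vector;
        uint8_t  delivery_mode;
        uint8_t  trigger_mode;
        uint32_t dest_id;   /* interpretation depends on xAPIC/x2APIC mode */
    };

    static void decode_irte(uint64_t irte_lo, struct irte_attrs *out)
    {
        /* Bit positions per the VT-d spec for remapped interrupts. */
        out->trigger_mode  = (irte_lo >> 4) & 0x1;            /* TM, bit 4 */
        out->delivery_mode = (irte_lo >> 5) & 0x7;            /* DLM, bits 7:5 */
        out->vector        = (irte_lo >> 16) & 0xff;          /* V, bits 23:16 */
        out->dest_id       = (uint32_t)(irte_lo >> 32);       /* DST, bits 63:32 */
    }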
> 
> Chao Gao (19):
>   Tools/libxc: Add viommu operations in libxc
>   Tools/libacpi: Add DMA remapping reporting (DMAR) ACPI table
>     structures
>   Tools/libacpi: Add new fields in acpi_config to build DMAR table
>   Tools/libacpi: Add a user configurable parameter to control vIOMMU
>     attributes
>   Tools/libxl: Inform device model to create a guest with a vIOMMU
>     device
>   x86/hvm: Introduce an emulated VTD for HVM
>   X86/vvtd: Add MMIO handler for VVTD
>   X86/vvtd: Set Interrupt Remapping Table Pointer through GCMD
>   X86/vvtd: Process interrupt remapping request
>   X86/vvtd: decode interrupt attribute from IRTE
>   X86/vioapic: Hook interrupt delivery of vIOAPIC
>   X86/vvtd: Enable Queued Invalidation through GCMD
>   X86/vvtd: Enable Interrupt Remapping through GCMD
>   x86/vpt: Get interrupt vector through a vioapic interface
>   passthrough: move some fields of hvm_gmsi_info to a sub-structure
>   Tools/libxc: Add a new interface to bind msi-ir with pirq
>   X86/vmsi: Hook guest MSI injection
>   X86/vvtd: Handle interrupt translation faults
>   X86/vvtd: Add queued invalidation (QI) support
> 
> Lan Tianyu (4):
>   VIOMMU: Add vIOMMU helper functions to create, destroy and query
>     capabilities
>   DMOP: Introduce new DMOP commands for vIOMMU support
>   VIOMMU: Add irq request callback to deal with irq remapping
>   VIOMMU: Add get irq info callback to convert irq remapping request
> 
>  tools/libacpi/acpi2_0.h                         |   45 +
>  tools/libacpi/build.c                           |   58 ++
>  tools/libacpi/libacpi.h                         |   12 +
>  tools/libs/devicemodel/core.c                   |   69 ++
>  tools/libs/devicemodel/include/xendevicemodel.h |   35 +
>  tools/libs/devicemodel/libxendevicemodel.map    |    3 +
>  tools/libxc/include/xenctrl.h                   |   17 +
>  tools/libxc/include/xenctrl_compat.h            |    5 +
>  tools/libxc/xc_devicemodel_compat.c             |   18 +
>  tools/libxc/xc_domain.c                         |   55 +
>  tools/libxl/libxl_create.c                      |   12 +-
>  tools/libxl/libxl_dm.c                          |    9 +
>  tools/libxl/libxl_dom.c                         |   85 ++
>  tools/libxl/libxl_types.idl                     |    8 +
>  tools/xl/xl_parse.c                             |   54 +
>  xen/arch/x86/Makefile                           |    1 +
>  xen/arch/x86/hvm/Makefile                       |    1 +
>  xen/arch/x86/hvm/dm.c                           |   29 +
>  xen/arch/x86/hvm/irq.c                          |   10 +
>  xen/arch/x86/hvm/vioapic.c                      |   36 +
>  xen/arch/x86/hvm/vmsi.c                         |   17 +-
>  xen/arch/x86/hvm/vpt.c                          |    2 +-
>  xen/arch/x86/hvm/vvtd.c                         | 1229 +++++++++++++++++++++++
>  xen/arch/x86/viommu.c                           |   40 +
>  xen/common/Makefile                             |    1 +
>  xen/common/domain.c                             |    3 +
>  xen/common/viommu.c                             |  119 +++
>  xen/drivers/passthrough/io.c                    |  183 +++-
>  xen/drivers/passthrough/vtd/iommu.h             |  213 +++-
>  xen/include/asm-arm/viommu.h                    |   38 +
>  xen/include/asm-x86/hvm/vioapic.h               |    1 +
>  xen/include/asm-x86/msi.h                       |    3 +
>  xen/include/asm-x86/viommu.h                    |   68 ++
>  xen/include/public/arch-x86/hvm/save.h          |   19 +
>  xen/include/public/domctl.h                     |    7 +
>  xen/include/public/hvm/dm_op.h                  |   39 +
>  xen/include/public/viommu.h                     |   38 +
>  xen/include/xen/hvm/irq.h                       |   20 +-
>  xen/include/xen/sched.h                         |    2 +
>  xen/include/xen/viommu.h                        |   74 ++
>  40 files changed, 2601 insertions(+), 77 deletions(-)
>  create mode 100644 xen/arch/x86/hvm/vvtd.c
>  create mode 100644 xen/arch/x86/viommu.c
>  create mode 100644 xen/common/viommu.c
>  create mode 100644 xen/include/asm-arm/viommu.h
>  create mode 100644 xen/include/asm-x86/viommu.h
>  create mode 100644 xen/include/public/viommu.h
>  create mode 100644 xen/include/xen/viommu.h

Thanks! So you add all this vIOMMU code, but the maximum number of vCPUs
allowed for HVM guests is still limited to 128 (HVM_MAX_VCPUS is not
touched). Are there any missing pieces in order to bump this?

Also, have you tested whether this series works with PVH guests? Boris added
PVH support to Linux not long ago, so you should be able to test it just by
picking up the latest Linux kernel.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel