
[Xen-devel] [RFC XEN PATCH 00/16] Add vNVDIMM support to HVM domains



Overview
========
This RFC Xen patch series along with corresponding patch series of
QEMU, Linux kernel and ndctl implements the basic functionality of
vNVDIMM for HVM domains.

It currently supports assigning host pmem devices, or files on host
pmem devices, to HVM domains as virtual NVDIMM devices. Other
functionality, including DSM, hotplug, RAS and flush via ACPI, will be
implemented in later patches.

Design and Implementation
=========================
The design of vNVDIMM can be found at
  https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg01921.html.

All patch series can be found at
  Xen:          https://github.com/hzzhan9/xen.git nvdimm-rfc-v1
  QEMU:         https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v1
  Linux kernel: https://github.com/hzzhan9/nvdimm.git xen-nvdimm-rfc-v1
  ndctl:        https://github.com/hzzhan9/ndctl.git pfn-xen-rfc-v1

For Xen patches,
 - Patches 01 - 05 implement the hypervisor part that maps host pmem
   pages to guests;
 - Patches 06 - 11 implement the mechanism to pass guest ACPI tables and
   namespace devices from QEMU;
 - Patch 12 parses the xl vNVDIMM configs;
 - Patches 13 - 16 add the toolstack part that maps host pmem devices,
   or files on host pmem devices, to guests.

How to test
===========
1. Check out Xen and QEMU from the above repositories and branches.
   Replace the default qemu-xen with the checked-out QEMU, then build
   and install Xen.
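
   One possible way to do this, sketched below, is to let the Xen build
   fetch and build the patched QEMU itself by overriding the
   QEMU_UPSTREAM_URL / QEMU_UPSTREAM_REVISION variables from Config.mk;
   this relies only on the stock Xen build machinery and is not
   something this series prescribes:

        git clone -b nvdimm-rfc-v1 https://github.com/hzzhan9/xen.git
        cd xen
        ./configure
        # make the tools build fetch the patched qemu-xen tree and branch
        make dist QEMU_UPSTREAM_URL=https://github.com/hzzhan9/qemu.git \
                  QEMU_UPSTREAM_REVISION=xen-nvdimm-rfc-v1
        make install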

2. Check out the Linux kernel from the above repository and branch.
   Build and install it as the Dom0 kernel. Make sure the following
   kernel configs are set to y or m:
       CONFIG_ACPI_NFIT
       CONFIG_LIBNVDIMM
       CONFIG_BLK_DEV_PMEM
       CONFIG_NVDIMM_PFN
       CONFIG_FS_DAX
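
   As a quick sanity check (assuming you are in the kernel source tree
   and a .config has already been generated), you can verify them before
   building:

        # each option should appear with =y or =m
        grep -E 'CONFIG_(ACPI_NFIT|LIBNVDIMM|BLK_DEV_PMEM|NVDIMM_PFN|FS_DAX)=' .config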

3. Build and install ndctl from the above repository and branch.
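
   A typical flow (assuming the autotools scripts shipped in that ndctl
   tree; the exact package prerequisites vary by distro) is:

        git clone -b pfn-xen-rfc-v1 https://github.com/hzzhan9/ndctl.git
        cd ndctl
        ./autogen.sh && ./configure && make && make install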

4. Boot Xen and the Dom0 Linux kernel built in steps 1 and 2.

5. Suppose there is one host pmem namespace that is recognized by the
   Dom0 Linux NVDIMM driver as namespace0.0 and block device
   /dev/pmem0. Switch it to Xen mode with ndctl:
       ndctl create-namespace -f -e namespace0.0 -m memory -M xen

   If the above command succeeds, the following messages, or similar,
   should appear in the Xen dmesg:
       (XEN) pmem: pfns     0xa40000 - 0xb40000
       (XEN)       reserved 0xa40002 - 0xa44a00
       (XEN)       data     0xa44a00 - 0xb40000

   The first line shows the physical pages of the entire pmem
   namespace. The second line shows the physical pages of the reserved
   area in the namespace, which Xen uses to hold its management data
   structures (i.e. the frame table and M2P table). The third line
   shows the physical pages in the namespace that can be used by Dom0
   and HVM domUs.
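
   To put rough numbers on the example above (illustrative arithmetic
   only, assuming 4 KiB pages):

        # whole namespace: (0xb40000 - 0xa40000) pages x 4 KiB = 4 GiB
        echo $(( (0xb40000 - 0xa40000) * 4 / 1024 / 1024 ))   # prints 4 (GiB)
        # reserved area: (0xa44a00 - 0xa40002) pages x 4 KiB, i.e. roughly
        # 74 MiB of Xen management structures for this namespace
        echo $(( (0xa44a00 - 0xa40002) * 4 ))                 # prints 75768 (KiB)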

6-a. You can map the entire namespace to an HVM domU by adding the
   following line to its xl config file:
       vnvdimms = [ '/dev/pmem0' ]
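
   For context, a complete (if minimal) HVM domain config using it could
   look like the sketch below; everything except the vnvdimms line is
   plain xl syntax, and all names, sizes and paths are only illustrative:

        builder  = "hvm"
        name     = "vnvdimm-guest"
        memory   = 4096
        vcpus    = 2
        disk     = [ '/path/to/guest/image,raw,xvda,rw' ]
        vnvdimms = [ '/dev/pmem0' ]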

6-b. Or, you can map a file on the namespace to an HVM domU:
        mkfs.ext4 /dev/pmem0
        mkdir -p /mnt/dax
        mount -o dax /dev/pmem0 /mnt/dax/
        dd if=/dev/zero of=/mnt/dax/foo bs=1G count=2
   and add the following line to the domain config:
       vnvdimms = [ '/mnt/dax/foo' ]
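
   Before adding that line, you can double-check that the file really
   lives on a DAX-enabled mount:

        grep /mnt/dax /proc/mounts    # 'dax' should be among the mount options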

7. If the NVDIMM driver is enabled in the guest Linux kernel, a block
   device /dev/pmem0 will be recognized by the guest kernel, and you
   can use it as usual.
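
   For instance, assuming the guest kernel enables the same configs as
   in step 2, you can put a DAX filesystem on it from inside the guest
   (the mount point name is arbitrary):

        mkfs.ext4 /dev/pmem0
        mkdir -p /mnt/vnvdimm
        mount -o dax /dev/pmem0 /mnt/vnvdimm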

You can run the above steps in a nested virtualization environment
provided by KVM, which is especially useful while NVDIMM hardware is
not yet widely available.
1. Load the KVM module with nested virtualization enabled:
       modprobe kvm-intel nested=1
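
   You can confirm that nesting took effect:
        cat /sys/module/kvm_intel/parameters/nested    # should print Y (or 1)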
       
2. Create a file as the backend of the virtual NVDIMM device used in L1.
       dd if=/dev/zero of=/tmp/nvdimm bs=1G count=4
       
3. Start QEMU v2.6 or newer.
       qemu-system-x86_64 -enable-kvm -smp 4 -cpu qemu64,+vmx \
                          -hda /path/to/guest/image \
                          -machine pc,nvdimm \
                          -m 8G,slots=2,maxmem=16G \
                          -object memory-backend-file,id=mem1,share,mem-path=/tmp/nvdimm,size=4G \
                          -device nvdimm,memdev=mem1,id=nv1
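
   Once the L1 guest boots, you can sanity-check that its kernel sees
   the emulated NVDIMM (assuming that kernel has the NVDIMM driver
   enabled) before repeating the steps inside it:

        ls /dev/pmem*    # the emulated DIMM should show up as a pmem block device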

4. Follow the previous steps 1 - 7 inside L1.
       

Haozhong Zhang (16):
  01/ x86_64/mm: explicitly specify the location to place the frame table
  02/ x86_64/mm: explicitly specify the location to place the M2P table
  03/ xen/x86: add a hypercall XENPF_pmem_add to report host pmem regions
  04/ xen/x86: add XENMEM_populate_pmemmap to map host pmem pages to guest
  05/ xen/x86: release pmem pages at domain destroy
  06/ tools: reserve guest memory for ACPI from device model
  07/ tools/libacpi: add callback acpi_ctxt.p2v to get a pointer from physical address
  08/ tools/libacpi: expose details of memory allocation callback
  09/ tools/libacpi: add callbacks to access XenStore
  10/ tools/libacpi: add a simple AML builder
  11/ tools/libacpi: load ACPI built by the device model
  12/ tools/libxl: build qemu options from xl vNVDIMM configs
  13/ tools/libxl: add support to map host pmem device to guests
  14/ tools/libxl: add support to map files on pmem devices to guests
  15/ tools/libxl: handle return code of libxl__qmp_initializations()
  16/ tools/libxl: initiate pmem mapping via qmp callback

 tools/firmware/hvmloader/Makefile       |   3 +-
 tools/firmware/hvmloader/util.c         |  70 +++++++
 tools/firmware/hvmloader/util.h         |   3 +
 tools/firmware/hvmloader/xenbus.c       |  20 ++
 tools/libacpi/acpi2_0.h                 |   2 +
 tools/libacpi/aml_build.c               | 254 +++++++++++++++++++++++++
 tools/libacpi/aml_build.h               |  83 ++++++++
 tools/libacpi/build.c                   | 216 +++++++++++++++++++++
 tools/libacpi/libacpi.h                 |  19 ++
 tools/libxc/include/xc_dom.h            |   1 +
 tools/libxc/include/xenctrl.h           |   8 +
 tools/libxc/xc_dom_x86.c                |   7 +
 tools/libxc/xc_domain.c                 |  14 ++
 tools/libxl/Makefile                    |   5 +-
 tools/libxl/libxl_create.c              |   4 +-
 tools/libxl/libxl_dm.c                  | 113 ++++++++++-
 tools/libxl/libxl_dom.c                 |  25 +++
 tools/libxl/libxl_nvdimm.c              | 281 +++++++++++++++++++++++++++
 tools/libxl/libxl_nvdimm.h              |  45 +++++
 tools/libxl/libxl_qmp.c                 |  64 +++++++
 tools/libxl/libxl_types.idl             |   8 +
 tools/libxl/libxl_x86_acpi.c            |  36 ++++
 tools/libxl/xl_cmdimpl.c                |  16 ++
 xen/arch/x86/Makefile                   |   1 +
 xen/arch/x86/domain.c                   |   5 +
 xen/arch/x86/platform_hypercall.c       |   7 +
 xen/arch/x86/pmem.c                     | 325 ++++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/mm.c                |  77 +++++++-
 xen/common/domain.c                     |   3 +
 xen/common/memory.c                     |  31 +++
 xen/include/asm-x86/mm.h                |   4 +
 xen/include/public/hvm/hvm_xs_strings.h |  11 ++
 xen/include/public/memory.h             |  14 +-
 xen/include/public/platform.h           |  14 ++
 xen/include/xen/pmem.h                  |  42 +++++
 xen/include/xen/sched.h                 |   3 +
 xen/xsm/flask/hooks.c                   |   1 +
 37 files changed, 1818 insertions(+), 17 deletions(-)
 create mode 100644 tools/libacpi/aml_build.c
 create mode 100644 tools/libacpi/aml_build.h
 create mode 100644 tools/libxl/libxl_nvdimm.c
 create mode 100644 tools/libxl/libxl_nvdimm.h
 create mode 100644 xen/arch/x86/pmem.c
 create mode 100644 xen/include/xen/pmem.h

-- 
2.10.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

