
Re: [xen-devel][vNUMA v2][PATCH 0/8] VM memory mgmt for NUMA



On Sun, Aug 01, 2010 at 03:00:31PM -0700, Dulloor wrote:
> Sorry for the delay. I have been busy with other things.

No problem. Could you CC Andre on these patches in the future?
His email is Andre Przywara <andre.przywara@xxxxxxx>.

In the meantime, I am CC-ing him here.
> 
> 
> Summary of the patches :
> In this patch series, we implement the following :
> 
> [1] Memory allocation schemes for VMs on NUMA platforms : The specific
> allocation strategies available as configuration parameters are listed
> below (an example config follows the list) -
> 
>         * CONFINE - Confine the VM memory to a single NUMA node.
>           [config]
>           strategy = "confine"
> 
>         * STRIPE - Stripe the VM memory across a specified number of nodes.
>           [config]
>           strategy = "stripe"
>           vnodes = <num>
>           stripesz = <stripe size, in pages>
> 
>         * SPLIT - Split the VM memory across a specified number of nodes
>           to construct virtual nodes, which are then exposed to the VM.
>           For now, we require the number of vnodes and the number of vcpus
>           to be powers of 2 (for symmetric distribution), as opposed to
>           allowing arbitrary multiples.
>           [config]
>           strategy = "split"
>           vnodes = <num>
> 
>         * AUTO - Choose a scheme automatically, based on the memory
>           distribution across the nodes. The strategy attempts CONFINE and
>           then STRIPE (dividing memory into equal parts), in that order. If
>           both fail, it reverts to the existing non-NUMA allocation.
>           [config]
>           strategy = "auto"
> 
>         * No Configuration - No change from existing behaviour.
> 
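> As an illustration, a complete config using the split strategy could look
> like the following (the memory/vcpus values are arbitrary placeholders;
> only strategy/vnodes are introduced by this series) -
>
>           memory = 4096
>           vcpus = 4
>           strategy = "split"
>           vnodes = 2
>
> With such a config, the 4G of guest memory would be split into 2 virtual
> nodes of 2G each (presumably with 2 vcpus per vnode, given the symmetric
> distribution), which are then exposed to the guest.
>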
> [2] HVM NUMA guests : If the user specifies the "split" strategy, we expose
> the virtual nodes to the HVM guest via ACPI (SRAT/SLIT).
> 
> [3] Disable migration : For now, the allocation information is not preserved
> across migration, so we just disable migration. We will address this in the
> next patch series.
> 
> [4] PoD (Populate-on-Demand) : For now, PoD is disabled internally if a NUMA
> allocation strategy is specified and applied to a VM. We will address this
> in the next patch series.
> 
> Changes from the previous version :
> [1] The guest interface structure has been modified per Keir's suggestions.
> Most of the changes from the previous version are due to this.
> [2] Cleaned up debug code in setup_guest (spotted by George).
> 
> 
> -Dulloor
> 
> Signed-off-by: Dulloor <dulloor@xxxxxxxxx>
> 
> --
>  tools/firmware/hvmloader/acpi/acpi2_0.h |   64 ++++++
>  tools/firmware/hvmloader/acpi/build.c   |  122 ++++++++++++
>  tools/libxc/Makefile                    |    2 +
>  tools/libxc/ia64/xc_ia64_hvm_build.c    |    1 +
>  tools/libxc/xc_cpumap.c                 |   88 +++++++++
>  tools/libxc/xc_cpumap.h                 |  113 +++++++++++
>  tools/libxc/xc_dom_numa.c               |  901 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tools/libxc/xc_dom_numa.h               |   73 +++++++
>  tools/libxc/xc_hvm_build.c              |  574 ++++++++++++++++++++++++++++++++++++++++------------------
>  tools/libxc/xenctrl.h                   |   19 +
>  tools/libxc/xenguest.h                  |    1 +
>  tools/libxl/libxl.h                     |    1 +
>  tools/libxl/libxl_dom.c                 |    1 +
>  tools/libxl/xl_cmdimpl.c                |   44 ++++
>  tools/python/xen/lowlevel/xc/xc.c       |    2 +-
>  xen/include/public/arch-x86/dom_numa.h  |   91 +++++++++
>  xen/include/public/dom_numa.h           |   33 +++
>  xen/include/public/hvm/hvm_info_table.h |   10 +-
>  18 files changed, 1954 insertions(+), 186 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel