Re: [Xen-devel] [PATCH RFC v2 0/7] xen: vNUMA introduction
- To: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
- From: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
- Date: Fri, 13 Sep 2013 12:19:02 +0100
- Cc: keir@xxxxxxx, Ian.Campbell@xxxxxxxxxx, stefano.stabellini@xxxxxxxxxxxxx, dario.faggioli@xxxxxxxxxx, lccycc123@xxxxxxxxx, ian.jackson@xxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxx, JBeulich@xxxxxxxx, sw@xxxxxxxxx
- Delivery-date: Fri, 13 Sep 2013 11:19:26 +0000
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
On 13/09/13 09:49, Elena Ufimtseva wrote:
> This series of patches introduces vNUMA topology awareness and
> provides the interfaces and data structures to enable vNUMA for
> PV domU guests.
> vNUMA topology support must also be present in the PV guest
> kernel; the corresponding patches should be applied.
> Introduction
> ------------
> vNUMA topology is exposed to the PV guest to improve performance
> when running workloads on NUMA machines.
> The Xen vNUMA implementation provides a way to create vNUMA-enabled
> guests on NUMA/UMA machines and to map the vNUMA topology to
> physical NUMA in an optimal way.
> Xen vNUMA support
> -----------------
> The current set of patches introduces a subop hypercall that is
> available to enlightened PV guests with the vNUMA patches applied.
> The domain structure was modified to reflect the per-domain vNUMA
> topology for use in other vNUMA-aware subsystems (e.g. ballooning).
> libxc
> -----
> libxc provides interfaces to build PV guests with vNUMA support
> and, on NUMA machines, performs the initial memory allocation on
> physical NUMA nodes. This is implemented by utilizing the nodemap
> formed by automatic NUMA placement. Details are in patch #3.
> libxl
> -----
> libxl provides a way to predefine the vNUMA topology in the VM
> config: the number of vnodes, the memory arrangement, the
> vcpu-to-vnode assignment, and the distance map.
> PV guest
> --------
> As of now, only PV guests can take advantage of vNUMA functionality.
> The vNUMA Linux patches should be applied and NUMA support should be
> compiled into the kernel.
> Example of booting a vNUMA-enabled PV domU:
>
> NUMA machine:
> cpu_topology :
> cpu:    core    socket  node
>   0:       0       0       0
>   1:       1       0       0
>   2:       2       0       0
>   3:       3       0       0
>   4:       0       1       1
>   5:       1       1       1
>   6:       2       1       1
>   7:       3       1       1
> numa_info :
> node:  memsize  memfree  distances
>    0:    17664    12243    10,20
>    1:    16384    11929    20,10
> VM config:
> memory = 16384
> vcpus = 8
> name = "rcbig"
> vnodes = 8
> vnumamem = "2g, 2g, 2g, 2g, 2g, 2g, 2g, 2g"
> vcpu_to_vnode = "5 6 7 4 3 2 1 0"
This was a bit confusing for me, as the table above and the config
don't seem to match.
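For instance, a config that mirrors the two physical nodes of the machine above might look like this (a sketch assuming the option names from the quoted text; untested):

```
memory = 16384
vcpus = 8
name = "rcbig"
vnodes = 2
vnumamem = "8g, 8g"
vcpu_to_vnode = "0 0 0 0 1 1 1 1"
```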
> Patchset applies to the latest Xen tree:
> commit e008e9119d03852020b93e1d4da9a80ec1af9c75
> Available at http://git.gitorious.org/xenvnuma/xenvnuma.git
Thanks for the git repo. It's probably a good idea in the future to
make a branch for each series of patches you post -- e.g., vnuma-v2 or
something like that -- so that even if you do more updates / development
people can still have access to the old set of patches. (Or have access
to the old set while you are preparing the new set.)
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel