
Re: [Xen-devel] [PATCH v6 00/10] vnuma introduction



On Fri, 2014-07-18 at 01:49 -0400, Elena Ufimtseva wrote:
> vNUMA introduction
>
Hey Elena!

Thanks for this series, and in particular for this clear and complete
cover letter.

> This series of patches introduces vNUMA topology awareness and
> provides interfaces and data structures to enable vNUMA for
> PV guests. There is a plan to extend this support to dom0 and
> HVM domains.
> 
> vNUMA topology must be supported by the PV guest kernel; the
> corresponding guest kernel patches need to be applied.
> 
> Introduction
> -------------
> 
> vNUMA topology is exposed to the PV guest to improve performance when running
> workloads on NUMA machines. vNUMA-enabled guests may also run on non-NUMA
> machines and still have a virtual NUMA topology visible to them. The Xen vNUMA
> implementation provides a way to run vNUMA-enabled guests on both NUMA and UMA
> machines, and to flexibly map the virtual NUMA topology onto the physical NUMA
> topology.
> 
> Mapping to the physical NUMA topology may be done either manually or
> automatically. By default, every PV domain has one vNUMA node; it is populated
> with default parameters and does not affect performance. For the vNUMA topology
> to be initialized automatically, the configuration file only needs to define
> the number of vNUMA nodes; any vNUMA parameter left undefined is initialized
> to its default value.
> 
> vNUMA topology is currently defined as a set of parameters such as:
>     number of vNUMA nodes;
>     distance table;
>     vnode memory sizes;
>     vcpu-to-vnode mapping;
>     vnode-to-pnode map (for NUMA machines).
> 
I'd include a brief explanation of what each parameter means and does.
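Something along these lines, perhaps. Just as an illustration (the struct and
field names below are made up for this sketch, they are not the ones the series
actually introduces):

#include <stdint.h>

/* Illustrative only: not the actual structures from the series. */
struct vnuma_topology_example {
    unsigned int nr_vnodes;       /* number of virtual NUMA nodes              */
    unsigned int *vdistance;      /* nr_vnodes x nr_vnodes distance table,
                                   * SLIT-style (e.g. 10 = local, 20 = remote) */
    uint64_t *vnode_memsize;      /* amount of memory in each vnode            */
    unsigned int *vcpu_to_vnode;  /* for each vcpu, the vnode it belongs to    */
    unsigned int *vnode_to_pnode; /* for each vnode, the physical node backing
                                   * it (only meaningful on NUMA hosts)        */
};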

>     XEN_DOMCTL_setvnumainfo is used by the toolstack to populate the domain's
> vNUMA topology, either with a user-defined configuration or with the default
> parameters. vNUMA is defined for every PV domain: if no vNUMA configuration is
> found, one vNUMA node is initialized, all vcpus are assigned to it, and all
> other parameters are set to their default values.
> 
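To make the flow above a bit more concrete, here is a rough sketch of the
toolstack side. Only XEN_DOMCTL_setvnumainfo itself comes from the series;
struct xen_domctl and do_domctl() are the usual libxc plumbing, and everything
else (the names, the elided payload copy) is illustrative:

#include <string.h>
#include <xenctrl.h>

/* Rough sketch: the exact domctl payload layout is defined by the series
 * and is not reproduced here. */
static int set_guest_vnuma_example(xc_interface *xch, domid_t domid)
{
    struct xen_domctl domctl;

    memset(&domctl, 0, sizeof(domctl));
    domctl.cmd = XEN_DOMCTL_setvnumainfo;  /* new domctl from this series */
    domctl.domain = domid;

    /* the distance table, vnode memory sizes, vcpu-to-vnode and
     * vnode-to-pnode maps would be copied into the payload here */

    return do_domctl(xch, &domctl);        /* libxc's internal domctl wrapper */
}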
>     XENMEM_getvnumainfo is used by the PV domain to obtain the vNUMA topology
> information from the hypervisor. The guest passes the sizes of the buffers it
> has allocated for the various vNUMA parameters, and the hypervisor fills them
> in with the topology. Future work is required in the toolstack and in the
> hypervisor to allow HVM guests to use these hypercalls.
> 
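For the guest side, here is a sketch of how a PV Linux kernel might retrieve
the topology. HYPERVISOR_memory_op() is the existing PV entry point for
XENMEM_* calls; the payload structure below is a simplification, not the
actual public header from the series:

#include <linux/types.h>
#include <asm/xen/hypercall.h>   /* HYPERVISOR_memory_op() */

/* Simplified, illustrative payload: the guest says how big its buffers are,
 * and the hypervisor fills them in with the topology. */
struct vnuma_query_example {
    unsigned int nr_vnodes;        /* in: buffer capacity; out: actual count */
    unsigned int *vdistance;       /* nr_vnodes * nr_vnodes distances        */
    unsigned int *vcpu_to_vnode;   /* one entry per vcpu                     */
    u64          *vnode_memsize;   /* memory size of each vnode              */
};

static int query_vnuma_example(struct vnuma_query_example *q)
{
    return HYPERVISOR_memory_op(XENMEM_getvnumainfo, q);
}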
> libxl
> 
> libxl allows the vNUMA topology to be defined in the configuration file and
> verifies that the configuration is correct. libxl also verifies the
> vnode-to-pnode mapping and uses it when running on a NUMA machine with
> automatic placement disabled. In case of an incorrect or insufficient
> configuration, a single vNUMA node is initialized and populated with default
> values.
> 
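FWIW, the fallback described here (and in the XEN_DOMCTL_setvnumainfo
paragraph above) basically boils down to something like this. Purely
illustrative, not libxl's actual code:

#include <stdint.h>

/* Collapse to a single vnode holding all the memory and all the vcpus,
 * as described above for incorrect/insufficient configurations. */
static void vnuma_fallback_example(unsigned int max_vcpus, uint64_t memkb,
                                   unsigned int *nr_vnodes,
                                   uint64_t *vnode_memkb,
                                   unsigned int *vcpu_to_vnode)
{
    *nr_vnodes = 1;                  /* one vNUMA node...                */
    vnode_memkb[0] = memkb;          /* ...holding all the guest memory, */
    for (unsigned int v = 0; v < max_vcpus; v++)
        vcpu_to_vnode[v] = 0;        /* ...with all vcpus assigned to it */
}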
Well, about automatic placement, I don't think we need to disable vNUMA when
it's enabled. In fact, automatic placement will try to place the
domain on one node only, and yes, if it manages to do so, no point
enabling vNUMA (unless the user asked for it, as you're saying). OTOH,
if automatic placement puts the domain on 2 or more nodes (e.g., because
the domain is 4G, and there is only 3G free on each node), then I think
vNUMA should chime in, and provide the guest with an appropriate,
internally built, NUMA topology.

> libxc
> 
> libxc builds the vnode memory ranges for the guest and applies the necessary
> alignment to the addresses. It also takes the guest's e820 memory map into
> account. The domain memory is then allocated, with the vnode-to-pnode mapping
> used to determine the target physical node for each vnode. If this mapping is
> not defined, the host is not a NUMA machine, or automatic NUMA placement is
> enabled, the default, non node-specific allocation is used.
> 
Ditto. However, automatic placement does not do much at the libxc level
right now, and I think that should continue to be the case.
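
Just to illustrate the allocation step from the quoted paragraph:
xc_domain_populate_physmap_exact() and XENMEMF_exact_node() are existing
libxc/Xen interfaces, while the function itself and all the variable names
are made up for this sketch:

#include <xenctrl.h>

/* Walk the vnodes and, when a vnode-to-pnode map is available, steer each
 * vnode's memory to its target physical node; with memflags == 0 this
 * degrades to the default, non node-specific allocation. */
static int populate_vnodes_example(xc_interface *xch, uint32_t domid,
                                   unsigned int nr_vnodes,
                                   const unsigned long *vnode_pages,
                                   const unsigned int *vnode_to_pnode,
                                   xen_pfn_t *pfns)
{
    unsigned long done = 0;
    int rc = 0;

    for (unsigned int v = 0; v < nr_vnodes && !rc; v++) {
        unsigned int memflags = 0;

        if (vnode_to_pnode)  /* NUMA host, mapping defined, no auto placement */
            memflags = XENMEMF_exact_node(vnode_to_pnode[v]);

        rc = xc_domain_populate_physmap_exact(xch, domid, vnode_pages[v],
                                              0 /* order */, memflags,
                                              &pfns[done]);
        done += vnode_pages[v];
    }

    return rc;
}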

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

