Re: [Xen-devel] [PATCH 00/11] PV NUMA Guests

To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 00/11] PV NUMA Guests
From: Dulloor <dulloor@xxxxxxxxx>
Date: Mon, 5 Apr 2010 23:51:57 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>

Dan, sorry, I missed one of your previous mails on this topic, so I
have included answers to those questions here as well.

> Could you comment on if/how these work when memory is more
> dynamically allocated (e.g. via an active balloon driver
> in a guest)?
The balloon driver is also made NUMA-aware and uses the same
enlightenment to derive the guest-node to physical-node mapping.
Please refer to my previously submitted patch for this
(http://old.nabble.com/Xen-devel--XEN-PATCH---Linux-PVOPS--ballooning-on-numa-domains-td26262334.html).
I intend to send out a refreshed patch once basic guest NUMA support
is checked in.
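
To make that concrete, here is a rough sketch of the idea (purely
illustrative; the identifiers below are made up and are not the ones
used in the patch):

#define MAX_VNODES 8

/* vnode -> pnode mapping obtained via the NUMA enlightenment */
static unsigned int guest_to_phys_node[MAX_VNODES];

/* Stub standing in for the driver routine that releases pages
 * belonging to a given virtual node back to the hypervisor. */
static long balloon_out_pages_from_vnode(unsigned int vnode, long nr_pages)
{
    (void)vnode;
    return nr_pages;
}

/* When the hypervisor wants memory back on physical node 'pnode',
 * take the pages from a virtual node that is backed by that node. */
static long balloon_out_on_phys_node(unsigned int pnode, long nr_pages)
{
    unsigned int vnode;

    for (vnode = 0; vnode < MAX_VNODES; vnode++)
        if (guest_to_phys_node[vnode] == pnode)
            return balloon_out_pages_from_vnode(vnode, nr_pages);

    return -1; /* no virtual node is backed by this physical node */
}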

> Specifically, I'm wondering if you are running
> multiple domains, all are actively ballooning, and there
> is a mix of guest NUMA policies, how do you ensure that
> non-CONFINE'd domains don't starve a CONFINE'd domain?
We first try to CONFINE a domain, and only then proceed to STRIPE or
SPLIT (if the guest is capable) the domain. So, in this (automatic)
global domain memory allocation scheme, there is no possibility of
starvation from the memory point of view. Hope I got your question
right.
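
Roughly, the (automatic) selection works like the sketch below (again
illustrative only; these are not the actual libxc names):

enum numa_strategy { CONFINE, SPLIT, STRIPE };

/* mem_pages: domain memory size in pages.
 * max_free_on_one_node: largest free-page count on any single node.
 * guest_numa_capable: whether the kernel advertises guest NUMA
 * support via the elfnote. */
static enum numa_strategy pick_strategy(unsigned long mem_pages,
                                        unsigned long max_free_on_one_node,
                                        int guest_numa_capable)
{
    if (mem_pages <= max_free_on_one_node)
        return CONFINE;   /* fits on a single node */
    if (guest_numa_capable)
        return SPLIT;     /* split across a min-set of nodes, enlighten guest */
    return STRIPE;        /* stripe across a max-set of nodes */
}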

> I'd be interested in your thoughts on numa-aware tmem
> as well as the other dynamic memory mechanisms in Xen 4.0.
> Tmem is special in that it uses primarily full-page copies
> from/to tmem-space to/from guest-space so, assuming the
> interconnect can pipeline/stream a memcpy, overhead of
> off-node memory vs on-node memory should be less
> noticeable.  However tmem uses large data structures
> (rbtrees and radix-trees) and the lookup process might
> benefit from being NUMA-aware.
For tmem, I was thinking of the ability to specify a set of nodes
from which tmem-space memory is preferentially allocated, which could
be derived from the domain's NUMA enlightenment. But, as you
mentioned, the full-page copy overhead is less noticeable (at least
on my smaller NUMA machine), so the rate of copies would determine
whether this is worth doing to reduce inter-node traffic. What do you
suggest? I was looking at the data structures too.
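
A minimal sketch of what I have in mind (purely hypothetical; none of
these names exist in the current tmem code):

#include <stdint.h>

/* Per-pool node preference, derived from the client domain's NUMA
 * enlightenment: bit n set => prefer allocating tmem pages on node n. */
struct tmem_node_pref {
    uint64_t node_mask;
};

/* Pick the node for a new tmem page: the first preferred node with a
 * free page wins; otherwise fall back to any node with free memory. */
static int pick_tmem_node(const struct tmem_node_pref *pref, int nr_nodes,
                          int (*node_has_free_page)(int node))
{
    int node;

    for (node = 0; node < nr_nodes; node++)
        if ((pref->node_mask & (1ULL << node)) && node_has_free_page(node))
            return node;

    for (node = 0; node < nr_nodes; node++)
        if (node_has_free_page(node))
            return node;

    return -1; /* no free memory on any node */
}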

> Also, I will be looking into adding some page-sharing
> techniques into tmem in the near future.  This (and the
> existing page sharing feature just added to 4.0) may
> create some other interesting challenges for NUMA-awareness.
I have just started reading up on the memory-sharing feature of Xen.
I would be glad to get your input on the NUMA challenges there.

thanks
dulloor


On Mon, Apr 5, 2010 at 10:52 AM, Dan Magenheimer
<dan.magenheimer@xxxxxxxxxx> wrote:
> Could you comment on if/how these work when memory is more
> dynamically allocated (e.g. via an active balloon driver
> in a guest)?  Specifically, I'm wondering if you are running
> multiple domains, all are actively ballooning, and there
> is a mix of guest NUMA policies, how do you ensure that
> non-CONFINE'd domains don't starve a CONFINE'd domain?
>
> Thanks,
> Dan
>
>> -----Original Message-----
>> From: Dulloor [mailto:dulloor@xxxxxxxxx]
>> Sent: Sunday, April 04, 2010 1:30 PM
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: Keir Fraser
>> Subject: [Xen-devel] [PATCH 00/11] PV NUMA Guests
>>
>> This set of patches implements virtual NUMA enlightenment to support
>> NUMA-aware PV guests. In more detail, the patches implement the
>> following:
>>
>> * For NUMA systems, the following memory allocation strategies are
>> implemented:
>>  - CONFINE : Confine the VM memory allocation to a single node. As
>> opposed to the current method of doing this in python, the patch
>> implements this in libxc (along with the other strategies) and with
>> assurance that the memory actually comes from the selected node.
>>  - STRIPE : If the VM memory doesn't fit in a single node and the VM
>> is not compiled with guest NUMA support, the memory is allocated
>> striped across a selected max-set of nodes.
>>  - SPLIT : If the VM memory doesn't fit in a single node and the VM
>> is compiled with guest NUMA support, the memory is allocated split
>> (equally, for now) across the min-set of nodes. The VM is then made
>> aware of this NUMA allocation (virtual NUMA enlightenment).
>>  - DEFAULT : This is the existing allocation scheme.
>>
>> * If guest NUMA support is compiled into the PV guest, we add
>> numa-guest-support to the Xen features elfnote. The Xen tools use
>> this to determine whether the SPLIT strategy can be applied.
>>
>> * The PV guest uses the virtual NUMA enlightenment to set up its
>> NUMA layout (at the time of initmem_init).
>>
>> Please comment.
>>
>> -dulloor
>>
>> Signed-off-by: Dulloor Rao <dulloor@xxxxxxxxxx>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel