WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-devel

RE: [Xen-devel] [PATCH 0/4] [HVM][RFC] NUMA support in HVM guests

To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>, "Andre Przywara" <andre.przywara@xxxxxxx>
Subject: RE: [Xen-devel] [PATCH 0/4] [HVM][RFC] NUMA support in HVM guests
From: "Duan, Ronghui" <ronghui.duan@xxxxxxxxx>
Date: Fri, 23 Nov 2007 16:42:32 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 23 Nov 2007 00:43:50 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <51CFAB8CB6883745AE7B93B3E084EBE2010B7623@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcfxTv3SqiNT2hnYTruUAIr8OkLMJQB9o2GwDpY/zcA=
Thread-topic: [Xen-devel] [PATCH 0/4] [HVM][RFC] NUMA support in HVM guests
Hi Andre,

I read your patches and Anthony's comments, and wrote a patch based on them:

1:	If the guest sets numanodes=n (the default is 1, meaning the guest
will be restricted to one node), the hypervisor will choose the begin
node to pin this guest to using round robin. But the method I use needs
a spin_lock to prevent two domains from being created at the same time.
If there are better methods, I hope for your suggestions.

2:	Pass the node parameter in the higher bits of flags when creating
the domain. The domain can then record the node information in the
domain struct for further use, i.e. to show which node to pin to in
setup_guest. With this method, your patch can balance across nodes
simply, like below:

> +    for ( i = 0; i <= dominfo.max_vcpu_id; i++ )
> +    {
> +        node = ( i * numanodes ) / ( dominfo.max_vcpu_id + 1 ) +
> +               dominfo.first_node;
> +        xc_vcpu_setaffinity(xc_handle, dom, i, nodemasks[node]);
> +    }
>
	BTW: I can't find your mail with Patch 2/4 ("introduce CPU affinity
for the allocate_physmap call"), so I can't apply that patch to my source.
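For point 1, a minimal sketch of the round-robin begin-node choice might
look like the following (hypothetical names; in the hypervisor the counter
would be real per-system state guarded by the spin_lock mentioned above):

```c
#include <assert.h>

#define NR_NODES 4   /* assumed node count for this sketch */

/* Hypothetical stand-in for hypervisor state; in Xen this cursor
 * would be protected by the spin_lock mentioned above. */
static unsigned int next_node;

/* Pick the begin node for a new guest and advance the round-robin
 * cursor.  Two concurrent domain creations must not run this body
 * at the same time, hence the lock in the real patch. */
static unsigned int pick_begin_node(void)
{
    unsigned int node = next_node;

    next_node = (next_node + 1) % NR_NODES;
    return node;
}
```

An alternative to the lock would be an atomic fetch-and-increment on the
cursor, taking the result modulo the node count.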

I have just begun my "NUMA trip", so I appreciate your suggestions. Thanks.

Best Regards
Ronghui

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Xu, Anthony
Sent: Monday, September 10, 2007 9:14 AM
To: Andre Przywara
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] [PATCH 0/4] [HVM] NUMA support in HVM guests

Andre

>> This always starts from node0, which may make node0 very busy, while
>> other nodes may not have much work.
>This is true, I encountered this before, but didn't want to wait longer
>before sending the patches. Actually the "numanodes=n" config file
>option shouldn't specify the number of nodes, but a list of specific
>nodes to use, like "numanodes=0,2" to pin the domain on the first and
>the third node.

That's a good idea, specifying the nodes to use.
We can use "numanodes=0,2" in the configure file, and it will be converted
into a bitmap (unsigned long numanodes), where every bit indicates one node.
When the guest doesn't specify "numanodes", XEN will need to choose proper
nodes for the guest, so XEN also needs to implement some algorithm to choose
proper nodes.
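The conversion described above could be sketched like this (plain C with a
hypothetical helper name, not actual Xen tools code; it assumes the config
value is a comma-separated list of decimal node numbers):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: turn a "0,2"-style node list (the value of numanodes= in the
 * config file) into a bitmap, where bit n set means node n is allowed. */
static unsigned long nodelist_to_bitmap(const char *list)
{
    unsigned long mask = 0;
    char *end;

    while (*list) {
        long n = strtol(list, &end, 10);

        mask |= 1UL << n;
        if (*end != ',')
            break;
        list = end + 1;  /* skip the comma and parse the next node */
    }
    return mask;
}
```

So "0,2" becomes 0x5, and the hypervisor side can test membership with a
plain bit test instead of reparsing the string.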


>> We also need to add some limitations for numanodes. The number of vcpus
>> on a vnode should not be larger than the number of pcpus on the pnode.
>> Otherwise vcpus belonging to a domain run on the same pcpu, which is not
>> what we want.
>Would be nice, but for the moment I would push this into the sysadmin's
>responsibility.
It's reasonable.
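If that limitation were ever enforced in code rather than left to the
sysadmin, the check could be a small helper along these lines (a
hypothetical sketch, assuming vcpus are spread evenly over the requested
nodes):

```c
#include <assert.h>

/* Sketch of the limit discussed above: a virtual node must not carry
 * more vcpus than its physical node has pcpus, or vcpus of one domain
 * end up time-sharing the same pcpu. */
static int numa_layout_ok(unsigned int nr_vcpus, unsigned int nr_nodes,
                          unsigned int pcpus_per_node)
{
    /* ceiling division: vcpus per vnode when spread evenly */
    unsigned int vcpus_per_vnode = (nr_vcpus + nr_nodes - 1) / nr_nodes;

    return vcpus_per_vnode <= pcpus_per_node;
}
```

Domain creation could then refuse (or warn about) a numanodes setting that
oversubscribes any single node.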


>After all, my patches were more a discussion base than a final solution,
>so I see there is more work to do. At the moment I am working on
>including PV guests.
>
That's a very good start for supporting guest NUMA.



Regards
- Anthony

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
