
Re: [Xen-devel] [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity()


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  • Date: Tue, 24 Jan 2012 10:56:35 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 24 Jan 2012 09:57:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 01/24/2012 10:33 AM, Ian Campbell wrote:
On Tue, 2012-01-24 at 05:54 +0000, Juergen Gross wrote:
# HG changeset patch
# User Juergen Gross<juergen.gross@xxxxxxxxxxxxxx>
# Date 1327384410 -3600
# Node ID 08232960ff4bed750d26e5f1ff53972fee9e0130
# Parent  99f98e501f226825fbf16f6210b4b7834dff5df1
switch to dynamically allocated cpumask in
domain_update_node_affinity()

cpumasks should be allocated dynamically rather than on the stack.

Signed-off-by: juergen.gross@xxxxxxxxxxxxxx

diff -r 99f98e501f22 -r 08232960ff4b xen/common/domain.c
--- a/xen/common/domain.c       Tue Jan 24 06:53:06 2012 +0100
+++ b/xen/common/domain.c       Tue Jan 24 06:53:30 2012 +0100
@@ -333,23 +333,27 @@ struct domain *domain_create(

  void domain_update_node_affinity(struct domain *d)
  {
-    cpumask_t cpumask;
+    cpumask_var_t cpumask;
      nodemask_t nodemask = NODE_MASK_NONE;
      struct vcpu *v;
      unsigned int node;

-    cpumask_clear(&cpumask);
+    if ( !zalloc_cpumask_var(&cpumask) )
+        return;

If this ends up always failing we will never set node_affinity to
anything at all. Granted, that is already a pretty nasty situation to be
in, but perhaps setting the affinity to NODE_MASK_ALL on failure would be
slightly more robust?

Hmm, I really don't know.

node_affinity is only used in alloc_heap_pages(), which will fall back to other
nodes if no memory is found on the nodes in the mask.

OTOH this implementation might change in the future.

The question is whether node_affinity should contain a subset or a
superset of the nodes the domain is running on.

What should be done if allocating a cpumask fails later? Should node_affinity
be set to NODE_MASK_ALL/NONE, or should it be left untouched, on the assumption
that a real change is a rare event?
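
For illustration, the NODE_MASK_ALL fallback Ian suggests could look roughly
like the sketch below. This is a sketch only, not part of the posted patch: on
allocation failure it widens node_affinity to "any node" instead of returning
with the old value.

/* Sketch only: variant of domain_update_node_affinity() with the
 * NODE_MASK_ALL fallback discussed above. */
void domain_update_node_affinity(struct domain *d)
{
    cpumask_var_t cpumask;
    nodemask_t nodemask = NODE_MASK_NONE;
    struct vcpu *v;
    unsigned int node;

    if ( !zalloc_cpumask_var(&cpumask) )
    {
        /* Allocation failed: fall back to "any node" rather than
         * leaving a possibly stale node_affinity behind. */
        spin_lock(&d->node_affinity_lock);
        d->node_affinity = NODE_MASK_ALL;
        spin_unlock(&d->node_affinity_lock);
        return;
    }

    spin_lock(&d->node_affinity_lock);

    /* Union of all vcpu affinities; the mask starts out zeroed. */
    for_each_vcpu ( d, v )
        cpumask_or(cpumask, cpumask, v->cpu_affinity);

    /* Keep every online node that contains at least one of those cpus. */
    for_each_online_node ( node )
        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
            node_set(node, nodemask);

    d->node_affinity = nodemask;
    spin_unlock(&d->node_affinity_lock);

    free_cpumask_var(cpumask);
}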


Juergen

+
      spin_lock(&d->node_affinity_lock);

      for_each_vcpu ( d, v )
-        cpumask_or(&cpumask,&cpumask, v->cpu_affinity);
+        cpumask_or(cpumask, cpumask, v->cpu_affinity);

      for_each_online_node ( node )
-        if ( cpumask_intersects(&node_to_cpumask(node),&cpumask) )
+        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
              node_set(node, nodemask);

      d->node_affinity = nodemask;
      spin_unlock(&d->node_affinity_lock);
+
+    free_cpumask_var(cpumask);
  }







--
Juergen Gross                 Principal Developer Operating Systems
PDG ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

