Re: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support

To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Mon, 14 Aug 2006 14:08:46 -0500
Cc: Ryan Harper <ryanh@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D5723C2@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D5723C2@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.6+20040907i
* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2006-08-14 13:56]:
> > > Adding support to enable separate masks for each VCPU isn't a bad
> > > idea, but we certainly don't want to break the behaviour of being
> > > able to set the mask for a domain.
> > 
> > This doesn't break the previous behavior, though maybe the
> > description or implementation is misleading.  We may have dropped
> > the behavior over time, as I seem to recall having a
> > cpumap_t/cpumask in the domain structure, but there isn't a
> > domain-wide cpumask anymore.  Instead there is a cpumask per vcpu.
> > The cpus parameter is used to restrict which physical cpus the
> > domain's vcpus can use.  This is done by mapping each vcpu to a
> > value from the list of physical cpus the domain can use.  The
> > side-effect of that is that the cpumask of each vcpu has only that
> > one cpu set, which prevents balancing when using the credit
> > scheduler.
> 
> The current code doesn't do what the comment in the example config file
> says. We should just fix the code to match the comment!

Certainly.  I'll sync them up.
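
To make the behavior above concrete, here is a minimal Python sketch
(illustrative only -- not the actual xend code; the helper names are
hypothetical) of the difference between mapping each vcpu to one entry
of the cpus list and giving every vcpu the whole list:

    # Illustrative only -- not the actual XendDomainInfo code.
    def pin_one_cpu_each(nr_vcpus, cpus):
        """What the current code effectively does: vcpu i gets a
        one-cpu mask, so the credit scheduler cannot migrate it."""
        return dict((v, [cpus[v % len(cpus)]]) for v in range(nr_vcpus))

    def share_full_mask(nr_vcpus, cpus):
        """What the config-file comment describes: every vcpu may run
        on any cpu in the list, so balancing still works."""
        return dict((v, list(cpus)) for v in range(nr_vcpus))

    # cpus = "0-3" in the domain config file:
    print(pin_one_cpu_each(4, [0, 1, 2, 3]))
    # {0: [0], 1: [1], 2: [2], 3: [3]}
    print(share_full_mask(4, [0, 1, 2, 3]))
    # {0: [0, 1, 2, 3], 1: [0, 1, 2, 3], ...}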

> 
> > Are you asking that we introduce, in addition to the per-vcpu
> > cpumask, another domain-wide mask that we would use to further
> > restrict the vcpu masks (think cpus_and(d->affinity, v->affinity))?
> > And have two config variables like below?
> 
> There's no need: just set all the vcpus to the same mask.
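
(For concreteness, the two-mask idea in the question above would amount
to something like the following Python-level sketch of
cpus_and(d->affinity, v->affinity) -- hypothetical, not code from the
patch:)

    # Hypothetical sketch of the two-mask idea: a domain-wide mask
    # intersected into each per-vcpu mask, the Python analogue of
    # cpus_and(d->affinity, v->affinity) in the hypervisor.
    def effective_affinity(domain_cpus, vcpu_cpus):
        """Return the cpus a vcpu may actually run on."""
        return sorted(set(domain_cpus) & set(vcpu_cpus))

    print(effective_affinity([0, 1, 2, 3], [2, 3, 4, 5]))  # [2, 3]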

OK.  It seems like I went a step too far.  I'll resend the simpler patch
of just repeating the same mask for each vcpu in the domain.  Are you
interested in the multi cpumask approach?  If so any thoughts on how
you'd like to see multiple cpumasks in the config file?
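
For example, one possible config syntax (purely illustrative, not an
agreed-upon format) would let cpus accept a list with one mask per vcpu:

    # current: one mask string, applied to every vcpu
    cpus = "0-3"

    # possible extension (illustrative only): one mask per vcpu
    cpus = ["0-1", "0-1", "2-3", "2-3"]    # vcpu0, vcpu1, vcpu2, vcpu3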

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

