This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support

To: "Ryan Harper" <ryanh@xxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Mon, 14 Aug 2006 19:55:08 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 14 Aug 2006 11:55:34 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Aca/0hKmJIuPqxUARtWgm3p1vTRRZwAAMCUg
Thread-topic: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support
> > Adding support to enable separate masks for each VCPU isn't a bad
> > idea, but we certainly don't want to break the behaviour of being able
> > to set the mask for a domain.
> This doesn't break the previous behavior, though maybe the description
> or implementation is misleading.  We may have dropped the behavior over
> time, as I seem to recall having a cpumap_t/cpumask in the domain
> structure, but there isn't a domain-wide cpumask anymore.  Instead
> there is a cpumask per vcpu.  The cpus parameter is used to restrict
> which physical cpus the domain's vcpus can use.  This is done by mapping
> each vcpu to a value from the list of physical cpus the domain can
> use.  The side-effect of that is that the cpumask of the vcpu has
> only that cpu set, which prevents balancing when using the credit
> scheduler.
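The mapping described above can be sketched roughly as follows (the function name and data shapes are hypothetical, not the actual xend code): each vcpu is pinned to a single cpu taken round-robin from the domain's cpus list, so every per-vcpu mask ends up with exactly one cpu set.

```python
def pin_vcpus_round_robin(nr_vcpus, cpus):
    """Sketch of the described behavior: map each vcpu to one physical
    cpu from the domain's cpus list, round-robin.  Returns a per-vcpu
    mask represented as a set of physical cpu numbers."""
    return {v: {cpus[v % len(cpus)]} for v in range(nr_vcpus)}
```

For example, with 4 vcpus and cpus=[0, 1] this yields {0: {0}, 1: {1}, 2: {0}, 3: {1}}: each mask is a singleton, so the credit scheduler has no freedom to migrate a vcpu between the allowed cpus, which is the balancing problem described above.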

The current code doesn't do what the comment in the example config file
says. We should just fix the code to match the comment!

> Are you asking that we introduce in addition to the per-vcpu cpumask
> another domain-wide mask that we would use to further restrict the
> masks (think cpus_and(d->affinity, v->affinity))?  And have two config
> variables like below?

There's no need: just set all the vcpus to the same mask.
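A sketch of that alternative (again with hypothetical names): instead of pinning each vcpu to one cpu, give every vcpu the whole restriction mask, leaving the scheduler free to balance within it.

```python
def set_shared_mask(nr_vcpus, cpus):
    """Sketch of Ian's suggestion: every vcpu gets the same mask,
    containing all cpus the domain is allowed to use."""
    mask = set(cpus)
    return {v: set(mask) for v in range(nr_vcpus)}
```

With 4 vcpus and cpus=[0, 1], every vcpu's mask is {0, 1}, so the cpus parameter still restricts the domain as the config comment promises, without forcing a fixed vcpu-to-cpu assignment.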

