
RE: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support


  • To: "Ryan Harper" <ryanh@xxxxxxxxxx>
  • From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
  • Date: Mon, 14 Aug 2006 19:55:08 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 14 Aug 2006 11:55:34 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Aca/0hKmJIuPqxUARtWgm3p1vTRRZwAAMCUg
  • Thread-topic: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support

> > Adding support to enable separate masks for each VCPU isn't a bad
> > idea, but we certainly don't want to break the behaviour of being
> > able to set the mask for a domain.
> 
> This doesn't break the previous behavior, though maybe the description
> or implementation is misleading.  We may have dropped the behavior over
> time, as I seem to recall having a cpumap_t/cpumask in the domain
> structure, but there isn't a domain-wide cpumask anymore.  Instead
> there is a cpumask per vcpu.  The cpus parameter is used to restrict
> which physical cpus the domain's vcpus can use.  This is done by
> mapping each vcpu to a value from the list of physical cpus the domain
> can use.  The side-effect of that is that the cpumask of the vcpu has
> only that cpu set, which prevents balancing when using the credit
> scheduler.

The current code doesn't do what the comment in the example config file
says. We should just fix the code to match the comment!
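
To illustrate the difference (a rough Python sketch only, not the actual
xend code; the function names and the exact vcpu-to-cpu mapping are made
up for illustration), the current pinning behaviour versus the behaviour
the config comment describes looks roughly like:

    # Hypothetical sketch, assuming `cpus` has already been parsed into
    # a list of physical cpu numbers, e.g. "0-3" -> [0, 1, 2, 3].

    def pin_vcpus_one_each(cpus, nr_vcpus):
        # What the code does today: each vcpu is mapped to a single cpu
        # taken from the list (round-robin here, for illustration), so
        # its whole affinity mask is that one cpu and the credit
        # scheduler has nowhere to move it.
        return {v: [cpus[v % len(cpus)]] for v in range(nr_vcpus)}

    def mask_vcpus_with_list(cpus, nr_vcpus):
        # What the config comment describes: every vcpu keeps the full
        # list as its affinity mask, so the domain is restricted to
        # those cpus but the scheduler can still balance within them.
        return {v: list(cpus) for v in range(nr_vcpus)}

    print(pin_vcpus_one_each([0, 1, 2, 3], 2))    # {0: [0], 1: [1]}
    print(mask_vcpus_with_list([0, 1, 2, 3], 2))  # {0: [0, 1, 2, 3], 1: [0, 1, 2, 3]}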

> Are you asking that we introduce, in addition to the per-vcpu cpumask,
> another domain-wide mask that we would use to further restrict the
> vcpu masks (think cpus_and(d->affinity, v->affinity))?  And have two
> config variables like below?

There's no need: just set all the vcpus to the same mask.
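
Roughly, as a sketch (set_vcpu_affinity here is just a made-up stand-in
for whatever pinning call xend uses, not a real function name):

    def restrict_domain(dom, nr_vcpus, mask, set_vcpu_affinity):
        # Give every vcpu the same affinity mask, e.g. mask = [0, 1, 2, 3].
        # That restricts the whole domain to those physical cpus while
        # still letting the credit scheduler balance the vcpus among them.
        for v in range(nr_vcpus):
            set_vcpu_affinity(dom, v, mask)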

Thanks,
Ian



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

