Re: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support
* Ryan Harper <ryanh@xxxxxxxxxx> [2006-08-14 14:34]:
> * Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2006-08-14 13:56]:
> > > > Adding support to enable separate masks for each VCPU isn't a bad
> > > > idea, but we certainly don't want to break the behaviour of being
> > > > able to set the mask for a domain.
> > >
> > > This doesn't break the previous behavior, though maybe the
> > > description or implementation is misleading. We may have dropped the
> > > behavior over time, as I seem to recall having a cpumap_t/cpumask in
> > > the domain structure, but there isn't a domain-wide cpumask anymore.
> > > Instead there is a cpumask per vcpu. The cpus parameter is used to
> > > restrict which physical cpus the domain's vcpus can use. This is
> > > done by mapping each vcpu to a value from the list of physical cpus
> > > the domain can use. The side-effect of that is that the cpumask of
> > > the vcpu has only that cpu set, which prevents balancing when using
> > > the credit scheduler.
> > The current code doesn't do what the comment in the example config file
> > says. We should just fix the code to match the comment!
> Certainly. I'll sync them up.
> > > Are you asking that we introduce, in addition to the per-vcpu
> > > cpumask, another domain-wide mask that we would use to further
> > > restrict the vcpu masks (think cpus_and(d->affinity, v->affinity))?
> > > And have two config variables like below?
> > There's no need: just set all vcpus to the same mask.
> OK. It seems like I went a step too far. I'll resend the simpler patch
> of just repeating the same mask for each vcpu in the domain. Are you
Here is the simple patch that applies the specified cpumask to each
vcpu. There is a bit of a behavior change for sedf users: Xen picks the
first bit in the mask when allocating vcpus, leaving all of the domain's
vcpus on the first cpu in the mask, so manual pinning is required to
spread them across the mask.
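To make the difference concrete, here is a hypothetical standalone sketch
(illustrative names only, not the xend API) of the two pinning strategies:
the old code mapped each vcpu round-robin to a single cpu from the list,
while the patched code hands every vcpu the full mask so the credit
scheduler can balance within it.

```python
def old_pinning(cpus, max_vcpu_id):
    """Old behavior: pin each vcpu to one cpu, round-robin over the list."""
    return {v: [int(cpus[v % len(cpus)])]
            for v in range(max_vcpu_id + 1)}

def new_pinning(cpus, max_vcpu_id):
    """Patched behavior: give every vcpu the full mask for balancing."""
    return {v: list(cpus) for v in range(max_vcpu_id + 1)}
```

With cpus = [2, 3] and four vcpus, the old scheme yields single-cpu
affinities {0: [2], 1: [3], 2: [2], 3: [3]}, while the new scheme gives
every vcpu [2, 3].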
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253 T/L: 678-9253
XendDomainInfo.py | 7 ++-----
1 files changed, 2 insertions(+), 5 deletions(-)
Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
# HG changeset patch
# User Ryan Harper <ryanh@xxxxxxxxxx>
# Date 1155405919 18000
# Node ID 83fd301be2d6ea464079044406c3815fd7ae0796
# Parent f328519053f5a444af475ec10dc8089a0b176e3f
Apply the domain cpumask to each vcpu rather than mapping vcpus to cpus in the
list. This is more inline with the comments for the cpus parameter and also
allows the credit scheduler to balance vcpus within the domain cpumask.
diff -r f328519053f5 -r 83fd301be2d6 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py Mon Aug 14 10:58:02 2006 +0100
+++ b/tools/python/xen/xend/XendDomainInfo.py Sat Aug 12 13:05:19 2006 -0500
@@ -1272,12 +1272,9 @@ class XendDomainInfo:
         # repin domain vcpus if a restricted cpus list is provided
         # this is done prior to memory allocation to aide in memory
         # distribution for NUMA systems.
-        cpus = self.info['cpus']
-        if cpus is not None and len(cpus) > 0:
+        if self.info['cpus'] is not None and len(self.info['cpus']) > 0:
             for v in range(0, self.info['max_vcpu_id']+1):
-                # pincpu takes a list of ints
-                cpu = [ int( cpus[v % len(cpus)] ) ]
-                xc.vcpu_setaffinity(self.domid, v, cpu)
+                xc.vcpu_setaffinity(self.domid, v, self.info['cpus'])

         # set domain maxmem in KiB
         xc.domain_setmaxmem(self.domid, self.info['maxmem'] * 1024)