
Re: [Xen-devel] [PATCH 1/5] x86: allow specifying the NUMA nodes Dom0 should run on



On Fri, 2015-02-27 at 10:50 +0000, Jan Beulich wrote:
> >>> On 27.02.15 at 11:04, <dario.faggioli@xxxxxxxxxx> wrote:
> > On Fri, 2015-02-27 at 08:46 +0000, Jan Beulich wrote:

> >> This way behavior doesn't change if internally in the hypervisor we
> >> need to change the mapping from PXMs to node IDs.
> >> 
> > Ok, I see the value of this. I'm still a bit concerned about the fact
> > that everything else "speaks" NUMA node IDs, but it's probably just me
> > being much more used to those than to PXMs. :-)
> 
> With "everything else" I suppose you mean the tool stack? There
> shouldn't be any node IDs kept across reboots there. Yet the
> consistent behavior to be achieved here is particularly for multiple
> boots.
> 
Sure. I was thinking more of the inconsistency "in the user's mind", as
he'll have to deal with PXMs when configuring Dom0, and with node IDs
after boot... but again, maybe it's only me.
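
Just to make the idea concrete, here is a rough sketch of what I imagine
the PXM handling looking like (hypothetical names, e.g. dom0_pxms[] and
dom0_nodes_mask; pxm_to_node() is the existing SRAT helper; this is not
the actual patch):

/* Sketch only: dom0_pxms[] / dom0_nr_pxms are made-up names for the
 * parsed "dom0_nodes=" values. */
static nodemask_t __initdata dom0_nodes_mask;

static void __init map_dom0_pxms(const unsigned int *dom0_pxms,
                                 unsigned int dom0_nr_pxms)
{
    unsigned int i;

    for ( i = 0; i < dom0_nr_pxms; i++ )
    {
        nodeid_t node = pxm_to_node(dom0_pxms[i]);

        /* Translating at boot keeps behavior stable even if the
         * hypervisor's PXM -> node ID mapping changes. */
        if ( node != NUMA_NO_NODE )
            node_set(node, dom0_nodes_mask);
    }
}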

> >> I'm simply adjusting what sched_init_vcpu() did, which is to alter
> >> hard affinity conditionally on is_pinned and soft affinity
> >> unconditionally.
> >> 
> > Ok, I understand the idea behind this better now, thanks.
> > [...]
> > Setting soft affinity as a superset of (in the former case) or equal
> > to (in the latter) hard affinity is just pure overhead once we are in
> > the scheduler.
> 
> Then why does sched_init_vcpu() do what it does? If you want to
> alter that, I'm fine with altering it here.
> 
It does that, but, in there, soft affinity is unconditionally set to
'all bits set'. Then, in the scheduler, if we find out that the soft
affinity mask is fully set, we just skip the soft affinity balancing
step.

The idea is that, whether the mask is full because no one touched the
default, or because it has been manually set that way, there is nothing
to do at the soft affinity balancing level.
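
In sketched form, the check amounts to something like this (not the
actual scheduler code, just its shape; cpumask_full() and the vcpu
fields are as in the hypervisor of that time):

/* Sketch of the short-circuit described above: a fully-set soft mask,
 * default or hand-set, means the balancing step would be a no-op. */
static inline bool_t soft_affinity_matters(const struct vcpu *v)
{
    return !cpumask_full(v->cpu_soft_affinity);
}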

So you are actually right: rather than not touching soft affinity, as I
said in the previous email, I think we should set hard affinity
conditionally on is_pinned, as in the patch, and then unconditionally
set soft affinity to all bits, as in sched_init_vcpu().
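
That is, something along these lines (only a sketch; dom0_cpus stands
for a hypothetical mask computed from the dom0_nodes option):

/* Sketch, mirroring sched_init_vcpu(): restrict hard affinity when
 * pinning is in effect, otherwise use the Dom0 node mask, and leave
 * soft affinity fully set so the balancing step stays a no-op. */
if ( v->domain->is_pinned )
    cpumask_copy(v->cpu_hard_affinity, cpumask_of(v->processor));
else
    cpumask_copy(v->cpu_hard_affinity, &dom0_cpus);

cpumask_setall(v->cpu_soft_affinity);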

> > Then, if we want to make it possible to tweak soft affinity, we can
> > allow for something like "dom0_nodes=soft:1,3" and, in that case, alter
> > soft affinity only.
> 
> Hmm, not sure. And I keep being confused whether soft means
> "allow" and hard means "prefer" or the other way around. 
>
"hard" means allow (or not allow)
"soft" means prefer

> In any
> event, again, with sched_init_vcpu() setting things up so that
> soft is a superset of hard (and most likely they're equal), I don't
> see why doing the same here would be more of a problem.
> 
Indeed, sorry, my bad. When talking about soft being a superset, I forgot
to mention the special casing we grant to the situation where the soft
mask is fully set.

Using cpumask_setall() here, as done in sched_init_vcpu(), would avoid
incurring the pointless soft affinity balancing overhead.

Regards,
Dario
