
Re: [Xen-devel] [PATCH 13/60] xen/sched: move some per-vcpu items to struct sched_unit



On Tue, 2019-07-02 at 07:54 +0000, Jan Beulich wrote:
> On 02.07.2019 08:30, Juergen Gross wrote:
> > On 01.07.19 17:46, Jan Beulich wrote:
> > > 
> > > Hmm, that's indeed what I was deducing, but how will we sell this
> > > to people actually fiddling with vCPU affinities? I foresee getting
> > > bug reports that the respective xl command(s) do(es)n't do anymore
> > > what it used to do.
> > 
> > The new behavior must be documented, sure.
> 
> Documentation is just one aspect. Often enough people only read docs
> when wanting to introduce new functionality, which I consider a fair
> model. Such people will be caught by surprise that the pinning
> behavior does not work the same way anymore.
> 
That is indeed the case, and we need to think about how to address it,
I agree.

> And again - if someone pins every vCPU to a single pCPU, that last
> such pinning operation will be what takes long term effect. Aiui all
> vCPU-s in the unit will then be pinned to that one pCPU, i.e.
> they'll either all compete for the one pCPU's time, or only one of
> them will ever get scheduled.
> 
I'm not sure I'm getting this. On, say, an SMT system with 4 threads
per core, a unit is 4 vCPUs and a pCPU is 4 threads.

If we pin all 4 vCPUs of a unit to one 4-thread pCPU, each vCPU
will get a thread.

Isn't it so?
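
To make the two readings concrete, here is a minimal C sketch (purely
hypothetical types and function names, not the actual Xen data structures
or hypercall paths). It assumes affinity is stored once per sched_unit, so
successive per-vCPU pinning calls just overwrite the unit's mask and the
last one wins, while at dispatch time the unit occupies a whole core and
each of its vCPUs runs on one sibling thread:

/* Hypothetical sketch -- NOT the real Xen structures or API. */
#include <stdio.h>

#define VCPUS_PER_UNIT 4   /* e.g. one unit per core, 4 threads per core */

struct sched_unit {
    unsigned long hard_affinity;      /* one bitmask for the whole unit  */
    int vcpu_ids[VCPUS_PER_UNIT];
};

/*
 * With per-unit affinity, "pinning a vCPU" can only adjust the mask of
 * the unit that vCPU belongs to: the last call overwrites earlier ones.
 */
static void vcpu_set_hard_affinity(struct sched_unit *u, int vcpu,
                                   unsigned long mask)
{
    (void)vcpu;               /* the per-vCPU distinction is lost here    */
    u->hard_affinity = mask;  /* last pinning operation wins for the unit */
}

/*
 * At dispatch time the unit is given a whole core; each vCPU of the
 * unit then runs on one sibling thread of that core.
 */
static void dispatch_unit(const struct sched_unit *u, int core,
                          int threads_per_core)
{
    for (int i = 0; i < VCPUS_PER_UNIT && i < threads_per_core; i++)
        printf("vCPU %d -> core %d, thread %d\n",
               u->vcpu_ids[i], core, i);
}

int main(void)
{
    struct sched_unit u = { .vcpu_ids = { 0, 1, 2, 3 } };

    /* Pin "each vCPU to its own pCPU", as a user might try with xl: */
    vcpu_set_hard_affinity(&u, 0, 1UL << 0);
    vcpu_set_hard_affinity(&u, 1, 1UL << 1);
    vcpu_set_hard_affinity(&u, 2, 1UL << 2);
    vcpu_set_hard_affinity(&u, 3, 1UL << 3);

    /* Only the last mask (CPU 3) survives for the whole unit. */
    printf("unit hard affinity mask: 0x%lx\n", u.hard_affinity);

    /* If the granularity is a full core, the unit still spreads its
     * vCPUs over the sibling threads of whichever core it lands on.  */
    dispatch_unit(&u, 3, 4);
    return 0;
}

If that per-core granularity is indeed how the series models things, then
even after such a sequence of pinning calls the unit's vCPUs end up spread
across the siblings of one core, which is the scenario I describe above.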

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


