
Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a runqueue



On Wed, 2020-05-27 at 08:17 +0200, Jan Beulich wrote:
> On 27.05.2020 00:00, Dario Faggioli wrote:
> > 
> > Cache-oriented runqueue organization will be the subject of
> > another patch series, and that's why I kept them out. However,
> > that's a rather special case with a lot in common with SMT...
> 
> I didn't think of cache sharing in particular, but about the
> concept of compute units vs hyperthreads in general.
> 
Ok.

> > Just in case, is there a
> > way to identify them easily, like with a mask or something, in the
> > code
> > already?
> 
> cpu_sibling_mask still gets used for both, so there's no mask
> to use. As per set_cpu_sibling_map() you can look at
> cpu_data[].compute_unit_id to tell, but that's of course x86-
> specific (as is the entire compute unit concept).
> 
Right. And thanks for the pointers.
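
(For the archive: if I am reading set_cpu_sibling_map() right, the
x86-only check you mention would boil down to something like the
below. Just my reading, and the helper name is made up, not code from
the tree:)

    /*
     * x86 only; per my reading of set_cpu_sibling_map(): same package
     * and same compute unit ID means the two CPUs are CU siblings.
     * Helper name invented for the sake of the example.
     */
    static bool cpus_are_cu_siblings(unsigned int c1, unsigned int c2)
    {
        return cpu_data[c1].phys_proc_id == cpu_data[c2].phys_proc_id &&
               cpu_data[c1].compute_unit_id == cpu_data[c2].compute_unit_id;
    }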

But then, what I understand from looking there is that I can indeed
use (again, appropriately wrapped) x86_num_siblings to tell, in this
function, whether a CPU has any HT (Intel) or CU (AMD) siblings and,
if so, how many it has in total, even though some of them may
currently be offline.

Which means I will be treating HTs and CUs the same, which, thinking
more about it (and thinking specifically of CUs, rather than of any
cache-sharing relationship), does make sense for this feature.
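
For concreteness, this is the kind of (appropriately wrapped) check I
have in mind. Just a sketch: the helper name is invented and the
non-x86 fallback is an assumption, but on x86 it would read
x86_num_siblings as said above:

    /* Sketch only: the helper name is made up, not existing code. */
    static unsigned int sched_nr_siblings(unsigned int cpu)
    {
    #ifdef CONFIG_X86
        /*
         * x86_num_siblings counts both HT (Intel) and CU (AMD)
         * siblings, including ones that are currently offline.
         */
        return cpu_data[cpu].x86_num_siblings;
    #else
        /* Assumption: no SMT-like topology on other architectures. */
        return 1;
    #endif
    }

That way, credit2 would not need to care whether the siblings come
from hyperthreading or from compute units.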

Does this make sense, or am I missing or misinterpreting anything?

> > So I think I will demote this printk to a XENLOG_DEBUG one (and
> > will also get rid of others that were already DEBUG, but not
> > super useful, after some more thinking).
> 
> Having seen Jürgen's reply as well as what you further wrote below,
> I'd still like to point out that even XENLOG_DEBUG isn't quiet
> enough: there may be some value in such a debugging message to you,
> but it may be mainly spam to, e.g., me, as I still need to run with
> loglvl=all in the common case. Let's not forget, the context in
> which the underlying topic came up was pretty-many-core AMD CPUs.
> 
Good point indeed about DEBUG potentially being an issue as well. So
yes, as announced in my reply to Juergen, I am going with the recap
in cpupool_init().

However, that looks like it requires a new hook in the scheduler's
interface, as the information is scheduler specific, but at the same
time I don't think we want it printed by either dump_settings() or
dump_cpu_state(), which are all we currently have... :-/
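
Just to give an idea, I am imagining something along these lines
(purely illustrative: the hook name and the call-site details are
invented):

    /*
     * A new hook in struct scheduler, for a one-off recap of the
     * scheduler's own configuration (name made up for the example):
     */
    void (*dump_config)(const struct scheduler *ops);

    /* Which cpupool_init() would then invoke, e.g.: */
    if ( ops->dump_config )
        ops->dump_config(ops);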

I'll see about it.

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
