Re: [Xen-devel] [PATCH 3/3] x86/smt: Support for enabling/disabling SMT at runtime



>>> On 02.04.19 at 21:57, <andrew.cooper3@xxxxxxxxxx> wrote:
> Currently, a user can combine the output of `xl info -n`, the ACPI tables,
> and some manual CPUID data to figure out which CPU numbers to feed into
> `xen-hptool cpu-offline` to effectively disable SMT at runtime.
> 
> A more convenient option is to teach Xen how to perform this action.
> 
> First of all, extend XEN_SYSCTL_cpu_hotplug with two new operations.
> Introduce new smt_{up,down}_helper() functions which wrap the
> cpu_{up,down}_helper() helpers with logic which understands siblings based on
> their APIC_ID.
> 
> Add libxc stubs, and extend xen-hptool with smt-{enable,disable} options.
> These are intended to be shorthands for a loop over cpu-{online,offline}.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> Slightly RFC.  I'm not very happy with the continuation situation, but -EBUSY
> is the preexisting style and it seems like it is the only option from tasklet
> context.

Well, offloading the re-invocation to the caller isn't really nice.
Looking at the code, is there any reason why this couldn't use
the usual -ERESTART / hypercall_create_continuation()? This
would require a little bit of re-work, in particular to allow
passing the vCPU into hypercall_create_continuation(), but
beyond that I can't see any immediate obstacles. Though
clearly I wouldn't make this a prerequisite for the work here.
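
For reference, the usual (non-tasklet) pattern I have in mind looks
roughly like the below; this is illustrative only, since today the
continuation machinery acts on "current", which is exactly why -EBUSY
is the only option from tasklet context without such a rework:

    /*
     * Common -ERESTART pattern in a regular hypercall handler: rather
     * than bouncing -EBUSY all the way to the toolstack, the hypercall
     * is transparently restarted by the guest.
     */
    if ( ret == -ERESTART )
        ret = hypercall_create_continuation(__HYPERVISOR_sysctl,
                                            "h", u_sysctl);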

> Is it intentional that we can actually online and offline processors beyond
> maxcpus?  This is a consequence of the cpu parking logic.

I think so, yes. That's meant to be a boot-time limit only, imo.
The runtime limit is nr_cpu_ids.

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -60,7 +60,7 @@ static bool __initdata opt_nosmp;
>  boolean_param("nosmp", opt_nosmp);
>  
>  /* maxcpus: maximum number of CPUs to activate. */
> -static unsigned int __initdata max_cpus;
> +unsigned int max_cpus;
>  integer_param("maxcpus", max_cpus);

As per above I don't think this change should be needed or
wanted, but if it is kept for whatever reason, wouldn't the variable
be better declared __read_mostly?

> --- a/xen/arch/x86/sysctl.c
> +++ b/xen/arch/x86/sysctl.c
> @@ -114,6 +114,92 @@ long cpu_down_helper(void *data)
>      return ret;
>  }
>  
> +static long smt_up_helper(void *data)
> +{
> +    unsigned int cpu, sibling_mask =
> +        (1u << (boot_cpu_data.x86_num_siblings - 1)) - 1;

I don't think this is quite right for configurations with more than two
threads per core. In detect_extended_topology() terms, don't you simply
mean (1u << ht_mask_width) - 1 here, i.e. just
boot_cpu_data.x86_num_siblings - 1 (without any shifting)?
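
I.e. a minimal sketch of the expression I mean, assuming x86_num_siblings
is the full thread count per core (i.e. 1u << ht_mask_width):

    /*
     * With 2 threads/core both expressions give 0x1, but with 4
     * threads/core the patch's expression yields 0x7, whereas the mask
     * of the thread bits within the APIC ID should be 0x3.
     */
    unsigned int sibling_mask = boot_cpu_data.x86_num_siblings - 1;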

> +    int ret = 0;
> +
> +    if ( !cpu_has_htt || !sibling_mask )
> +        return -EOPNOTSUPP;

Why not put the first part of the check right into the sysctl
handler?

> +    opt_smt = true;

Perhaps also bail early when the variable already has the
designated value? And again perhaps right in the sysctl
handler?

> +    for_each_present_cpu ( cpu )
> +    {
> +        if ( cpu == 0 )
> +            continue;

Is this special case really needed? If so, perhaps worth a brief
comment?

> +        if ( cpu >= max_cpus )
> +            break;
> +
> +        if ( x86_cpu_to_apicid[cpu] & sibling_mask )
> +            ret = cpu_up_helper(_p(cpu));

Shouldn't this be restricted to CPUs a sibling of which is already
online? And widened at the same time, to also online thread 0
if one of the other threads is already online?
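
A rough shape of such a sibling check might be the below (purely
illustrative; the helper name is made up, and it keys off APIC IDs since
not-yet-onlined CPUs won't have their sibling masks populated):

    /*
     * Illustrative only: true if any other thread on the same core as
     * 'cpu' is already online, comparing APIC IDs with the thread bits
     * masked off.  'sibling_mask' is the (corrected) thread-bit mask
     * discussed above.
     */
    static bool core_has_online_sibling(unsigned int cpu,
                                        unsigned int sibling_mask)
    {
        unsigned int other;

        for_each_online_cpu ( other )
            if ( other != cpu &&
                 (x86_cpu_to_apicid[other] & ~sibling_mask) ==
                 (x86_cpu_to_apicid[cpu] & ~sibling_mask) )
                return true;

        return false;
    }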

Also any reason you use _p() here but not in patch 2?

> +static long smt_down_helper(void *data)
> +{
> +    unsigned int cpu, sibling_mask =
> +        (1u << (boot_cpu_data.x86_num_siblings - 1)) - 1;
> +    int ret = 0;
> +
> +    if ( !cpu_has_htt || !sibling_mask )
> +        return -EOPNOTSUPP;
> +
> +    opt_smt = false;
> +
> +    for_each_present_cpu ( cpu )
> +    {
> +        if ( cpu == 0 )
> +            continue;
> +        if ( cpu >= max_cpus )
> +            break;
> +
> +        if ( x86_cpu_to_apicid[cpu] & sibling_mask )
> +            ret = cpu_down_helper(_p(cpu));

Similarly here, wouldn't it be better to skip this if it would offline
the last thread of a core?

I also notice that the two functions are extremely similar, and
hence it might be worth considering folding them, with the
caller controlling the behavior via the so far unused function
parameter (at which point the related remark of mine on patch
2 would become inapplicable).
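
Something along these lines, perhaps (names illustrative, and the
refinements discussed above intentionally omitted):

    /*
     * Sketch of folding the two helpers: the previously unused 'data'
     * parameter selects between onlining and offlining the siblings.
     * The caller would pass e.g. _p(1) for enable and NULL for disable.
     */
    static long smt_updown_helper(void *data)
    {
        bool up = data != NULL;
        unsigned int cpu, sibling_mask = boot_cpu_data.x86_num_siblings - 1;
        int ret = 0;

        if ( !cpu_has_htt || !sibling_mask )
            return -EOPNOTSUPP;

        opt_smt = up;

        for_each_present_cpu ( cpu )
        {
            if ( cpu >= max_cpus )
                break;

            if ( x86_cpu_to_apicid[cpu] & sibling_mask )
                ret = up ? cpu_up_helper(_p(cpu)) : cpu_down_helper(_p(cpu));

            if ( ret )
                break;
        }

        return ret;
    }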

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -246,8 +246,17 @@ struct xen_sysctl_get_pmstat {
>  struct xen_sysctl_cpu_hotplug {
>      /* IN variables */
>      uint32_t cpu;   /* Physical cpu. */
> +
> +    /* Single CPU enable/disable. */
>  #define XEN_SYSCTL_CPU_HOTPLUG_ONLINE  0
>  #define XEN_SYSCTL_CPU_HOTPLUG_OFFLINE 1
> +
> +    /*
> +     * SMT enable/disable. Caller must zero the 'cpu' field to begin, and
> +     * ignore it on completion.
> +     */
> +#define XEN_SYSCTL_CPU_HOTPLUG_SMT_ENABLE  2
> +#define XEN_SYSCTL_CPU_HOTPLUG_SMT_DISABLE 3

Is the "cpu" field constraint mentioned in the comment just a
precaution? I can't see you encode anything into that field, or
use it upon getting re-invoked. I assume that's because of the
expectation that only actual onlining/offlining would potentially
take long, while iterating over all present CPUs without further
action ought to be fast enough.

Jan

