
Re: [Xen-devel] [PATCH] xen/arm: Park CPUs with a MIDR different from the boot CPU.



On 1 February 2018 at 19:37, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> On Tue, 30 Jan 2018, Julien Grall wrote:
>> Xen does not properly support big.LITTLE platforms. All vCPUs of a guest
>> will always have the MIDR of the boot CPU (see arch_domain_create).
>> At best the guest may see unreliable performance (vCPU switching between
>> big and LITTLE), at worst the guest will become unreliable or insecure.
>>
>> This is becoming more apparent with branch predictor hardening in Linux,
>> because those mitigations target specific kinds of CPUs and may not work
>> on other CPUs.
>>
>> For the time being, park any CPUs with a MIDR different from the boot
>> CPU. This will be revisited in the future once Xen gains understanding
>> of big.LITTLE.
>>
>> [1] 
>> https://lists.xenproject.org/archives/html/xen-devel/2016-12/msg00826.html
>>
>> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
>>
>> ---
>>
>> We probably want to backport this as part of XSA-254. Using big.LITTLE
>> on Xen has never been supported, but we never made that clear. This is
>> becoming more apparent with code targeting specific CPUs.
>> ---
>>  xen/arch/arm/smpboot.c | 15 +++++++++++++++
>>  1 file changed, 15 insertions(+)
>>
>> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
>> index 1255185a9c..2c2815f9ee 100644
>> --- a/xen/arch/arm/smpboot.c
>> +++ b/xen/arch/arm/smpboot.c
>> @@ -292,6 +292,21 @@ void start_secondary(unsigned long boot_phys_offset,
>>
>>      init_traps();
>>
>> +    /*
>> +     * Currently Xen assumes the platform has only one kind of CPU.
>> +     * This assumption does not hold on big.LITTLE platforms and may
>> +     * result in instability. Better to park those CPUs for now.
>> +     *
>> +     * TODO: Add big.LITTLE support.
>> +     */
>> +    if ( current_cpu_data.midr.bits != boot_cpu_data.midr.bits )
>> +    {
>> +        printk(XENLOG_ERR "CPU%u MIDR (0x%x) does not match boot CPU MIDR (0x%x).\n",
>> +               smp_processor_id(), current_cpu_data.midr.bits,
>> +               boot_cpu_data.midr.bits);
>> +        stop_cpu();
>> +    }
>
> I understand that this patch is the right thing to do from a correctness
> perspective, especially in regards to the SP2 mitigation.
>
> At the same time I would also like to give the option for people that
> want to use big.LITTLE with cpupools / cpu pinning to do so if they
> really want to, but I am not sure what to suggest.
>
> Could we introduce a command line option to proceed anyway? But then the
> system would be vulnerable to SP2 on the CPUs different from the boot CPU.
> Could we make the SP2 mitigation work on big.LITTLE, or is it too much
> trouble? Do you have any other ideas or thoughts about this?

This patch is here to prevent spreading instability and insecurity, and to
stop giving the impression that we support big.LITTLE.

Even outside of SP2, there is potential for instability, because CPU errata
would not be applied correctly in the guest, or because Xen has no way to
know that some CPUs may have a different cacheline size...
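
To make the cacheline point concrete: the line size a core advertises comes
from CTR_EL0, and big and LITTLE cores can legitimately report different
values. A minimal sketch, assuming Xen's READ_SYSREG32 accessor; the helper
name is mine, not something in the tree:

    /* Sketch only: DminLine (CTR_EL0 bits [19:16]) is the log2 of the
     * number of 4-byte words in the smallest D-cache line, so the line
     * size in bytes is 4 << DminLine. A guest that reads this on one
     * core and is then scheduled on a core with a smaller line may do
     * cache maintenance with the wrong stride. */
    static unsigned int dcache_line_size(void)
    {
        uint32_t ctr = READ_SYSREG32(CTR_EL0);

        return 4U << ((ctr >> 16) & 0xf);
    }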

I want to put an end to the idea that Xen might support big.LITTLE.

The first thing to look at is the VPIDR (virtual MIDR): at the moment we
always use the boot CPU's MIDR. What would you choose instead? The MIDR of
the CPU where the hypercall happens?
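
For reference, the current behaviour is roughly the following. This is a
sketch from memory (field names such as arch.vpidr may not match the tree
exactly), not a verbatim quote:

    /* At domain creation every guest is stamped with the boot CPU's
     * MIDR (this is the arch_domain_create behaviour the commit
     * message refers to). */
    d->arch.vpidr = boot_cpu_data.midr.bits;

    /* On every context switch that value is loaded into VPIDR_EL2, so
     * EL1 reads of MIDR_EL1 return the boot CPU's MIDR no matter which
     * physical CPU the vCPU is actually running on. */
    WRITE_SYSREG32(next->domain->arch.vpidr, VPIDR_EL2);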

There is no shortcut for big.LITTLE. The right thing is to implement what has
been discussed in the design document written by Dario. But that's a new
feature and would require some work to do it properly.

A command line option might be a good idea, but I would rather delay that
and see who screams about it (a rough sketch of what such an option could
look like is below).

My hunch is that not many people will scream, because today they tend to
disable one set of CPUs directly in the DT.
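
For what it's worth, if we did add such an opt-out later, it could take a
shape like the following. This is purely hypothetical: the option name and
the flag are invented here, and nothing like this exists at the time of
writing. boolean_param() is the standard Xen mechanism for a true/false
command line option.

    /* Hypothetical: both the name "hmp-unsafe" and the flag are made
     * up for illustration. */
    static bool __read_mostly opt_hmp_unsafe;
    boolean_param("hmp-unsafe", opt_hmp_unsafe);

    /* The MIDR check in start_secondary() would then park the CPU by
     * default, but only warn when the user explicitly opted in. */
    if ( current_cpu_data.midr.bits != boot_cpu_data.midr.bits )
    {
        if ( !opt_hmp_unsafe )
        {
            printk(XENLOG_ERR "CPU%u MIDR (0x%x) does not match boot CPU MIDR (0x%x).\n",
                   smp_processor_id(), current_cpu_data.midr.bits,
                   boot_cpu_data.midr.bits);
            stop_cpu();
        }
        else
            printk(XENLOG_WARNING
                   "CPU%u MIDR (0x%x) differs from boot CPU, continuing as requested (hmp-unsafe).\n",
                   smp_processor_id(), current_cpu_data.midr.bits);
    }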

Cheers,

-- 
Julien Grall
