
Re: [Xen-devel] [PATCH 5/7] xen/arm: vsmc: Implement SMCCC_ARCH_WORKAROUND_1 BP hardening support



Hi,

On 6 February 2018 at 20:12, Julien Grall <julien.grall@xxxxxxx> wrote:
> On 02/06/2018 04:23 PM, Volodymyr Babchuk wrote:
>>
>> Hi,
>
>
> Hi,
>
>> On 5 February 2018 at 15:20, Julien Grall <julien.grall@xxxxxxx> wrote:
>>>
>>> SMCCC 1.1 offers firmware-based CPU workarounds. In particular,
>>> SMCCC_ARCH_WORKAROUND_1 provides BP hardening for variant 2 of XSA-254
>>> (CVE-2017-5715).
>>>
>>> If the hypervisor has some mitigation for this issue, report that we
>>> deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the hypervisor
>>> workaround on every guest exit.
>>
>> Just to be sure: is there some way to disable this workaround?
>
>
> In which context? If the platform does not have any processor affected by
> variant 2, then the workaround will not be enabled.
Yes, right. I missed the CPU caps check below.

> In the case of Linux, this workaround will only be called on affected
> processors.
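
For completeness, the guest-side discovery sequence (what Linux does on
affected cores) would look roughly like this. This is a minimal
bare-metal AArch64 sketch assuming an SMC conduit; smc_call() and the
FID constants are illustrative, not taken from any particular tree:

    #include <stdint.h>

    #define SMCCC_VERSION_FID         0x80000000u
    #define SMCCC_ARCH_FEATURES_FID   0x80000001u
    #define SMCCC_ARCH_WORKAROUND_1   0x80008000u

    /* Hypothetical helper: issue a fast SMC, return the int32 in w0. */
    static int32_t smc_call(uint32_t fid, uint32_t arg0)
    {
        register uint64_t x0 asm("x0") = fid;
        register uint64_t x1 asm("x1") = arg0;

        asm volatile("smc #0"
                     : "+r" (x0), "+r" (x1)
                     :
                     : "x2", "x3", "memory");

        return (int32_t)x0;
    }

    static void harden_branch_predictor(void)
    {
        /* SMCCC_ARCH_FEATURES only exists from SMCCC 1.1 onwards. */
        if ( smc_call(SMCCC_VERSION_FID, 0) < 0x10001 )
            return;

        /* A return >= 0 means SMCCC_ARCH_WORKAROUND_1 is implemented. */
        if ( smc_call(SMCCC_ARCH_FEATURES_FID, SMCCC_ARCH_WORKAROUND_1) < 0 )
            return;

        /* Invoke the BP hardening sequence; no return value to check. */
        smc_call(SMCCC_ARCH_WORKAROUND_1, 0);
    }

So a guest that never probes, or that gets -1 back from ARCH_FEATURES,
simply never issues the workaround call.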
>
>
>>
>>
>>>
>>> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>

>>> ---
>>>   xen/arch/arm/vsmc.c         | 22 ++++++++++++++++++++--
>>>   xen/include/asm-arm/smccc.h |  6 ++++++
>>>   2 files changed, 26 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
>>> index a708aa5e81..144a1cd761 100644
>>> --- a/xen/arch/arm/vsmc.c
>>> +++ b/xen/arch/arm/vsmc.c
>>> @@ -18,6 +18,7 @@
>>>   #include <xen/lib.h>
>>>   #include <xen/types.h>
>>>   #include <public/arch-arm/smccc.h>
>>> +#include <asm/cpufeature.h>
>>>   #include <asm/monitor.h>
>>>   #include <asm/regs.h>
>>>   #include <asm/smccc.h>
>>> @@ -93,8 +94,25 @@ static bool handle_arch(struct cpu_user_regs *regs)
>>>           return true;
>>>
>>>       case ARM_SMCCC_ARCH_FEATURES_FID:
>>> -        /* Nothing supported yet */
>>> -        set_user_reg(regs, 0, -1);
>>> +    {
>>> +        uint32_t arch_func_id = get_user_reg(regs, 1);
>>> +        int ret = -1;
>>
>> I think that register_t would suit better in this case.
>
>
> Well no. The return value in the spec is an int32 and will fit in w0.
> register_t is either 32-bit or 64-bit, so int is the right type here.
Ah, yes, you are right.
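As a standalone illustration of that point (not Xen code): the caller
only ever looks at the int32 in w0, so a fixed-width int matches the
spec, while a 64-bit register_t would just be wider than the type the
interface defines:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int ret = -1;   /* SMCCC "not supported" */

        /* What the SMCCC caller must look at: the int32 in w0. */
        printf("w0 = 0x%08" PRIx32 "\n", (uint32_t)ret);

        /* What a 64-bit register_t would hold in x0 after sign
         * extension; harmless, but wider than the spec's type. */
        printf("x0 = 0x%016" PRIx64 "\n", (uint64_t)(int64_t)ret);

        return 0;
    }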

>
>>
>>> +
>>> +        switch ( arch_func_id )
>>> +        {
>>> +        case ARM_SMCCC_ARCH_WORKAROUND_1_FID:
>>> +            if ( cpus_have_cap(ARM_HARDEN_BRANCH_PREDICTOR) )
>>> +                ret = 0;
>>> +            break;
>>> +        }
>>> +
>>> +        set_user_reg(regs, 0, ret);
>>> +
>>> +        return true;
>>> +    }
>>> +
>>> +    case ARM_SMCCC_ARCH_WORKAROUND_1_FID:
>>> +        /* No return value */
>>>           return true;
>>>       }
>>>
>>> diff --git a/xen/include/asm-arm/smccc.h b/xen/include/asm-arm/smccc.h
>>> index 431389c118..b790fac17c 100644
>>> --- a/xen/include/asm-arm/smccc.h
>>> +++ b/xen/include/asm-arm/smccc.h
>>> @@ -115,6 +115,12 @@ static inline uint32_t smccc_get_owner(register_t
>>> funcid)
>>>                          ARM_SMCCC_OWNER_ARCH,        \
>>>                          0x1)
>>>
>>> +#define ARM_SMCCC_ARCH_WORKAROUND_1_FID             \
>>> +    ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,         \
>>> +                      ARM_SMCCC_CONV_32,            \
>>> +                      ARM_SMCCC_OWNER_ARCH,         \
>>> +                      0x8000)
>>> +
>>>   /* Only one error code defined in SMCCC */
>>>   #define ARM_SMCCC_ERR_UNKNOWN_FUNCTION  (-1)
>>>
>>> --
>>> 2.11.0
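
For readers following the encoding: per the SMCCC function ID layout
(bit 31 = fast call, bit 30 = calling convention, bits 29:24 = owner,
bits 15:0 = function number), the new macro works out to 0x80008000.
A quick standalone check, with the macro re-derived here purely for
illustration (mirroring what ARM_SMCCC_CALL_VAL computes):

    #include <assert.h>
    #include <stdint.h>

    /* Same bit layout as the SMCCC spec's function ID encoding. */
    #define CALL_VAL(fast, conv64, owner, num)          \
        (((uint32_t)(fast) << 31) |                     \
         ((uint32_t)(conv64) << 30) |                   \
         (((uint32_t)(owner) & 0x3f) << 24) |           \
         ((uint32_t)(num) & 0xffff))

    int main(void)
    {
        /* Fast call, SMC32 convention, owner 0 (Arm Architecture). */
        assert(CALL_VAL(1, 0, 0, 0x8000) == 0x80008000u);
        return 0;
    }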
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@xxxxxxxxxxxxxxxxxxxx
>>> https://lists.xenproject.org/mailman/listinfo/xen-devel
>
>
> Cheers,
>
> --
> Julien Grall



-- 
WBR Volodymyr Babchuk aka lorc [+380976646013]
mailto: vlad.babchuk@xxxxxxxxx
