
Re: [Xen-devel] [PATCH SpectreV1+L1TF v5 4/9] spec: add l1tf-barrier

>>> On 05.02.19 at 15:23, <nmanthey@xxxxxxxxx> wrote:
> On 1/31/19 17:35, Jan Beulich wrote:
>>>>> On 29.01.19 at 15:43, <nmanthey@xxxxxxxxx> wrote:
>>> @@ -1942,6 +1942,12 @@ Irrespective of Xen's setting, the feature is virtualised for HVM guests to
>>>  use.  By default, Xen will enable this mitigation on hardware believed to be
>>>  vulnerable to L1TF.
>>> +On hardware vulnerable to L1TF, the `l1tf-barrier=` option can be used to force
>>> +or prevent Xen from protecting evaluations inside the hypervisor with a barrier
>>> +instruction to not load potentially secret information into L1 cache.  By
>>> +default, Xen will enable this mitigation on hardware believed to be vulnerable
>>> +to L1TF.
>> ... and having SMT enabled, since aiui this is a non-issue without.
> In case flushing the L1 cache is not enabled, that is still an issue,
> because the transition guest -> hypervisor -> guest would still allow
> hypervisor data to be retrieved from the cache. Do you want me to extend
> the logic to consider L1 cache flushing as well?

Well, I wouldn't be overly concerned about people disabling it from the
command line, but being kind to people without updated microcode
is perhaps a good idea.

>>> @@ -100,6 +102,7 @@ static int __init parse_spec_ctrl(const char *s)
>>>              opt_ibpb = false;
>>>              opt_ssbd = false;
>>>              opt_l1d_flush = 0;
>>> +            opt_l1tf_barrier = 0;
>>>          }
>>>          else if ( val > 0 )
>>>              rc = -EINVAL;
>> Is this really something we want "spec-ctrl=no-xen" to disable?
>> It would seem to me that this should be restricted to "spec-ctrl=no".
> I have no strong opinion here. If you ask me to move it somewhere else,
> I will do that. I just want to make sure it's disabled in case
> speculation mitigations are meant to be disabled.

Unless anyone else voices a different opinion, I'd like to see it
moved as suggested.

>>> @@ -843,6 +849,14 @@ void __init init_speculation_mitigations(void)
>>>          opt_l1d_flush = cpu_has_bug_l1tf && !(caps & ARCH_CAPS_SKIP_L1DFL);
>>>      /*
>>> +     * By default, enable L1TF_VULN on L1TF-vulnerable hardware
>>> +     */
>> This ought to be a single line comment.
> Will fix.
>>> +    if ( opt_l1tf_barrier == -1 )
>>> +        opt_l1tf_barrier = cpu_has_bug_l1tf;
>> At the very least opt_smt should be taken into account here. But
>> I guess this setting of the default may need to be deferred
>> further, until the topology of the system is known (there may
>> not be any hyperthreads after all).
> Again, cache flushing also has to be considered. So, I would like to
> keep it like this for now.

With the "for now" aspect properly explained in the description,
I guess that would be fine as a first step.

>>> +    if ( cpu_has_bug_l1tf && opt_l1tf_barrier > 0 )
>>> +        setup_force_cpu_cap(X86_FEATURE_SC_L1TF_VULN);
>> Why the left side of the &&?
> IMHO, the CPU flag L1TF should only be set when the CPU is reported to
> be vulnerable, even if the command line wants to enforce mitigations.

What's the command line option good for if it doesn't trigger
patching in of the LFENCEs? Command line options exist, among
other purposes, to aid mitigating flaws in our determination of
what is a vulnerable platform.

>>> +    /*
>>>       * We do not disable HT by default on affected hardware.
>>>       *
>>>       * Firstly, if the user intends to use exclusively PV, or HVM shadow
>> Furthermore, as per the comment and logic here and below a
>> !HVM configuration ought to be safe too, unless "pv-l1tf=" was
>> used (in which case we defer to the admin anyway), so it's
>> questionable whether the whole logic should be there in the
>> first place in this case. This would then in particular keep all
>> of this out for the PV shim.
> For the PV shim, I could add pv-shim to my check before enabling the CPU
> flag.

But the PV shim is just a special case. I'd like this code to be
compiled out for all !HVM configurations.


Xen-devel mailing list


