Re: [PATCH 3/8] x86/svm: VMEntry/Exit logic for MSR_SPEC_CTRL
 On 26/01/2022 16:50, Jan Beulich wrote:
> On 26.01.2022 09:44, Andrew Cooper wrote:
>> 1) It would be slightly more efficient to pass curr and cpu_info into
>>    vm{entry,exit}_spec_ctrl(), but setup of such state can't be in the
>>    ALTERNATIVE block because then the call displacement won't get fixed up.
>>    All the additional accesses are hot off the stack, so almost certainly
>>    negligible compared to the WRMSR.
> What's wrong with using two instances of ALTERNATIVE, one to setup the
> call arguments and the 2nd for just the CALL?
Hmm
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index c718328ac4cf..1d4be7e97ae2 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -59,6 +59,7 @@ __UNLIKELY_END(nsvm_hap)
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req:                           Clob: C   */
+        ALTERNATIVE "", __stringify(mov %rbx, %rdi; mov %rsp, %rsi), X86_FEATURE_SC_MSR_HVM
         ALTERNATIVE "", __stringify(call vmentry_spec_ctrl), X86_FEATURE_SC_MSR_HVM
 
         pop  %r15
That's somewhat of a long line, but it isn't too terrible.
I'm tempted to switch back to using STR(), seeing as we have both and it
is much more concise.
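For comparison, a sketch of how those two lines would look with STR()
(assuming the variadic STR()/__STR() wrappers in xen/include/xen/lib.h,
which matters here because of the embedded commas in the mov operands):

        ALTERNATIVE "", STR(mov %rbx, %rdi; mov %rsp, %rsi), X86_FEATURE_SC_MSR_HVM
        ALTERNATIVE "", STR(call vmentry_spec_ctrl), X86_FEATURE_SC_MSR_HVM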
>> --- a/xen/arch/x86/hvm/svm/entry.S
>> +++ b/xen/arch/x86/hvm/svm/entry.S
>> @@ -55,11 +55,11 @@ __UNLIKELY_END(nsvm_hap)
>>          mov  %rsp, %rdi
>>          call svm_vmenter_helper
>>  
>> -        mov VCPU_arch_msrs(%rbx), %rax
>> -        mov VCPUMSR_spec_ctrl_raw(%rax), %eax
>> +        clgi
>>  
>>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>> -        /* SPEC_CTRL_EXIT_TO_SVM   (nothing currently) */
>> +        /* SPEC_CTRL_EXIT_TO_SVM       Req:                           Clob: C   */
>> +        ALTERNATIVE "", __stringify(call vmentry_spec_ctrl), X86_FEATURE_SC_MSR_HVM
> I guess the new upper case C after Clob: stands for "all call-clobbered
> registers"?
That was the intention, yes.
>  In which case ...
>
>> @@ -86,8 +85,9 @@ __UNLIKELY_END(nsvm_hap)
>>  
>>          GET_CURRENT(bx)
>>  
>> -        /* SPEC_CTRL_ENTRY_FROM_SVM    Req: b=curr %rsp=regs/cpuinfo, Clob: ac  */
>> +        /* SPEC_CTRL_ENTRY_FROM_SVM    Req:                           Clob: ac,C */
>>          ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
>> +        ALTERNATIVE "", __stringify(call vmexit_spec_ctrl), X86_FEATURE_SC_MSR_HVM
> ... why the explicit further "ac" here? Is the intention to annotate
> every individual ALTERNATIVE this way?
Fair point.  I'll switch to just C.
The clobbers are rather more important for the PV side where the logic
has multiple live variables and it's not totally obvious that all GPRs
are available.
>
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -3086,6 +3086,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>      vmcb_set_vintr(vmcb, intr);
>>  }
>>  
>> +/* Called with GIF=0. */
>> +void vmexit_spec_ctrl(void)
>> +{
>> +    struct cpu_info *info = get_cpu_info();
>> +    unsigned int val = info->xen_spec_ctrl;
>> +
>> +    /*
>> +     * Write to MSR_SPEC_CTRL unconditionally, for the RAS[:32] flushing side
>> +     * effect.
>> +     */
>> +    wrmsr(MSR_SPEC_CTRL, val, 0);
>> +    info->last_spec_ctrl = val;
>> +}
>> +
>> +/* Called with GIF=0. */
>> +void vmentry_spec_ctrl(void)
>> +{
>> +    struct cpu_info *info = get_cpu_info();
>> +    const struct vcpu *curr = current;
>> +    unsigned int val = curr->arch.msrs->spec_ctrl.raw;
>> +
>> +    if ( val != info->last_spec_ctrl )
>> +    {
>> +        wrmsr(MSR_SPEC_CTRL, val, 0);
>> +        info->last_spec_ctrl = val;
>> +    }
> Is this correct for the very first use on a CPU? last_spec_ctrl
> starts out as zero afaict, and hence this very first write would be
> skipped if the guest value is also zero (which it will be for a
> vCPU first launched), even if we have a non-zero value in the MSR
> at that point.
Ish.
We intentionally write MSR_SPEC_CTRL once on each CPU to clear out any
previous-environment settings, but those boot paths need to latch
last_spec_ctrl too for this to work correctly.
Making this safe is slightly nasty.  I think the best option would be to
reorder this patch to be after patch 6, and tweak the wording in
patch 6's commit message.  That way, we're not adding latching to
later-dropped codepaths.
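For concreteness, a minimal sketch of the latching a boot path would need
(illustrative only, reusing the existing default_xen_spec_ctrl; exactly
where this lands is what the reordering against patch 6 sorts out):

    if ( boot_cpu_has(X86_FEATURE_IBRSB) )
    {
        /*
         * Discard any previous-environment MSR_SPEC_CTRL value, and latch
         * what we wrote so the lazy comparison in vmentry_spec_ctrl()
         * can't skip a necessary write on first VMEntry.
         */
        wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
        get_cpu_info()->last_spec_ctrl = default_xen_spec_ctrl;
    }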
~Andrew