Re: [PATCH 3/3] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
On 13.01.2022 17:38, Andrew Cooper wrote:
> Second, update vmx_msr_{read,write}_intercept() to use the load/save lists
> rather than vcpu_msrs, and update the comment to describe the new state
> location.
Nit: Assuming "the comment" refers to something in the named function,
I'm afraid I can't spot any comment being updated there. Perhaps you
mean the comment in the declaration of struct vcpu_msrs?
> --- a/xen/arch/x86/hvm/vmx/entry.S
> +++ b/xen/arch/x86/hvm/vmx/entry.S
> @@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
>
>         /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
> ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
> - ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
> +
> + .macro restore_spec_ctrl
> + mov $MSR_SPEC_CTRL, %ecx
> + movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
> + xor %edx, %edx
> + wrmsr
> + .endm
> + ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
> /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>
>         /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
> @@ -83,7 +90,6 @@ UNLIKELY_END(realmode)
>
> /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>         /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
> - ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
>         ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
Shouldn't you update the "Clob:" remarks here as well?
> @@ -119,12 +125,11 @@ UNLIKELY_END(realmode)
> SAVE_ALL
>
> /*
> - * PV variant needed here as no guest code has executed (so
> - * MSR_SPEC_CTRL can't have changed value), and NMIs/MCEs are liable
> - * to hit (in which case the HVM variant might corrupt things).
> + * SPEC_CTRL_ENTRY notes
> + *
> +     * If we end up here, no guest code has executed.  We still have Xen's
> +     * choice of MSR_SPEC_CTRL in context, and the RSB is safe.
> */
> - SPEC_CTRL_ENTRY_FROM_PV /* Req: %rsp=regs/cpuinfo Clob: acd */
> - /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
Is "no guest code has executed" actually enough here? If VM entry failed
due to a later MSR-load-list entry, SPEC_CTRL could have changed value
already, couldn't it?
> @@ -601,17 +602,27 @@ static void vmx_cpuid_policy_changed(struct vcpu *v)
>
> vmx_vmcs_enter(v);
> vmx_update_exception_bitmap(v);
> - vmx_vmcs_exit(v);
>
> /*
> * We can safely pass MSR_SPEC_CTRL through to the guest, even if STIBP
> * isn't enumerated in hardware, as SPEC_CTRL_STIBP is ignored.
> */
> if ( cp->feat.ibrsb )
> + {
> vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
> +
> + rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
> + if ( rc )
> + goto err;
> + }
> else
> + {
> vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
>
> + /* Can only fail with -ESRCH, and we don't care. */
> + vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
To be forward-compatible with possible future changes to the
function (or load list handling as a whole), how about having an
assertion proving what the comment says:
        rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
        if ( rc )
        {
            ASSERT(rc == -ESRCH);
            rc = 0;
        }
?
Jan