[Xen-devel] [PATCH v10 09/11] x86/ctxt: Issue a speculation barrier between vcpu contexts
Issuing an IBPB command flushes the Branch Target Buffer, so that any poison
left by one vcpu won't remain when beginning to execute the next.

The cost of IBPB is substantial, so it is skipped on the transition to idle,
as Xen's idle code is already robust.  All transitions into vcpu context are
fully serialising in practice (and under consideration for being retroactively
declared architecturally serialising), so a cunning attacker cannot use SP1 to
try to skip the flush.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: David Woodhouse <dwmw@xxxxxxxxxxxx>

v7:
 * Use the opt_ibpb boolean rather than using a cpufeature flag.
v9:
 * Expand the commit message.
 * Optimise the idle case, based on a suggestion from David.
v10:
 * More detailed comments, and an explicit idle check.
---
 docs/misc/xen-command-line.markdown |  5 ++++-
 xen/arch/x86/domain.c               | 29 +++++++++++++++++++++++++++++
 xen/arch/x86/spec_ctrl.c            | 10 +++++++++-
 xen/include/asm-x86/spec_ctrl.h     |  1 +
 4 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 11399ce..9c10d3a 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -246,7 +246,7 @@ enough.  Setting this to a high value may cause boot failure, particularly if
 the NMI watchdog is also enabled.
 
 ### bti (x86)
-> `= List of [ thunk=retpoline|lfence|jmp, ibrs=<bool>, rsb_{vmexit,native}=<bool> ]`
+> `= List of [ thunk=retpoline|lfence|jmp, ibrs=<bool>, ibpb=<bool>, rsb_{vmexit,native}=<bool> ]`
 
 Branch Target Injection controls.  By default, Xen will pick the most
 appropriate BTI mitigations based on compiled in support, loaded microcode,
@@ -265,6 +265,9 @@ On hardware supporting IBRS, the `ibrs=` option can be used to force or
 prevent Xen using the feature itself.  If Xen is not using IBRS itself,
 functionality is still set up so IBRS can be virtualised for guests.
 
+On hardware supporting IBPB, the `ibpb=` option can be used to prevent Xen
+from issuing Branch Prediction Barriers on vcpu context switches.
+
 The `rsb_vmexit=` and `rsb_native=` options can be used to fine tune when the
 RSB gets overwritten.  There are individual controls for an entry from HVM
 context, and an entry from a native (PV or Xen) context.
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 643628c..12f527b 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -65,6 +65,7 @@
 #include <asm/psr.h>
 #include <asm/pv/domain.h>
 #include <asm/pv/mm.h>
+#include <asm/spec_ctrl.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
@@ -1743,6 +1744,34 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
         }
 
         ctxt_switch_levelling(next);
+
+        if ( opt_ibpb && !is_idle_domain(nextd) )
+        {
+            static DEFINE_PER_CPU(unsigned int, last);
+            unsigned int *last_id = &this_cpu(last);
+
+            /*
+             * Squash the domid and vcpu id together for comparison
+             * efficiency.  We could in principle stash and compare the struct
+             * vcpu pointer, but this risks a false alias if a domain has died
+             * and the same 4k page gets reused for a new vcpu.
+             */
+            unsigned int next_id = (((unsigned int)nextd->domain_id << 16) |
+                                    (uint16_t)next->vcpu_id);
+            BUILD_BUG_ON(MAX_VIRT_CPUS > 0xffff);
+
+            /*
+             * When scheduling from a vcpu, to idle, and back to the same vcpu
+             * (which might be common in a lightly loaded system, or when
+             * using vcpu pinning), there is no need to issue IBPB, as we are
+             * returning to the same security context.
+             */
+            if ( *last_id != next_id )
+            {
+                wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);
+                *last_id = next_id;
+            }
+        }
     }
 
     context_saved(prev);
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index de5ba1a..3baad8a 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -36,6 +36,7 @@ static enum ind_thunk {
 static int8_t __initdata opt_ibrs = -1;
 static bool __initdata opt_rsb_native = true;
 static bool __initdata opt_rsb_vmexit = true;
+bool __read_mostly opt_ibpb = true;
 uint8_t __read_mostly default_bti_ist_info;
 
 static int __init parse_bti(const char *s)
@@ -63,6 +64,8 @@ static int __init parse_bti(const char *s)
         }
         else if ( (val = parse_boolean("ibrs", s, ss)) >= 0 )
             opt_ibrs = val;
+        else if ( (val = parse_boolean("ibpb", s, ss)) >= 0 )
+            opt_ibpb = val;
         else if ( (val = parse_boolean("rsb_native", s, ss)) >= 0 )
             opt_rsb_native = val;
         else if ( (val = parse_boolean("rsb_vmexit", s, ss)) >= 0 )
@@ -103,13 +106,14 @@ static void __init print_details(enum ind_thunk thunk)
         printk(XENLOG_DEBUG "  Compiled-in support: INDIRECT_THUNK\n");
 
     printk(XENLOG_INFO
-           "BTI mitigations: Thunk %s, Others:%s%s%s\n",
+           "BTI mitigations: Thunk %s, Others:%s%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
            thunk == THUNK_JMP       ? "JMP" : "?",
            boot_cpu_has(X86_FEATURE_XEN_IBRS_SET)    ? " IBRS+"      :
            boot_cpu_has(X86_FEATURE_XEN_IBRS_CLEAR)  ? " IBRS-"      : "",
+           opt_ibpb                                  ? " IBPB"       : "",
            boot_cpu_has(X86_FEATURE_RSB_NATIVE)      ? " RSB_NATIVE" : "",
            boot_cpu_has(X86_FEATURE_RSB_VMEXIT)      ? " RSB_VMEXIT" : "");
 }
@@ -278,6 +282,10 @@ void __init init_speculation_mitigations(void)
     if ( opt_rsb_vmexit )
         setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
 
+    /* Check we have hardware IBPB support before using it... */
+    if ( !boot_cpu_has(X86_FEATURE_IBRSB) && !boot_cpu_has(X86_FEATURE_IBPB) )
+        opt_ibpb = false;
+
     /* (Re)init BSP state now that default_bti_ist_info has been calculated. */
     init_shadow_spec_ctrl_state();
 
diff --git a/xen/include/asm-x86/spec_ctrl.h b/xen/include/asm-x86/spec_ctrl.h
index 6120e4f..e328b0f 100644
--- a/xen/include/asm-x86/spec_ctrl.h
+++ b/xen/include/asm-x86/spec_ctrl.h
@@ -24,6 +24,7 @@
 
 void init_speculation_mitigations(void);
 
+extern bool opt_ibpb;
 extern uint8_t default_bti_ist_info;
 
 static inline void init_shadow_spec_ctrl_state(void)
-- 
2.1.4
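
[Editor's note] For readers following the domain.c hunk, here is a minimal
standalone sketch of the same last-vcpu tracking, outside of Xen: the domain
id is packed into the high 16 bits and the vcpu id into the low 16 bits, and
the barrier is only "issued" when that packed id changes.  This is only an
illustration of the scheme, not Xen code: issue_ibpb(), switch_to() and the
plain static variable are hypothetical stand-ins for the real
wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB) write and Xen's per-CPU variable.

/*
 * Standalone illustration of the IBPB-skipping logic in the patch above.
 * Not Xen code: issue_ibpb() stands in for the MSR write, and a plain
 * static replaces the per-CPU "last" variable.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static void issue_ibpb(void)
{
    /* In Xen this would be wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB). */
    printf("  IBPB issued\n");
}

/* Decide whether a barrier is needed when switching to (domid, vcpu_id). */
static void switch_to(uint16_t domid, uint16_t vcpu_id, bool is_idle)
{
    static unsigned int last;   /* per-CPU in Xen */

    printf("switch to d%uv%u%s\n", domid, vcpu_id, is_idle ? " (idle)" : "");

    if ( is_idle )
        return;                 /* idle context never needs the barrier */

    /* Same packing as the patch: domid in the high 16 bits, vcpu id below. */
    unsigned int next_id = ((unsigned int)domid << 16) | vcpu_id;

    if ( last != next_id )      /* different security context */
    {
        issue_ibpb();
        last = next_id;
    }
}

int main(void)
{
    switch_to(1, 0, false);     /* d1v0: barrier issued */
    switch_to(0, 0, true);      /* idle: skipped */
    switch_to(1, 0, false);     /* back to d1v0: no barrier needed */
    switch_to(2, 3, false);     /* d2v3: barrier issued */
    return 0;
}

Running this prints "IBPB issued" only for the first and last switches,
matching the vcpu -> idle -> same vcpu optimisation described in the commit
message and in the v9/v10 changelog entries.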