Re: [Xen-devel] [PATCH v2 9/9] x86/vmx: Don't leak EFER.NXE into guest context
On 12/06/18 09:54, Jan Beulich wrote:
>>>> On 08.06.18 at 20:48, <andrew.cooper3@xxxxxxxxxx> wrote:
>> @@ -1646,22 +1637,71 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
>>  
>>  static void vmx_update_guest_efer(struct vcpu *v)
>>  {
>> -    unsigned long vm_entry_value;
>> +    unsigned long entry_ctls, guest_efer = v->arch.hvm_vcpu.guest_efer,
>> +        xen_efer = read_efer();
>> +
>> +    if ( paging_mode_shadow(v->domain) )
>> +    {
>> +        /*
>> +         * When using shadow pagetables, EFER.NX is a Xen-owned bit and is not
>> +         * under guest control.
>> +         */
>> +        guest_efer &= ~EFER_NX;
>> +        guest_efer |= xen_efer & EFER_NX;
>> +    }
>> +
>> +    if ( !(v->arch.hvm_vmx.secondary_exec_control &
>> +           SECONDARY_EXEC_UNRESTRICTED_GUEST) )
>
> !vmx_unrestricted_guest(v)
>
>> +    {
>> +        /*
>> +         * When Unrestricted Guest is not enabled in the VMCS, hardware does
>> +         * not tolerate the LME and LMA settings being different.  As writes
>> +         * to CR0 are intercepted, it is safe to leave LME clear at this
>> +         * point, and fix up both LME and LMA when CR0.PG is set.
>> +         */
>> +        if ( !(guest_efer & EFER_LMA) )
>> +            guest_efer &= ~EFER_LME;
>> +    }
>>  
>>      vmx_vmcs_enter(v);
>>  
>> -    __vmread(VM_ENTRY_CONTROLS, &vm_entry_value);
>> -    if ( v->arch.hvm_vcpu.guest_efer & EFER_LMA )
>> -        vm_entry_value |= VM_ENTRY_IA32E_MODE;
>> +    /*
>> +     * The intended guest running mode is derived from VM_ENTRY_IA32E_MODE,
>> +     * which (architecturally) is the guest's LMA setting.
>> +     */
>> +    __vmread(VM_ENTRY_CONTROLS, &entry_ctls);
>> +
>> +    entry_ctls &= ~VM_ENTRY_IA32E_MODE;
>> +    if ( guest_efer & EFER_LMA )
>> +        entry_ctls |= VM_ENTRY_IA32E_MODE;
>> +
>> +    __vmwrite(VM_ENTRY_CONTROLS, entry_ctls);
>> +
>> +    /* We expect to use EFER loading in the common case, but... */
>> +    if ( likely(cpu_has_vmx_efer) )
>> +        __vmwrite(GUEST_EFER, guest_efer);
>> +
>> +    /* ... on Gen1 VT-x hardware, we have to use MSR load/save lists instead. */
>>      else
>> -        vm_entry_value &= ~VM_ENTRY_IA32E_MODE;
>> -    __vmwrite(VM_ENTRY_CONTROLS, vm_entry_value);
>> +    {
>> +        /*
>> +         * When the guests choice of EFER matches Xen's, remove the load/save
>> +         * list entries.  It is unnecessary overhead, especially as this is
>> +         * expected to be the common case for 64bit guests.
>> +         */
>> +        if ( guest_efer == xen_efer )
>> +        {
>> +            vmx_del_msr(v, MSR_EFER, VMX_MSR_HOST);
>> +            vmx_del_msr(v, MSR_EFER, VMX_MSR_GUEST_LOADONLY);
>> +        }
>> +        else
>> +        {
>> +            vmx_add_msr(v, MSR_EFER, xen_efer, VMX_MSR_HOST);
>> +            vmx_add_msr(v, MSR_EFER, guest_efer, VMX_MSR_GUEST_LOADONLY);
>> +        }
>> +    }
>>  
>>      vmx_vmcs_exit(v);
>> -
>> -    if ( v == current )
>> -        write_efer((read_efer() & ~EFER_SCE) |
>> -                   (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
>>  }
>
> As mentioned before, overall this would allow for disabling read intercepts in
> certain cases.  If you don't want to do this right away that's certainly fine, but
> could I talk you into at least adding a comment to this effect?

Apologies - that was a straight oversight.

Razvan thinks the monitor side of things is actually fine, which was my
concern with doing it originally.

I've inserted the following fragment in the tail of this function, after
the vmx_vmcs_exit(v);

    /*
     * If the guests virtualised view of MSR_EFER matches the value loaded
     * into hardware, clear the read intercept to avoid unnecessary VMExits.
     */
    if ( guest_efer == v->arch.hvm_vcpu.guest_efer )
        vmx_clear_msr_intercept(v, MSR_EFER, VMX_MSR_R);
    else
        vmx_set_msr_intercept(v, MSR_EFER, VMX_MSR_R);

and will quickly whip up an XTF test for some confirmation.
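For the record, the guest-visible property such a test needs to confirm is
simply that reading MSR_EFER back gives the value the guest itself wrote,
independent of Xen's own NXE/SCE settings.  A rough guest-side sketch of that
check (plain ring-0 C with an open-coded RDMSR, not the actual XTF code; the
helper names are made up for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_EFER  0xc0000080u
    #define EFER_SCE  (1ULL <<  0)
    #define EFER_NXE  (1ULL << 11)   /* the EFER.NXE bit this patch is about */

    /* Open-coded RDMSR; only usable from ring 0 inside the guest. */
    static inline uint64_t rdmsr(uint32_t idx)
    {
        uint32_t lo, hi;

        asm volatile ( "rdmsr" : "=a" (lo), "=d" (hi) : "c" (idx) );

        return ((uint64_t)hi << 32) | lo;
    }

    /*
     * Check that the SCE/NXE bits the guest last wrote (expected) are what a
     * subsequent read of MSR_EFER observes, i.e. that no host EFER state has
     * leaked into guest context.
     */
    static bool guest_efer_is_clean(uint64_t expected)
    {
        uint64_t mask = EFER_SCE | EFER_NXE;

        return (rdmsr(MSR_EFER) & mask) == (expected & mask);
    }

A test along these lines would flip EFER.SCE in the guest, force a vmexit,
and assert that guest_efer_is_clean() still holds afterwards.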
>
>> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
>> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
>> @@ -311,6 +311,8 @@ extern u64 vmx_ept_vpid_cap;
>>      (vmx_cpu_based_exec_control & CPU_BASED_MONITOR_TRAP_FLAG)
>>  #define cpu_has_vmx_pat \
>>      (vmx_vmentry_control & VM_ENTRY_LOAD_GUEST_PAT)
>> +#define cpu_has_vmx_efer \
>> +    (vmx_vmentry_control & VM_ENTRY_LOAD_GUEST_EFER)
>
> I think this was asked before, but I'm concerned (of at least the inconsistency)
> anyway: cpu_has_vmx_mpx, for example, checks both flags.  Of course there's
> unlikely to be any hardware with just one of the two features, but what about
> buggy virtual environments we might run in?

I'm not worried about buggy virtual environments.  For one, it's not really
our bug to care about, but irrespective, if an environment is this buggy, it
won't notice the setting we've made, and the vmentry will be fine.

This, FYI, is exactly what happens with the Virtual NMI feature when nested
under Xen atm.  Some hypervisors fail to check for it and blindly use it, and
they mostly function when nested under Xen.  The hypervisors which do check
for it as a prerequisite fail to start.

> IOW - if you want to check just one of the two flags here, I think you want to
> enforce the dependency in vmx_init_vmcs_config(), clearing the entry control
> bit if the exit control one comes out clear from adjust_vmx_controls().

As I said before, work along this line is coming as part of the Nested Virt
work.  The current logic here is already inconsistent, and is fine in this
case.  (A sketch of what that dependency enforcement could look like is
appended after my signature, purely for illustration.)

~Andrew
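P.S.  Purely for illustration, and not part of this patch: the
vmx_init_vmcs_config() enforcement described above would amount to clearing
the entry-side EFER control whenever the exit-side controls are unavailable,
so that a single-flag cpu_has_vmx_efer stays an accurate statement of full
EFER support.  A minimal sketch (the helper name is made up; the constants
are the architectural VMCS control bits, spelled as in vmcs.h):

    #include <stdint.h>

    /* Architectural VMCS control bits (Intel SDM, VM-entry/VM-exit controls). */
    #define VM_ENTRY_LOAD_GUEST_EFER  0x00008000u   /* entry control, bit 15 */
    #define VM_EXIT_SAVE_GUEST_EFER   0x00100000u   /* exit control, bit 20 */
    #define VM_EXIT_LOAD_HOST_EFER    0x00200000u   /* exit control, bit 21 */

    /*
     * Illustrative helper, called after the entry/exit controls have been
     * through adjust_vmx_controls(): if either exit-side EFER control is
     * unavailable, drop the entry-side control too.
     */
    static void enforce_efer_control_dependency(uint32_t *vmentry_ctl,
                                                uint32_t vmexit_ctl)
    {
        if ( !(vmexit_ctl & VM_EXIT_SAVE_GUEST_EFER) ||
             !(vmexit_ctl & VM_EXIT_LOAD_HOST_EFER) )
            *vmentry_ctl &= ~VM_ENTRY_LOAD_GUEST_EFER;
    }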