Re: [PATCH] x86/HVM: support emulated UMIP
On 29/01/2021 11:45, Jan Beulich wrote:
> There are three noteworthy drawbacks:
> 1) The intercepts we need to enable here are CPL-independent, i.e. we
>    now have to emulate certain instructions for ring 0.
> 2) On VMX there's no intercept for SMSW, so the emulation isn't really
>    complete there.
> 3) The CR4 write intercept on SVM is lower priority than all exception
>    checks, so we need to intercept #GP.
> Therefore this emulation doesn't get offered to guests by default.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

I wonder if it would be helpful for this to be 3 patches, simply because of
the differing complexity of the VT-x and SVM pieces.

> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -453,6 +453,13 @@ static void __init calculate_hvm_max_pol
>          __set_bit(X86_FEATURE_X2APIC, hvm_featureset);
>
>      /*
> +     * Xen can often provide UMIP emulation to HVM guests even if the host
> +     * doesn't have such functionality.
> +     */
> +    if ( hvm_funcs.set_descriptor_access_exiting )

No need for this check.  Exiting is available on all generations and
vendors.

Also, the header file probably wants a ! annotation for UMIP to signify
that we are doing something special with it.

> +        __set_bit(X86_FEATURE_UMIP, hvm_featureset);
> +
> +    /*
>       * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
>       * long mode (and init_amd() has cleared it out of host capabilities), but
>       * HVM guests are able if running in protected mode.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -991,7 +991,8 @@ unsigned long hvm_cr4_guest_valid_bits(c
>              X86_CR4_PCE |
>              (p->basic.fxsr ? X86_CR4_OSFXSR : 0) |
>              (p->basic.sse ? X86_CR4_OSXMMEXCPT : 0) |
> -            (p->feat.umip ? X86_CR4_UMIP : 0) |
> +            ((p == &host_cpuid_policy ? &hvm_max_cpuid_policy : p)->feat.umip
> +             ? X86_CR4_UMIP : 0) |

This hunk wants dropping.  p can't alias host_cpuid_policy any more.

(and for future changes which do look like this, a local bool please, per
the comment.)

>              (vmxe ? X86_CR4_VMXE : 0) |
>              (p->feat.fsgsbase ? X86_CR4_FSGSBASE : 0) |
>              (p->basic.pcid ? X86_CR4_PCIDE : 0) |
> @@ -3731,6 +3732,13 @@ int hvm_descriptor_access_intercept(uint
>      struct vcpu *curr = current;
>      struct domain *currd = curr->domain;
>
> +    if ( (is_write || curr->arch.hvm.guest_cr[4] & X86_CR4_UMIP) &&

Brackets for the & expression?

> +         hvm_get_cpl(curr) )
> +    {
> +        hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +        return X86EMUL_OKAY;
> +    }

I believe this is a logical change for monitor - previously, non-ring0
events would go all the way to userspace.

I don't expect this to be an issue - monitoring agents really shouldn't be
interested in userspace actions which the guest kernel is trying to turn
into #GP.  CC'ing Tamas for his opinion.

> +
>      if ( currd->arch.monitor.descriptor_access_enabled )
>      {
>          ASSERT(curr->arch.vm_event);

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -547,6 +547,28 @@ void svm_update_guest_cr(struct vcpu *v,
>              value &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
>          }
>
> +        if ( v->domain->arch.cpuid->feat.umip && !cpu_has_umip )

Throughout the series, examples like this should have the !cpu_has_umip
clause first.  It is static per host, rather than variable per VM, and
will improve the branch prediction.

Where the logic is equivalent, it is best to have the clauses in stability
order, as this will prevent a modern CPU from even evaluating the CPUID
policy.
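As a toy illustration of the stability-order point (standalone C rather
than Xen code; all names below are made up), evaluating the host-invariant
clause first means the per-VM policy is never even read on hosts which do
have the feature:

#include <stdbool.h>
#include <stdio.h>

/* Fixed for the lifetime of the host (established once at boot). */
static bool cpu_has_feature;

/* Varies from VM to VM. */
struct vm_policy { bool wants_feature; };

static bool needs_emulation(const struct vm_policy *p)
{
    /*
     * Host-invariant clause first: on hosts with the feature this branch
     * predicts perfectly and p->wants_feature is never evaluated.
     */
    return !cpu_has_feature && p->wants_feature;
}

int main(void)
{
    struct vm_policy p = { .wants_feature = true };

    cpu_has_feature = false;
    printf("emulate: %d\n", needs_emulation(&p)); /* 1 - host lacks it */

    cpu_has_feature = true;
    printf("emulate: %d\n", needs_emulation(&p)); /* 0 - hardware does it */

    return 0;
}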
> +        {
> +            u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
> +
> +            if ( v->arch.hvm.guest_cr[4] & X86_CR4_UMIP )
> +            {
> +                value &= ~X86_CR4_UMIP;
> +                ASSERT(vmcb_get_cr_intercepts(vmcb) & CR_INTERCEPT_CR0_READ);

It occurs to me that adding CR0 read exiting adds a lot of complexity for
very little gain.

From a practical standpoint, UMIP exists to block SIDT/SGDT, which are the
two instructions that give an attacker useful information (the linear
addresses of the IDT/GDT respectively).  SLDT/STR only confer a 16-bit
index within the GDT (fixed per OS), and SMSW is as good as a constant
these days.

Given that Intel cannot intercept SMSW at all, and we've already accepted
that as a limitation vs architectural UMIP, I don't think the extra
complexity on AMD is worth the gain.

> @@ -2728,6 +2767,14 @@ void svm_vmexit_handler(struct cpu_user_
>          svm_fpu_dirty_intercept();
>          break;
>
> +    case VMEXIT_EXCEPTION_GP:
> +        HVMTRACE_1D(TRAP, TRAP_gp_fault);
> +        /* We only care about ring 0 faults with error code zero. */
> +        if ( vmcb->exitinfo1 || vmcb_get_cpl(vmcb) ||
> +             !hvm_emulate_one_insn(is_cr4_write, "CR4 write") )
> +            hvm_inject_hw_exception(TRAP_gp_fault, vmcb->exitinfo1);

I should post one of my pending SVM cleanup patches, which further
deconstructs exitinfo into more usefully named fields.

The comment should include *why* we only care about this state.  It needs
to mention emulated UMIP, and the priority order of #GP and VMExit.

> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1537,6 +1552,7 @@ static void vmx_update_guest_cr(struct v
>                      (X86_CR4_PSE | X86_CR4_SMEP |
>                       X86_CR4_SMAP)
>                    : 0;
> +        v->arch.hvm.vmx.cr4_host_mask |= cpu_has_umip ? 0 : X86_CR4_UMIP;

if ( !cpu_has_umip )
    v->arch.hvm.vmx.cr4_host_mask |= X86_CR4_UMIP;

~Andrew
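To make the cr4_host_mask idea above concrete, here is a standalone toy
model (not Xen code; everything except the architectural CR4.UMIP bit
position is illustrative) of how a host-owned CR4 bit is shadowed so the
guest observes UMIP set even though the hardware register never carries it:

#include <stdint.h>
#include <stdio.h>

#define X86_CR4_UMIP (1u << 11)   /* architectural CR4.UMIP bit */

/*
 * Toy model of a CR4 "host mask": bits set in the mask are owned by the
 * hypervisor.  Guest writes to them trap, and the guest-visible value is
 * taken from a software shadow rather than from the hardware register.
 */
static uint32_t cr4_guest_view(uint32_t hw_cr4, uint32_t shadow_cr4,
                               uint32_t host_mask)
{
    return (hw_cr4 & ~host_mask) | (shadow_cr4 & host_mask);
}

int main(void)
{
    int cpu_has_umip = 0;          /* pretend the host lacks UMIP */
    uint32_t host_mask = 0;

    if ( !cpu_has_umip )           /* the form suggested above */
        host_mask |= X86_CR4_UMIP;

    /* The guest sees UMIP set although hardware CR4 never carries it. */
    printf("guest CR4 = %#x\n", cr4_guest_view(0, X86_CR4_UMIP, host_mask));

    return 0;
}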