
Re: [Xen-devel] [PATCH v2] x86/vmx: correct the SMEP logic for HVM_CR0_GUEST_RESERVED_BITS



On 28/04/14 03:44, Feng Wu wrote:
> When checking the SMEP feature for HVM guests, we should check the
> VCPU instead of the host CPU.
>
> Signed-off-by: Feng Wu <feng.wu@xxxxxxxxx>

Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

> ---
>  xen/include/asm-x86/hvm/hvm.h | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index dcc3483..99bfc4c 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -351,6 +351,19 @@ static inline int hvm_event_pending(struct vcpu *v)
>      return hvm_funcs.event_pending(v);
>  }
>  
> +static inline bool_t hvm_vcpu_has_smep(void)
> +{
> +    unsigned int eax, ebx;
> +
> +    hvm_cpuid(0x0, &eax, NULL, NULL, NULL);
> +
> +    if (eax < 0x7)
> +        return 0;
> +
> +    hvm_cpuid(0x7, NULL, &ebx, NULL, NULL);
> +    return !!(ebx & cpufeat_mask(X86_FEATURE_SMEP));
> +}
> +
>  /* These reserved bits in lower 32 remain 0 after any load of CR0 */
>  #define HVM_CR0_GUEST_RESERVED_BITS             \
>      (~((unsigned long)                          \
> @@ -370,7 +383,7 @@ static inline int hvm_event_pending(struct vcpu *v)
>          X86_CR4_DE  | X86_CR4_PSE | X86_CR4_PAE |       \
>          X86_CR4_MCE | X86_CR4_PGE | X86_CR4_PCE |       \
>          X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT |           \
> -        (cpu_has_smep ? X86_CR4_SMEP : 0) |             \
> +        (hvm_vcpu_has_smep() ? X86_CR4_SMEP : 0) |      \
>          (cpu_has_fsgsbase ? X86_CR4_FSGSBASE : 0) |     \
>          ((nestedhvm_enabled((_v)->domain) && cpu_has_vmx)\
>                        ? X86_CR4_VMXE : 0)  |             \


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

