
Re: [Xen-devel] [PATCH 4/4] XSA-60 security hole: flush cache when vmentry back to UC guest



>>> On 30.10.13 at 17:07, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
> From 159251a04afcdcd8ca08e9f2bdfae279b2aa5471 Mon Sep 17 00:00:00 2001
> From: Liu Jinsong <jinsong.liu@xxxxxxxxx>
> Date: Thu, 31 Oct 2013 06:38:15 +0800
> Subject: [PATCH 4/4] XSA-60 security hole: flush cache when vmentry back to UC guest
> 
> This patch flushes the cache on vmentry back to a UC guest, to prevent
> the cache from being polluted by hypervisor accesses to guest memory
> while the guest is in UC mode.
> 
> The elegant way to do this would be to simply add a wbinvd just before
> vmentry. However, a wbinvd before vmentry currently triggers a
> mysterious LAPIC timer interrupt storm, hanging the boot stage for
> 10s ~ 60s. We have not yet found the root cause of the interrupt
> storm, so for now this patch adds a flag indicating that the
> hypervisor accessed UC guest memory, to avoid the interrupt storm.
> Once the storm is root-caused and fixed, the protection flag can be
> removed.

Yeah, almost, except that
- the flag should be per-vCPU (see the sketch below)
- you should mention in the description that this still leaves aspects
  un-addressed (speculative reads at least, and multi-vCPU issues,
  and I'm sure there are more that I didn't think of so far)

Jan

> Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
> Suggested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Liu Jinsong <jinsong.liu@xxxxxxxxx>
> ---
>  xen/arch/x86/hvm/hvm.c        |    7 +++++++
>  xen/arch/x86/hvm/vmx/vmx.c    |    7 +++++++
>  xen/include/asm-x86/hvm/hvm.h |    1 +
>  3 files changed, 15 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index df021de..47eb18d 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -68,6 +68,7 @@
>  #include <public/mem_event.h>
>  
>  bool_t __read_mostly hvm_enabled;
> +bool_t __read_mostly hypervisor_access_uc_hvm_memory;
>  
>  unsigned int opt_hvm_debug_level __read_mostly;
>  integer_param("hvm_debug", opt_hvm_debug_level);
> @@ -2483,6 +2484,9 @@ static enum hvm_copy_result __hvm_copy(
>          return HVMCOPY_unhandleable;
>  #endif
>  
> +    if ( unlikely(curr->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
> +        hypervisor_access_uc_hvm_memory = 1;
> +
>      while ( todo > 0 )
>      {
>          count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
> @@ -2596,6 +2600,9 @@ static enum hvm_copy_result __hvm_clear(paddr_t addr, int size)
>          return HVMCOPY_unhandleable;
>  #endif
>  
> +    if ( unlikely(curr->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
> +        hypervisor_access_uc_hvm_memory = 1;
> +
>      while ( todo > 0 )
>      {
>          count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index d846a9c..1cea5a3 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2974,6 +2974,13 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
>      struct hvm_vcpu_asid *p_asid;
>      bool_t need_flush;
>  
> +    /* In case the hypervisor accessed HVM memory while the guest was in UC mode */
> +    if ( unlikely(hypervisor_access_uc_hvm_memory) )
> +    {
> +        hypervisor_access_uc_hvm_memory = 0;
> +        wbinvd();
> +    }
> +
>      if ( !cpu_has_vmx_vpid )
>          goto out;
>      if ( nestedhvm_vcpu_in_guestmode(curr) )
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index c9afb56..c7ac6b8 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -197,6 +197,7 @@ struct hvm_function_table {
>  
>  extern struct hvm_function_table hvm_funcs;
>  extern bool_t hvm_enabled;
> +extern bool_t hypervisor_access_uc_hvm_memory;
>  extern bool_t cpu_has_lmsl;
>  extern s8 hvm_port80_allowed;
>  
> -- 
> 1.7.1