
Re: [Xen-devel] [PATCH] x86/hvm/viridian: fix the TLB flush hypercall



> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@xxxxxxxxxx]
> Sent: 16 March 2016 12:57
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Paul Durrant
> Subject: [PATCH] x86/hvm/viridian: fix the TLB flush hypercall
> 
> Commit b38d426a "flush remote tlbs by hypercall" added support to
> allow Windows to request flushes of remote TLBs via hypercall rather
> than by IPI. Unfortunately it seems that this code was broken in a
> couple of ways:
> 
> 1) The allocation of the per-vcpu flush mask is gated on whether the
>    domain has viridian features enabled but the call to allocate is
>    made before the toolstack has enabled those features. This results
>    in a NULL pointer dereference.
> 
> 2) One of the flush hypercall variants is a rep op, but the code
>    does not update the output data with the reps completed. Hence the
>    guest will spin, repeatedly making the hypercall, because it
>    believes it has uncompleted reps (see the sketch below).
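
For 2), a rough guest-side view of the rep hypercall convention may help
(a sketch only, with made-up names; not Windows or Xen code): the caller
re-issues the call from the last completed rep until all reps are done,
so a handler that reports success without ever advancing the
reps-completed count leaves the guest spinning forever.

    /*
     * Illustrative only: issue_rep_hypercall() stands in for the real
     * hypercall instruction sequence and the field layout is simplified.
     */
    #include <stdint.h>

    struct rep_result {
        uint16_t status;         /* 0 == success */
        uint16_t reps_completed; /* how many reps the hypervisor handled */
    };

    extern struct rep_result issue_rep_hypercall(uint64_t code,
                                                 uint16_t rep_count,
                                                 uint16_t rep_start_index);

    static int flush_tlb_by_hypercall(uint64_t code, uint16_t rep_count)
    {
        uint16_t done = 0;

        while ( done < rep_count )
        {
            struct rep_result res = issue_rep_hypercall(code, rep_count, done);

            if ( res.status != 0 )
                return -1;               /* real code would map the error */

            done += res.reps_completed;  /* stuck here if this stays 0 */
        }

        return 0;
    }
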
> 
> This patch fixes both of these issues and also adds a check to make
> sure the current vCPU is not included in the flush mask (since there's
> clearly no need for the CPU to IPI itself).
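
The hvm.c hunks below address the ordering hazard behind 1): allocation
is no longer gated on is_viridian_domain(), because the toolstack only
enables the viridian features after the vCPUs have been initialised. A
minimal sketch of that hazard, using simplified stand-in types rather
than the real Xen structures:

    #include <stdbool.h>
    #include <stdlib.h>

    struct demo_vcpu {
        unsigned long *flush_mask;  /* NULL until allocated */
    };

    static bool viridian_enabled;   /* flipped by the toolstack later on */

    static int demo_vcpu_init(struct demo_vcpu *v)
    {
        if ( !viridian_enabled )    /* old behaviour: skip the allocation */
            return 0;

        v->flush_mask = calloc(1, sizeof(unsigned long));
        return v->flush_mask ? 0 : -1;
    }

    static void demo_hypercall(struct demo_vcpu *v)
    {
        *v->flush_mask = 1;         /* NULL dereference when init was skipped */
    }

Allocating unconditionally at vCPU init (and freeing unconditionally at
teardown) removes the dependency on when the feature bit is set, which
is what the hvm.c changes do.
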
> 
> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

Sorry, I forgot to cc the maintainers. I'll re-send.

  Paul

> ---
>  xen/arch/x86/hvm/hvm.c      | 12 ++++--------
>  xen/arch/x86/hvm/viridian.c |  4 +++-
>  2 files changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 5bc2812..f5c55e1 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2576,12 +2576,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
>      if ( rc != 0 )
>          goto fail6;
> 
> -    if ( is_viridian_domain(d) )
> -    {
> -        rc = viridian_vcpu_init(v);
> -        if ( rc != 0 )
> -            goto fail7;
> -    }
> +    rc = viridian_vcpu_init(v);
> +    if ( rc != 0 )
> +        goto fail7;
> 
>      if ( v->vcpu_id == 0 )
>      {
> @@ -2615,8 +2612,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
> 
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
> -    if ( is_viridian_domain(v->domain) )
> -        viridian_vcpu_deinit(v);
> +    viridian_vcpu_deinit(v);
> 
>      hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
> 
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 6bd844b..6530a67 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -645,7 +645,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>                  continue;
> 
>              hvm_asid_flush_vcpu(v);
> -            if ( v->is_running )
> +            if ( v != curr && v->is_running )
>                  __cpumask_set_cpu(v->processor, pcpu_mask);
>          }
> 
> @@ -658,6 +658,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>          if ( !cpumask_empty(pcpu_mask) )
>              flush_tlb_mask(pcpu_mask);
> 
> +        output.rep_complete = input.rep_count;
> +
>          status = HV_STATUS_SUCCESS;
>          break;
>      }
> --
> 2.1.4

