
Re: [Xen-devel] [PATCH] x86/hvm/viridian: fix the TLB flush hypercall



> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: 16 March 2016 13:20
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Keir (Xen.org); Jan Beulich
> Subject: Re: [PATCH] x86/hvm/viridian: fix the TLB flush hypercall
> 
> On 16/03/16 13:00, Paul Durrant wrote:
> > Commit b38d426a "flush remote tlbs by hypercall" added support to allow
> > Windows to request a flush of remote TLBs via hypercall rather than by IPI.
> > Unfortunately it seems that this code was broken in a couple of ways:
> >
> > 1) The allocation of the per-vcpu flush mask is gated on whether the
> >    domain has viridian features enabled but the call to allocate is
> >    made before the toolstack has enabled those features. This results
> >    in a NULL pointer dereference.
> >
> > 2) One of the flush hypercall variants is a rep op, but the code
> >    does not update the output data with the reps completed. Hence the
> >    guest will spin repeatedly making the hypercall because it believes
> >    it has uncompleted reps.
> >
> > This patch fixes both of these issues and also adds a check to make
> > sure the current vCPU is not included in the flush mask (since there's
> > clearly no need for the CPU to IPI itself).
> 
> Thinking more about this, the ASID flush does properly take care of the
> TLB flushing.  Why do we then subsequently use flush_tlb_mask(), as
> opposed to a less heavyweight alternative like
> smp_send_event_check_mask()?
> 

Yes, all I need is to force the CPUs out of non-root mode, so that will serve 
the purpose. V2 coming up.

  Paul

> ~Andrew
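
For reference, a minimal sketch of point 1) above. The structure, field and
function names here are illustrative stand-ins, not Xen's actual definitions;
the point is simply that the per-vCPU flush mask has to be allocated
unconditionally at vCPU initialisation, because the toolstack only enables the
viridian leaves after the vCPUs have been created.

    #include <errno.h>
    #include <stdlib.h>

    typedef unsigned long cpumask_t;      /* stand-in for Xen's cpumask_t */

    struct viridian_vcpu {
        cpumask_t *flush_cpumask;         /* illustrative field name */
    };

    static int viridian_vcpu_init(struct viridian_vcpu *vv)
    {
        /*
         * Allocate unconditionally: gating this on the viridian feature
         * bit races with the toolstack, which sets that bit only after
         * vCPU creation, so the mask would still be NULL when the guest
         * first issues the flush hypercall.
         */
        vv->flush_cpumask = calloc(1, sizeof(*vv->flush_cpumask));

        return vv->flush_cpumask ? 0 : -ENOMEM;
    }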
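
Similarly, a sketch of point 2), again with illustrative input/output layouts
rather than Xen's actual hypercall structures: the rep variant of the flush
hypercall has to report how many reps it completed, otherwise the guest
assumes no progress was made and reissues the hypercall indefinitely.

    #include <stdint.h>

    struct hv_input  { uint32_t rep_count; };     /* illustrative layout */
    struct hv_output { uint32_t rep_complete; };  /* illustrative layout */

    static void flush_tlb_rep(const struct hv_input *in,
                              struct hv_output *out)
    {
        /* ... flush the requested ranges for every rep ... */

        /* Report all reps as done so the guest does not spin retrying. */
        out->rep_complete = in->rep_count;
    }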
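
Finally, a sketch of what the follow-up exchange suggests for v2 (assumed, not
the committed change): since the ASID flush already takes care of the TLB when
each vCPU re-enters the guest, the remote pCPUs only need to be kicked out of
non-root mode, so an event-check IPI is sufficient. This uses Xen's
cpumask/smp helpers; the function name and surrounding plumbing are
illustrative.

    #include <xen/cpumask.h>
    #include <xen/smp.h>

    static void viridian_flush_kick(cpumask_t *flush_mask)
    {
        /* The local CPU flushes its own TLB directly; never IPI ourselves. */
        cpumask_clear_cpu(smp_processor_id(), flush_mask);

        /*
         * smp_send_event_check_mask() just raises the event-check IPI,
         * which is cheaper than flush_tlb_mask() but still forces each
         * target CPU out of non-root mode.
         */
        smp_send_event_check_mask(flush_mask);
    }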

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

