
Re: [Xen-devel] [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m
On Fri, 2017-09-29 at 11:53 +0100, George Dunlap wrote:
> On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> > If an IPI flushes vCPU's np2m object just before nested vmentry, there
> > will be a stale shadow EPTP value in VMCS02. Allow vmentry to be
> > restarted in such cases and add nvmx_eptp_update() to perform an update.
> > 
> > Signed-off-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
> > ---
> >  xen/arch/x86/hvm/vmx/entry.S |  6 ++++++
> >  xen/arch/x86/hvm/vmx/vmx.c   |  8 +++++++-
> >  xen/arch/x86/hvm/vmx/vvmx.c  | 14 ++++++++++++++
> >  3 files changed, 27 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
> > index 53eedc6363..9fb8f89220 100644
> > --- a/xen/arch/x86/hvm/vmx/entry.S
> > +++ b/xen/arch/x86/hvm/vmx/entry.S
> > @@ -79,6 +79,8 @@ UNLIKELY_END(realmode)
> >  
> >          mov  %rsp,%rdi
> >          call vmx_vmenter_helper
> > +        cmp  $0,%eax
> > +        jne .Lvmx_vmentry_restart
> >          mov  VCPU_hvm_guest_cr2(%rbx),%rax
> >  
> >          pop  %r15
> > @@ -117,6 +119,10 @@ ENTRY(vmx_asm_do_vmentry)
> >          GET_CURRENT(bx)
> >          jmp  .Lvmx_do_vmentry
> >  
> > +.Lvmx_vmentry_restart:
> > +        sti
> > +        jmp  .Lvmx_do_vmentry
> > +
> >  .Lvmx_goto_emulator:
> >          sti
> >          mov  %rsp,%rdi
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> > index f6da119c9f..06509590b7 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -4223,13 +4223,17 @@ static void lbr_fixup(void)
> >          bdw_erratum_bdf14_fixup();
> >  }
> >  
> > -void vmx_vmenter_helper(const struct cpu_user_regs *regs)
> > +int vmx_vmenter_helper(const struct cpu_user_regs *regs)
> >  {
> >      struct vcpu *curr = current;
> >      u32 new_asid, old_asid;
> >      struct hvm_vcpu_asid *p_asid;
> >      bool_t need_flush;
> >  
> > +    /* Shadow EPTP can't be updated here because irqs are disabled */
> > +     if ( nestedhvm_vcpu_in_guestmode(curr) && vcpu_nestedhvm(curr).stale_np2m )
> > +         return 1;
> > +
> >      if ( curr->domain->arch.hvm_domain.pi_ops.do_resume )
> >          curr->domain->arch.hvm_domain.pi_ops.do_resume(curr);
> >  
> > @@ -4290,6 +4294,8 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
> >      __vmwrite(GUEST_RIP,    regs->rip);
> >      __vmwrite(GUEST_RSP,    regs->rsp);
> >      __vmwrite(GUEST_RFLAGS, regs->rflags | X86_EFLAGS_MBS);
> > +
> > +    return 0;
> >  }
> >  
> >  /*
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> > index ea2da14489..26ce349c76 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1405,12 +1405,26 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
> >      vmsucceed(regs);
> >  }
> >  
> > +static void nvmx_eptp_update(void)
> > +{
> > +    if ( !nestedhvm_vcpu_in_guestmode(current) ||
> > +          vcpu_nestedhvm(current).nv_vmexit_pending ||
> > +         !vcpu_nestedhvm(current).stale_np2m ||
> > +         !nestedhvm_paging_mode_hap(current) )
> > +        return;
> > +
> > +    __vmwrite(EPT_POINTER, get_shadow_eptp(current));
> > +    vcpu_nestedhvm(current).stale_np2m = false;
> 
> Hmm, so interrupts are enabled here.  What happens if a flush IPI occurs
> between these two lines of code?  Won't we do the vmenter with a stale np2m?
> 
> It seems like we should clear stale_np2m first.  If an IPI occurs then,
> we'll end up re-executing the vmenter unnecessarily, but it's better to
> do that than to not re-execute it when we need to.

Good catch! stale_np2m must indeed be cleared before the shadow EPTP
is updated.
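
Something along these lines should close the race (an untested sketch
of the reordered update, with the same checks as in the patch above):

    static void nvmx_eptp_update(void)
    {
        if ( !nestedhvm_vcpu_in_guestmode(current) ||
              vcpu_nestedhvm(current).nv_vmexit_pending ||
             !vcpu_nestedhvm(current).stale_np2m ||
             !nestedhvm_paging_mode_hap(current) )
            return;

        /*
         * Clear stale_np2m before writing the shadow EPTP: a flush IPI
         * that arrives between the two statements re-sets the flag, and
         * vmx_vmenter_helper() (which runs with irqs disabled) will then
         * see it and restart the vmentry with a fresh np2m.
         */
        vcpu_nestedhvm(current).stale_np2m = false;
        __vmwrite(EPT_POINTER, get_shadow_eptp(current));
    }

If the IPI instead lands after the flag is cleared, the worst case is
one unnecessary vmentry restart, which is harmless.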

-- 
Thanks,
Sergey