
Re: [Xen-devel] [PATCH v4 3/4] nested vmx: optimize for bulk access of virtual VMCS



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Tuesday, January 22, 2013 9:13 PM
> To: Xu, Dongxiao
> Cc: Dong, Eddie; Nakajima, Jun; Zhang, Xiantao; xen-devel
> Subject: Re: [PATCH v4 3/4] nested vmx: optimize for bulk access of virtual
> VMCS
> 
> >>> On 22.01.13 at 13:00, Dongxiao Xu <dongxiao.xu@xxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -30,6 +30,7 @@
> >
> >  static void nvmx_purge_vvmcs(struct vcpu *v);
> >
> > +#define VMCS_BUF_SIZE 500
> 
> The biggest batch I can spot is about 60 elements large, so
> why 500?
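
For illustration, the define could simply be tightened to cover the largest
batch actually passed, e.g. (exact value still to be decided):

    /* Largest field batch passed in is ~60 entries; leave some headroom. */
    #define VMCS_BUF_SIZE 100
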
> 
> > @@ -83,6 +90,9 @@ void nvmx_vcpu_destroy(struct vcpu *v)
> >          list_del(&item->node);
> >          xfree(item);
> >      }
> > +
> > +    if ( nvcpu->vvmcx_buf )
> > +        xfree(nvcpu->vvmcx_buf);
> 
> No need for the if() - xfree() copes quite well with NULL pointers.
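
If I read it correctly, the destroy hunk can then simply become (sketch):

    /* xfree() copes with NULL, so the check can go away. */
    xfree(nvcpu->vvmcx_buf);
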
> 
> > @@ -830,6 +840,35 @@ static void vvmcs_to_shadow(void *vvmcs,
> unsigned int field)
> >      __vmwrite(field, value);
> >  }
> >
> > +static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
> > +                                 const u16 *field)
> > +{
> > +    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> > +    void *vvmcs = nvcpu->nv_vvmcx;
> > +    u64 *value = nvcpu->vvmcx_buf;
> > +    unsigned int i;
> > +
> > +    if ( !cpu_has_vmx_vmcs_shadowing )
> > +        goto fallback;
> > +
> > +    if ( !value || n > VMCS_BUF_SIZE )
> 
> And then, if you lower that value, be verbose (at least in debugging
> builds) about the buffer size being exceeded.
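
A rough sketch of how the fallback check could report this (gdprintk() used
here for illustration, since it is compiled out in non-debug builds):

    if ( !value || n > VMCS_BUF_SIZE )
    {
        /* Warn (debug builds only) that we fall back to single accesses. */
        gdprintk(XENLOG_WARNING,
                 "vvmcs bulk access: falling back to single accesses (n=%u)\n",
                 n);
        goto fallback;
    }
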
> 
> > --- a/xen/include/asm-x86/hvm/vcpu.h
> > +++ b/xen/include/asm-x86/hvm/vcpu.h
> > @@ -100,6 +100,8 @@ struct nestedvcpu {
> >       */
> >      bool_t nv_ioport80;
> >      bool_t nv_ioportED;
> > +
> > +    u64 *vvmcx_buf; /* A temp buffer for data exchange */
> 
> VMX-specific field in non-VMX structure? And wouldn't the buffer
> anyway more efficiently be per-pCPU instead of per-vCPU?

Yes, it should be VMX specific.
I also considered making the buffer per-pCPU, but then we need somewhere to
initialize and finalize the pointer.
One possible place is vmx_cpu_up()/vmx_cpu_down(), but I don't think it is
appropriate to put that code into those two functions, since it is not really
related to them. That's why I put the buffer into the per-vCPU structure.
Do you have any suggestion about it?
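
For reference, the per-pCPU variant I considered would look roughly like the
sketch below (helper names are only illustrative), with allocation and freeing
hooked into vmx_cpu_up()/vmx_cpu_down():

    static DEFINE_PER_CPU(u64 *, vvmcx_buf);

    /* Would be called from vmx_cpu_up() (or similar) on CPU bring-up. */
    static int vvmcx_buf_alloc(unsigned int cpu)
    {
        per_cpu(vvmcx_buf, cpu) = xmalloc_array(u64, VMCS_BUF_SIZE);
        return per_cpu(vvmcx_buf, cpu) ? 0 : -ENOMEM;
    }

    /* ... and the counterpart from vmx_cpu_down() on CPU teardown. */
    static void vvmcx_buf_free(unsigned int cpu)
    {
        xfree(per_cpu(vvmcx_buf, cpu));
        per_cpu(vvmcx_buf, cpu) = NULL;
    }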

Thanks,
Dongxiao

> 
> Jan

