
[Xen-devel] Re: Avoid alloc for xsave before xsave_init


  • To: "Wei, Gang" <gang.wei@xxxxxxxxx>, "Huang2, Wei" <Wei.Huang2@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Fri, 14 Jan 2011 10:48:32 +0000

On 14/01/2011 07:11, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

>> I agree that we should do a test-and-exit on is_idle_vcpu() in
>> xsave_{alloc,free}_save_area. Further, we had better add
>> assert(xsave_cntxt_size >= 576) after the test-and-exit clause to
>> ensure no buffer overflow can happen in the future (see the layout
>> sketch after the quoted patch below).
>> 
>> I reviewed the context switch code and confirmed that it does not
>> do XSAVE when switching out an idle vcpu.
> 
> Here is the patch.

I applied your patch, plus some cleanup, as of c/s 22751.

 -- Keir

> diff -r d839631b6048 xen/arch/x86/i387.c
> --- a/xen/arch/x86/i387.c Thu Jan 13 00:18:35 2011 +0000
> +++ b/xen/arch/x86/i387.c Sat Jan 15 20:15:24 2011 +0800
> @@ -35,6 +35,9 @@ void save_init_fpu(struct vcpu *v)
>  
>      if ( cpu_has_xsave )
>      {
> +        if ( is_idle_vcpu(v) )
> +            goto out;
> +
>          /* XCR0 normally represents what guest OS set. In case of Xen itself,
>           * we set all accumulated feature mask before doing save/restore.
>           */
> @@ -87,6 +90,7 @@ void save_init_fpu(struct vcpu *v)
>          asm volatile ( "fnsave %0 ; fwait" : "=m" (*fpu_ctxt) );
>      }
>  
> +out:
>      v->fpu_dirtied = 0;
>      write_cr0(cr0|X86_CR0_TS);
>  }
> @@ -138,6 +142,7 @@ void restore_fpu(struct vcpu *v)
>  }
>  
>  #define XSTATE_CPUID 0xd
> +#define XSAVE_AREA_MIN_SIZE (512 + 64)
>  
>  /*
>   * Maximum size (in byte) of the XSAVE/XRSTOR save area required by all
> @@ -177,7 +182,7 @@ void xsave_init(void)
>      }
>  
>      /* FP/SSE, XSAVE.HEADER, YMM */
> -    min_size =  512 + 64 + ((eax & XSTATE_YMM) ? XSTATE_YMM_SIZE : 0);
> +    min_size =  XSAVE_AREA_MIN_SIZE + ((eax & XSTATE_YMM) ? XSTATE_YMM_SIZE : 0);
>      BUG_ON(ecx < min_size);
>  
>      /*
> @@ -214,8 +219,10 @@ int xsave_alloc_save_area(struct vcpu *v
>  {
>      void *save_area;
>  
> -    if ( !cpu_has_xsave )
> +    if ( !cpu_has_xsave || is_idle_vcpu(v) )
>          return 0;
> +
> +    BUG_ON(xsave_cntxt_size < XSAVE_AREA_MIN_SIZE);
>  
>      /* XSAVE/XRSTOR requires the save area be 64-byte-boundary aligned. */
>      save_area = _xmalloc(xsave_cntxt_size, 64);
> @@ -235,6 +242,9 @@ int xsave_alloc_save_area(struct vcpu *v
>  
>  void xsave_free_save_area(struct vcpu *v)
>  {
> +    if ( is_idle_vcpu(v) )
> +        return;
> +
>      xfree(v->arch.xsave_area);
>      v->arch.xsave_area = NULL;
>  }
> diff -r d839631b6048 xen/include/asm-x86/i387.h
> --- a/xen/include/asm-x86/i387.h Thu Jan 13 00:18:35 2011 +0000
> +++ b/xen/include/asm-x86/i387.h Sat Jan 15 20:08:14 2011 +0800
> @@ -134,6 +134,9 @@ static inline void setup_fpu(struct vcpu
>          v->fpu_dirtied = 1;
>          if ( cpu_has_xsave )
>          {
> +            if ( is_idle_vcpu(v) )
> +                return;
> +
>              if ( !v->fpu_initialised )
>                  v->fpu_initialised = 1;
> 
> Jimmy
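
For reference, the 576-byte lower bound discussed above is the fixed part of
the XSAVE save area: the 512-byte legacy FXSAVE/FXRSTOR region followed by
the 64-byte XSAVE header, which is exactly what the XSAVE_AREA_MIN_SIZE
macro in the patch encodes as 512 + 64. Below is a minimal standalone sketch
of that layout; it is illustrative only, and the struct names are invented
for the example rather than taken from the Xen tree:

    /*
     * Illustrative layout sketch -- not Xen code.  An XSAVE save area
     * starts with the 512-byte legacy FXSAVE/FXRSTOR region, followed
     * by the 64-byte XSAVE header; extended state such as YMM is
     * appended after that.  So any valid save area holds at least
     * 512 + 64 = 576 bytes.
     */
    #include <assert.h>
    #include <stdint.h>

    struct fxsave_region {
        uint8_t bytes[512];            /* x87/MMX/SSE state */
    };

    struct xsave_header {
        uint64_t xstate_bv;            /* bitmap of saved state components */
        uint8_t  reserved[56];         /* pads the header to 64 bytes */
    };

    struct xsave_area_min {
        struct fxsave_region legacy;   /* offset   0, 512 bytes */
        struct xsave_header  header;   /* offset 512,  64 bytes */
    };

    int main(void)
    {
        /* 512 + 64 == 576, the bound checked against xsave_cntxt_size. */
        assert(sizeof(struct xsave_area_min) == 576);
        return 0;
    }

Given that bound, the BUG_ON(xsave_cntxt_size < XSAVE_AREA_MIN_SIZE) in
xsave_alloc_save_area() guarantees the CPUID-reported size can at least hold
the legacy region and the header before the allocator trusts it.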


