[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v1 2/8] x86/vmx: Remove lazy FPU support


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 19 Mar 2026 16:54:34 +0000
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 19 Mar 2026 16:54:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 19/03/2026 4:43 pm, Jan Beulich wrote:
> On 19.03.2026 17:38, Andrew Cooper wrote:
>> On 19/03/2026 1:29 pm, Ross Lagerwall wrote:
>>> Remove lazy FPU support from the VMX code since fully_eager_fpu is now
>>> always true.
>>>
>>> No functional change intended.
>>>
>>> Signed-off-by: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>
>>> ---
>>>  xen/arch/x86/hvm/vmx/vmcs.c             |  8 +--
>>>  xen/arch/x86/hvm/vmx/vmx.c              | 70 +------------------------
>>>  xen/arch/x86/hvm/vmx/vvmx.c             | 15 +-----
>>>  xen/arch/x86/include/asm/hvm/vmx/vmcs.h |  2 -
>>>  4 files changed, 5 insertions(+), 90 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
>>> index c2e7f9aed39f..8e52ef4d497a 100644
>>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>>> @@ -1247,10 +1247,7 @@ static int construct_vmcs(struct vcpu *v)
>>>      __vmwrite(HOST_TR_SELECTOR, TSS_SELECTOR);
>>>  
>>>      /* Host control registers. */
>>> -    v->arch.hvm.vmx.host_cr0 = read_cr0() & ~X86_CR0_TS;
>>> -    if ( !v->arch.fully_eager_fpu )
>>> -        v->arch.hvm.vmx.host_cr0 |= X86_CR0_TS;
>>> -    __vmwrite(HOST_CR0, v->arch.hvm.vmx.host_cr0);
>>> +    __vmwrite(HOST_CR0, read_cr0());
>> (Not for this patch) but I'm pretty sure there's room to optimise this
>> further.
>>
>> CR0 should be constant, both here and in SVM.  Reading the active cr0 is
>> an example of the anti-pattern we need to purge to make nested-virt work
>> better.
> In which case, is it a good idea to purge the host_cr0 field?

Oh hmm, I take back my R-by slightly.  We still need to initialise
v->arch.hvm.vmx.host_cr0 for this patch to be no functional change. 
Easy enough to fix, or fix on commit.

That said, I think we probably do want to purge host_cr0 eventually.

There are a few cases where host_cr0 != guest_cr0.  CPUs prior to
unrestricted_guest are the obvious case, where we can't run with CR0.PE
!= 1, but the value needed should always be derivable from guest_cr0
and the hardware capabilities.

~Andrew



 

