Re: [Xen-devel] [PATCH V3 3/6] x86/xsaves: enable xsaves/xrstors for hvm guest
On Fri, Aug 07, 2015 at 02:04:51PM +0100, Andrew Cooper wrote:
> On 07/08/15 09:22, Shuai Ruan wrote:
> >
> >>>  void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
> >>>                 unsigned int *ecx, unsigned int *edx)
> >>>  {
> >>> @@ -4456,6 +4460,34 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
> >>>                      *ebx = _eax + _ebx;
> >>>              }
> >>>          }
> >>> +        if ( count == 1 )
> >>> +        {
> >>> +            if ( cpu_has_xsaves )
> >>> +            {
> >>> +                *ebx = XSTATE_AREA_MIN_SIZE;
> >>> +                if ( v->arch.xcr0 | v->arch.msr_ia32_xss )
> >>> +                    for ( sub_leaf = 2; sub_leaf < 63; sub_leaf++ )
> >>> +                    {
> >>> +                        if ( !((v->arch.xcr0 | v->arch.msr_ia32_xss)
> >>> +                               & (1ULL << sub_leaf)) )
> >>> +                            continue;
> >>> +                        domain_cpuid(d, input, sub_leaf, &_eax, &_ebx, &_ecx,
> >>> +                                     &_edx);
> >>> +                        *ebx = *ebx + _eax;
> >>> +                    }
> >>> +            }
> >>> +            else
> >>> +            {
> >>> +                *eax &= ~XSAVES;
> >>> +                *ebx = *ecx = *edx = 0;
> >>> +            }
> >>> +            if ( !cpu_has_xgetbv1 )
> >>> +                *eax &= ~XGETBV1;
> >>> +            if ( !cpu_has_xsavec )
> >>> +                *eax &= ~XSAVEC;
> >>> +            if ( !cpu_has_xsaveopt )
> >>> +                *eax &= ~XSAVEOPT;
> >>> +        }
> >> Urgh - I really need to get domain cpuid fixed in Xen. This is
> >> currently making a very bad situation a little worse.
> >>
> > In patch 4, I expose xsaves/xsavec/xsaveopt and need to check
> > whether the hardware supports them. What's your suggestion about this?
>
> Calling into domain_cpuid() in the loop is not useful as nothing will
> set the subleaves up. As a first pass, reading from
> xstate_{offsets,sizes} will be better than nothing, as it will at least
> match reality until the domain is migrated.
What do you mean by xstate_{offsets,sizes}?
>
For CPUID(eax=0dh) with subleaf 1, the value of ebx changes
according to v->arch.xcr0 | v->arch.msr_ia32_xss, so adding code to
hvm_cpuid() is the best way I can think of to handle it. Do you have
any suggestions :)?
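
Something like the following is what I understand you to be suggesting:
computing the compacted area size from the host xstate_sizes[] array instead
of calling domain_cpuid() in the loop. This is only an illustrative sketch;
it assumes xstate_sizes[] is made reachable from hvm_cpuid(), and it ignores
the per-component 64-byte alignment that the compacted format may require.

    /* Sketch: CPUID.(EAX=0xD,ECX=1).EBX = size of the compacted XSAVE area
     * covering the states currently enabled in XCR0 | IA32_XSS. */
    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
    uint64_t states = v->arch.xcr0 | v->arch.msr_ia32_xss;

    for ( i = 2; i < 63; i++ )
        if ( states & (1ULL << i) )
            size += xstate_sizes[i]; /* per-component save area size */
    *ebx = size;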
> Longterm, I plan to overhaul the cpuid infrastructure to allow it to
> properly represent per-core and per-package data, as well as move it
> into the Xen architectural migration state, to avoid any host specific
> values leaking into guest state. This however is also a lot of work,
> which you don't want to be dependent on.
>
> >
> >>>  static int construct_vmcs(struct vcpu *v)
> >>>  {
> >>>      struct domain *d = v->domain;
> >>> @@ -1204,6 +1206,9 @@ static int construct_vmcs(struct vcpu *v)
> >>>          __vmwrite(GUEST_PAT, guest_pat);
> >>>      }
> >>>
> >>> +    if ( cpu_has_vmx_xsaves )
> >>> +        __vmwrite(XSS_EXIT_BITMAP, VMX_XSS_EXIT_BITMAP);
> >>> +
> >>>      vmx_vmcs_exit(v);
> >>>
> >>>      /* PVH: paging mode is updated by arch_set_info_guest(). */
> >>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> >>> index d3183a8..64ff63b 100644
> >>> --- a/xen/arch/x86/hvm/vmx/vmx.c
> >>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >>> @@ -2708,6 +2708,16 @@ static int vmx_handle_apic_write(void)
> >>>      return vlapic_apicv_write(current, exit_qualification & 0xfff);
> >>>  }
> >>>
> >>> +static void vmx_handle_xsaves(void)
> >>> +{
> >>> +    WARN();
> >>> +}
> >>> +
> >>> +static void vmx_handle_xrstors(void)
> >>> +{
> >>> +    WARN();
> >>> +}
> >>> +
> >> What are these supposed to do? They are not appropriate handlers.
> >>
> > These two handlers do nothing here. Performing xsaves in an HVM guest
> > will not trap into the hypervisor with this patch (since XSS_EXIT_BITMAP
> > is set to zero). However, it may trap in the future. See SDM Volume 3
> > Section 25.1.3 for detailed information.
>
> in which case use domain_crash(). WARN() here will allow a guest to DoS
> Xen.
I will change this in the next version.
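
For the record, what I have in mind for the next version is roughly the
following (a sketch only; it assumes that crashing the offending guest,
rather than warning, is the right response to an exit we never enabled in
XSS_EXIT_BITMAP):

    static void vmx_handle_xsaves(void)
    {
        /* XSS_EXIT_BITMAP is 0, so this exit should never occur. */
        gdprintk(XENLOG_ERR, "xsaves should not cause a vmexit\n");
        domain_crash(current->domain);
    }

    static void vmx_handle_xrstors(void)
    {
        /* Likewise for xrstors: crash the guest instead of WARN()ing. */
        gdprintk(XENLOG_ERR, "xrstors should not cause a vmexit\n");
        domain_crash(current->domain);
    }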
>
> ~Andrew
>
Thanks for your review, Andrew.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel