
Re: [Xen-devel] [RFC PATCH 10/16]: PVH xen: introduce vmx_pvh.c



At 16:05 -0800 on 19 Feb (1361289934), Mukesh Rathor wrote:
> On Thu, 24 Jan 2013 16:31:22 +0000
> Tim Deegan <tim@xxxxxxx> wrote:
> 
> > At 18:01 -0800 on 11 Jan (1357927270), Mukesh Rathor wrote:
> > > +
> > > +        case EXIT_REASON_CPUID:              /* 10 */
> > > +        {
> > > +            if ( guest_kernel_mode(vp, regs) ) {
> > > +                pv_cpuid(regs);
> > > +
> > > +                /* Because we are setting CR4.OSFXSR to 0, we need to
> > > +                 * disable this bit because, during boot, the user
> > > +                 * process "init" (which doesn't do cpuid) will do
> > > +                 * 'pxor xmm0,xmm0' and cause #UD. For now disable
> > > +                 * this. HVM doesn't allow setting of CR4.OSFXSR.
> > > +                 * fixme: this and also look at CR4.OSXSAVE */
> > > +
> > > +                __clear_bit(X86_FEATURE_FXSR, &regs->edx);
> > 
> > Shouldn't this be gated on which leaf the guest asked for?
> 
> Yup, looking at it. X86_FEATURE_FXSR belongs to leaf EAX==1, but
> clearing it there doesn't work: the user process "init" running nash
> still executes pxor %xmm0,%xmm0 and takes #UD. Strangely, it works if
> I clear the bit for EAX==0, which corrupts the Intel vendor string
> ("ineI" in EDX). This user process doesn't do cpuid itself, so it
> must be affected some other way.
> 
> This is pretty hard to debug, since it's in nash user code from the
> ramdisk and I can't easily set a breakpoint or add printfs to figure
> out why clearing the bit for EAX==0 makes it work, or what's going on
> for PV and HVM guests. CR0.EM is 0, so the #UD must be coming from
> CR4.OSFXSR==0. Reading the SDMs to learn the OSFXSR stuff better....

Perhaps you need to clear the FXSR feature bit in leaf 0x80000001:EDX as
well?

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
