
Re: [Xen-devel] [PATCH RFC 2/9] xen: Optimize introspection access to guest state



On 10/07/14 12:57, Razvan Cojocaru wrote:
> On 07/10/2014 11:17 AM, Andrew Cooper wrote:
>> On 10/07/2014 09:05, Razvan Cojocaru wrote:
>>> On 07/02/2014 06:31 PM, Andrew Cooper wrote:
>>>> On 02/07/14 14:33, Razvan Cojocaru wrote:
>>>>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>>>>> index 2caa04a..fed21b6 100644
>>>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>>>> @@ -425,6 +425,7 @@ static void vmx_vmcs_save(struct vcpu *v, struct hvm_hw_cpu *c)
>>>>>      c->cr4 = v->arch.hvm_vcpu.guest_cr[4];
>>>>>  
>>>>>      c->msr_efer = v->arch.hvm_vcpu.guest_efer;
>>>>> +    c->guest_x86_mode = vmx_guest_x86_mode(v);
>>>> guest_x86_mode is a linear function of cr0, eflags and efer.  It can be
>>>> calculated by userspace and doesn't need to be transmitted individually.
>>> OK, but 1) I'm not sending eflags into userspace,
>> rflags is in the structure between r15 and dr7.
>>
>>>  and 2) I thought Xen's
>>> vmx_guest_x86_mode() function is more trustworthy
>> It is not a matter of trust.  It is a matter of correctness, and it
>> would be easy for userspace to simply copy what vmx_guest_x86_mode()
>> already has.
> Actually, the point I was trying to make is that I find it safer to use
> vmx_guest_x86_mode() in the HV because otherwise I need to duplicate
> that code in userspace (which I'm currently trying to do), and if for
> some reason the implementation changes, someone needs to change it in
> the userspace code as well. Having it only in one place in the HV looked
> like a good idea.
>
> As for it being a function of cr0, eflags and efer, it would appear that
> it is also a function of cs_arbytes:
>
> static int vmx_guest_x86_mode(struct vcpu *v)
> {
>     unsigned long cs_ar_bytes;
>
>     if ( unlikely(!(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE)) )
>         return 0;
>     if ( unlikely(guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
>         return 1;
>     __vmread(GUEST_CS_AR_BYTES, &cs_ar_bytes);
>     if ( hvm_long_mode_enabled(v) &&
>          likely(cs_ar_bytes & X86_SEG_AR_CS_LM_ACTIVE) )
>         return 8;
>     return (likely(cs_ar_bytes & X86_SEG_AR_DEF_OP_SIZE) ? 4 : 2);
> }
>
> However, in hvm.c, hvm_save_cpu_ctxt():
>
> hvm_get_segment_register(v, x86_seg_cs, &seg);
> ctxt.cs_sel = seg.sel;
> ctxt.cs_limit = seg.limit;
> ctxt.cs_base = seg.base;
> ctxt.cs_arbytes = seg.attr.bytes;
>
> Looking further at vmx_get_segment_register() in vmx.c, we get this:
>
>     case x86_seg_cs:
>         __vmread(GUEST_CS_SELECTOR, &sel);
>         __vmread(GUEST_CS_LIMIT,    &limit);
>         __vmread(GUEST_CS_BASE,     &reg->base);
>         __vmread(GUEST_CS_AR_BYTES, &attr);
>         break;
>
> then:
>
>     reg->attr.bytes = (attr & 0xff) | ((attr >> 4) & 0xf00);
>
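> In other words, the raw VMX AR bits 12-15 are shifted down into bits
> 8-11 of attr.bytes. A minimal sketch of that repack, with the raw mask
> values from Xen's headers for reference (pack_arbytes is just a name
> made up for illustration):
>
> /* Raw VMX access-rights masks, as defined in Xen's headers: */
> #define X86_SEG_AR_CS_LM_ACTIVE (1u << 13)   /* CS.L in the raw AR bytes */
> #define X86_SEG_AR_DEF_OP_SIZE  (1u << 14)   /* CS.D/B in the raw AR bytes */
>
> static unsigned int pack_arbytes(unsigned long attr)
> {
>     /* Bits 0-7 stay put; bits 12-15 move down to 8-11, so CS.L lands at
>      * bit 9 and CS.D/B at bit 10 - the raw masks above never match the
>      * packed value. */
>     return (attr & 0xff) | ((attr >> 4) & 0xf00);
> }
>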
> This is why my userspace version of vmx_guest_x86_mode() (which uses
> hwCpu.cs_arbytes from a struct hvm_hw_cpu hwCpu filled by
> xc_domain_hvm_getcontext_partial() - see the sketch below) does not work
> properly: it tests the raw VMX masks against the repacked attr.bytes
> value, in which those bits are always clear, so it always ends up
> returning 2 - for both 32-bit guests, where it should return 4, and
> 64-bit guests, where it should return 8.
>
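> For reference, the fetch itself would look something like this (a
> sketch only: xch, domid and vcpu are placeholders, and error handling
> is elided):
>
> #include <xenctrl.h>
> #include <xen/hvm/save.h>
>
> struct hvm_hw_cpu hwCpu;
> if ( xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
>                                       vcpu, &hwCpu, sizeof(hwCpu)) != 0 )
>     /* handle failure */;
>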
> So this solution would appear to be a bit more involved than the initial
> solution. But you're, of course, right that guest_x86_mode should not be
> VMX-specific.
>
> Would it be OK if I replaced the call to vmx_guest_x86_mode() with a
> call to hvm_funcs.guest_x86_mode(v) (assuming that's possible)?

That would still turn a Xen internal into a part of the ABI, which
should be avoided.

seg.attr.bytes is our architectural representation of segment selector
state, so you should follow the same method as hvm_hw_cpu.  This means
that you should find the LMA bit in bit 9 of the available cs_arbytes.
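
For illustration, a userspace equivalent computed purely from the
hvm_hw_cpu fields might look like the sketch below (the constants are
redefined locally only to keep it self-contained, and the return values
follow the hypervisor's 0/1/2/4/8 convention):

#include <stdint.h>

/* Values as in Xen's public headers; redefined for a self-contained sketch. */
#define X86_CR0_PE     0x00000001
#define X86_EFLAGS_VM  0x00020000
#define EFER_LMA       (1u << 10)

/* Mode from architectural state: cr0, rflags, msr_efer and the packed
 * cs_arbytes of struct hvm_hw_cpu.  In the packed format, CS.L is bit 9
 * and CS.D/B is bit 10. */
static int guest_x86_mode(uint64_t cr0, uint64_t rflags, uint64_t msr_efer,
                          uint32_t cs_arbytes)
{
    if ( !(cr0 & X86_CR0_PE) )
        return 0;                              /* real mode */
    if ( rflags & X86_EFLAGS_VM )
        return 1;                              /* virtual-8086 mode */
    if ( (msr_efer & EFER_LMA) && (cs_arbytes & (1u << 9)) )
        return 8;                              /* 64-bit mode */
    return (cs_arbytes & (1u << 10)) ? 4 : 2;  /* 32-bit vs 16-bit */
}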

~Andrew
