
Re: [Xen-devel] [PATCH 10/17] PVH xen: introduce vmx_pvh.c and pvh.c



>>> On 04.05.13 at 03:40, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Fri, 03 May 2013 07:33:50 +0100 "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> >>> On 03.05.13 at 02:40, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >>> wrote:
>> > On Thu, 02 May 2013 07:53:18 +0100
>> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> > 
>> >     if ( (tmp_ar & X86_SEG_AR_CS_LM_ACTIVE) && selector < x86_seg_fs )
>> 
>> This is still wrong. As said before, you need to look at the _CS_
>> access rights, not the ones of the selector register you read.
> 
> Hmm... unless I'm reading the SDM wrong, it says "for non-code segments
> bit 21 is reserved and should always be set to 0". But it's probably
> clearer to check for _CS_ only.

I'm afraid you're still not understanding what I'm trying to explain:
Whether base and limit are ignored (and default to 0/~0) depends
on whether the guest executes in 64-bit mode, and this you can
know only by looking at CS.L, no matter what selector register
you're reading.
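
A minimal sketch of that mode check (long_mode_active() is a name chosen
here for illustration, not an existing Xen function; the attr.fields
layout is as in the segment_register definition of that era):

    static bool_t long_mode_active(struct vcpu *v)
    {
        struct segment_register cs;

        /* CS.L -- not the L bit of whatever register is being decoded --
         * determines whether the guest executes in 64-bit mode. */
        hvm_get_segment_register(v, x86_seg_cs, &cs);
        return cs.attr.fields.l;
    }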

Maybe part of the confusion stems from you mixing two things
here - reading of a descriptor from a descriptor table (which is
what read_descriptor() does, as that's all you can do for PV
guests) vs reading of the hidden portions of a selector register
(which is what hvm_get_segment_register() does, thanks to
VMX/SVM providing access).
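
To make the two access paths concrete, a simplified sketch (error
handling and granularity scaling per the G bit elided;
read_gdt_ldt_entry() is a hypothetical stand-in for the table walk that
read_descriptor() performs):

    /* PV: parse the descriptor-table entry itself -- the only thing
     * architecturally visible to the hypervisor for a PV guest. */
    static int pv_read(struct vcpu *v, unsigned int sel,
                       unsigned long *base, unsigned long *limit)
    {
        struct desc_struct d;   /* { u32 a, b; } -- raw GDT/LDT entry */

        if ( !read_gdt_ldt_entry(v, sel, &d) )   /* hypothetical helper */
            return 0;
        *base  = (d.a >> 16) | ((d.b & 0xff) << 16) | (d.b & 0xff000000);
        *limit = (d.a & 0xffff) | (d.b & 0xf0000);
        return 1;
    }

    /* HVM/PVH: VMX/SVM expose the hidden (cached) part of each selector
     * register, so we can read what the CPU is actually using. */
    static void hvm_read(struct vcpu *v, enum x86_segment which_sel,
                         unsigned long *base, unsigned long *limit)
    {
        struct segment_register seg;

        hvm_get_segment_register(v, which_sel, &seg);
        *base  = seg.base;
        *limit = seg.limit;
    }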

>> But as also hinted at - do you really need the override at all?
> 
> Yes, because of the following check in the insn_fetch macro,
> 
>      "(eip) > (limit) - (sizeof(_x) - 1)", in the if statement:
> 
>    if ( (limit) < sizeof(_x) - 1 || (eip) > (limit) - (sizeof(_x) - 1) )   \
>        goto fail;                                                          \
> 
> Reading the VMCS would return the 32-bit limit of 0xffffffff.

That's unfortunate of course - then the override indeed is
unavoidable.
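
To spell the failure out with concrete (hypothetical but representative)
numbers:

    /* What the VMCS reports for a 64-bit guest's code segment: */
    unsigned long limit = 0xffffffffUL;          /* 32-bit limit, not ~0UL */
    unsigned long eip   = 0xffffffff81000000UL;  /* a plausible 64-bit RIP */

    /* insn_fetch's second test, for a 2-byte fetch:
     *     eip > limit - 1
     *     0xffffffff81000000 > 0xfffffffe  =>  true
     * so the fetch takes 'goto fail' even though the access is perfectly
     * valid in long mode -- hence the override of limit to ~0UL. */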

> BTW, the same override
> exists in read_descriptor() (it seems to do the override for FS and
> GS also, which I don't understand).

See above - this function just can't access the hidden portion of
the selector registers, and hence doesn't even care what register
a particular selector might be in. The caller has to take care to
ignore base and limit when the guest is in 64-bit mode.
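
Sketched from the caller's side (guest_64bit_mode() is a hypothetical
predicate standing in for however the caller determines CS.L; this is
not the actual emulate_privileged_op code):

    unsigned long base, limit;
    unsigned int ar;

    if ( !read_descriptor(sel, v, regs, &base, &limit, &ar, vm86attr) )
        return 0;

    /* read_descriptor() cannot know which register 'sel' came from,
     * so the 64-bit-mode fixup is the caller's job. */
    if ( guest_64bit_mode(v) )
    {
        limit = ~0UL;              /* limit ignored for every register */
        if ( which_sel < x86_seg_fs )
            base = 0UL;            /* FS/GS keep their programmed base */
    }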

> Anyways, thanks to hvm_get_segment_register(), I got rid of the function 
> vmx_pvh_read_descriptor():
> 
> static int read_descriptor_sel(unsigned int sel,
>                                enum x86_segment which_sel,
>                                struct vcpu *v,
>                                const struct cpu_user_regs *regs,
>                                unsigned long *base,
>                                unsigned long *limit,
>                                unsigned int *ar,
>                                unsigned int vm86attr)
> {
>     if ( is_pvh_vcpu(v) )
>     {
>         struct segment_register seg;
> 
>         hvm_get_segment_register(v, which_sel, &seg);
>         *ar = (unsigned int)seg.attr.bytes;
> 
>         /* ar is returned packed as in segment_attributes_t. fix it up */
>         *ar = (*ar & 0xff ) | ((*ar & 0xf00) << 4);
>         *ar = *ar << 8;
> 
>         if ( (vm86attr & _SEGMENT_CODE) && (*ar & _SEGMENT_L) &&

So as per above this is still wrong.

>              (which_sel < x86_seg_fs) )
>         {
>             *base = 0UL;
>             *limit = ~0UL;
>         }
>         else
>         {
>             *base = (unsigned long)seg.base;
>             *limit = (unsigned long)seg.limit;
>         }

One thing on which I misguided you slightly: you will need to
override the limit regardless of the selector register; only the base
must not be forced to zero for FS and GS.

Jan

> 
>         return 1;
>     }
> 
>     return read_descriptor(sel, v, regs, base, limit, ar, vm86attr);
> 
> }
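
For reference, folding Jan's two corrections back into the quoted
function, the PVH branch might end up looking like this (a sketch, not
the committed patch: it checks CS.L explicitly rather than the L bit of
the register being read, and applies the limit override to every
register while sparing only the FS/GS base):

    if ( is_pvh_vcpu(v) )
    {
        struct segment_register seg, cs;

        hvm_get_segment_register(v, which_sel, &seg);
        *ar = seg.attr.bytes;

        /* Repack into the descriptor-style layout read_descriptor()
         * returns to its callers. */
        *ar = (*ar & 0xff) | ((*ar & 0xf00) << 4);
        *ar <<= 8;

        *base = seg.base;
        *limit = seg.limit;

        /* 64-bit mode is determined by CS.L alone, never by the L bit
         * of the register being read. */
        hvm_get_segment_register(v, x86_seg_cs, &cs);
        if ( cs.attr.fields.l )
        {
            *limit = ~0UL;             /* limit ignored for all registers */
            if ( which_sel < x86_seg_fs )
                *base = 0UL;           /* base ignored except for FS/GS   */
        }
        return 1;
    }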






 

