
Re: [Xen-devel] [PATCH 10/17] PVH xen: introduce vmx_pvh.c and pvh.c



>>> On 02.05.13 at 03:17, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> Ok, I redid it. I created a new function read_descriptor_sel() and rewrote
> vmx_pvh_read_descriptor(). Please let me know if this looks OK to you. Thanks a lot:
> 
> 
> static int read_descriptor_sel(unsigned int sel,
>                                enum sel_type which_sel,
>                                const struct vcpu *v,
>                                const struct cpu_user_regs *regs,
>                                unsigned long *base,
>                                unsigned long *limit,
>                                unsigned int *ar,
>                                unsigned int vm86attr)
> {
>     if ( is_pvh_vcpu(v) )
>         return hvm_read_descriptor(which_sel, v, regs, base, limit, ar);

Why not insert this into read_descriptor(), rather than creating a
new wrapper?
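
I.e. roughly the following (a sketch only - I'm keeping the parameter list and
the hvm_read_descriptor() call exactly as in your quote above):

static int read_descriptor(unsigned int sel,
                           enum sel_type which_sel,
                           const struct vcpu *v,
                           const struct cpu_user_regs *regs,
                           unsigned long *base,
                           unsigned long *limit,
                           unsigned int *ar,
                           unsigned int vm86attr)
{
    /* PVH guests get the descriptor via the HVM path... */
    if ( is_pvh_vcpu(v) )
        return hvm_read_descriptor(which_sel, v, regs, base, limit, ar);

    /* ... while the existing PV GDT/LDT based lookup continues here unchanged. */
}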

> --- a/xen/include/asm-x86/desc.h
> +++ b/xen/include/asm-x86/desc.h
> @@ -199,6 +199,8 @@ DECLARE_PER_CPU(struct desc_struct *, compat_gdt_table);
>  extern void set_intr_gate(unsigned int irq, void * addr);
>  extern void load_TR(void);
>  
> +enum sel_type { SEL_NONE, SEL_CS, SEL_SS, SEL_DS, SEL_ES, SEL_GS, SEL_FS };

I'd prefer if you re-used enum x86_segment instead of introducing
another enumeration.
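
I.e. something like this (sketch only; iirc the enumeration is declared in
xen/arch/x86/x86_emulate/x86_emulate.h and reachable via asm/x86_emulate.h):

#include <asm/x86_emulate.h>   /* enum x86_segment: x86_seg_es, x86_seg_cs, ... */

int vmx_pvh_read_descriptor(enum x86_segment seg, const struct vcpu *v,
                            const struct cpu_user_regs *regs,
                            unsigned long *base, unsigned long *limit,
                            unsigned int *ar);

The switch() in the function body would then use x86_seg_cs, x86_seg_ds, etc.
as case labels, and no new sel_type enumeration would be needed.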

> New version of vmx_pvh_read_descriptor():
> 
> int vmx_pvh_read_descriptor(enum sel_type which_sel, const struct vcpu *v,
>                             const struct cpu_user_regs *regs,
>                             unsigned long *base, unsigned long *limit,
>                             unsigned int *ar)
> {
>     unsigned int tmp_ar = 0;
> 
>     ASSERT(v == current);
>     ASSERT(is_pvh_vcpu(v));
> 
>     switch ( which_sel )
>     {
>     case SEL_CS:
>         tmp_ar = __vmread(GUEST_CS_AR_BYTES);
>         if ( tmp_ar & X86_SEG_AR_CS_LM_ACTIVE )
>         {
>             *base = 0UL;
>             *limit = ~0UL;
>         }
>         else
>         {
>             *base = __vmread(GUEST_CS_BASE);
>             *limit = __vmread(GUEST_CS_LIMIT);
>         }
>         break;
> 
>     case SEL_DS:
>         *base = __vmread(GUEST_DS_BASE);
>         *limit = __vmread(GUEST_DS_LIMIT);

This (as well as SS and ES handling) needs to be consistent with
CS handling - either you rely on the VMCS fields to be correct
even for long mode, or you override base and limit based upon
the _CS_ access rights having the L bit set.
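
For the latter option, something along these lines for each of the flat
segments (just a sketch, re-using SEL_DS and X86_SEG_AR_CS_LM_ACTIVE from your
code above; whether to read GUEST_CS_AR_BYTES once up front instead is a
matter of taste):

    case SEL_DS:
        tmp_ar = __vmread(GUEST_DS_AR_BYTES);
        if ( __vmread(GUEST_CS_AR_BYTES) & X86_SEG_AR_CS_LM_ACTIVE )
        {
            /* Long mode: DS/ES/SS base and limit are architecturally flat. */
            *base = 0UL;
            *limit = ~0UL;
        }
        else
        {
            *base = __vmread(GUEST_DS_BASE);
            *limit = __vmread(GUEST_DS_LIMIT);
        }
        break;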

>         tmp_ar = __vmread(GUEST_DS_AR_BYTES);
>         break;
> 
>     case SEL_SS:
>         *base = __vmread(GUEST_SS_BASE);
>         *limit = __vmread(GUEST_SS_LIMIT);
>         tmp_ar = __vmread(GUEST_SS_AR_BYTES);
>         break;
> 
>     case SEL_GS:
>         *base = __vmread(GUEST_GS_BASE);
>         *limit = __vmread(GUEST_GS_LIMIT);
>         tmp_ar = __vmread(GUEST_GS_AR_BYTES);
>         break;
> 
>     case SEL_FS:
>         *base = __vmread(GUEST_FS_BASE);
>         *limit = __vmread(GUEST_FS_LIMIT);
>         tmp_ar = __vmread(GUEST_FS_AR_BYTES);
>         break;
> 
>     case SEL_ES:
>         *base = __vmread(GUEST_ES_BASE);
>         *limit = __vmread(GUEST_ES_LIMIT);
>         tmp_ar = __vmread(GUEST_ES_AR_BYTES);
>         break;

While secondary, I'm also a bit puzzled by the non-natural, non-logical
ordering (CS, DS, SS, GS, FS, ES)...

Jan




 

