
Re: [Xen-devel] [PATCH 10/17] PVH xen: introduce vmx_pvh.c and pvh.c



On Thu, 02 May 2013 07:53:18 +0100
"Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 02.05.13 at 03:17, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> >>> wrote:
> > Ok, I redid it. Created a new function read_descriptor_sel() and
> > rewrote vmx_pvh_read_descriptor(). Please let me know if it looks OK
> > to you. Thanks a lot:
> > 
> > 
> > static int read_descriptor_sel(unsigned int sel,
> >                                enum sel_type which_sel,
> >                                const struct vcpu *v,
> >                                const struct cpu_user_regs *regs,
> >                                unsigned long *base,
> >                                unsigned long *limit,
> >                                unsigned int *ar,
> >                                unsigned int vm86attr)
> > {
> >     if ( is_pvh_vcpu(v) )
> >         return hvm_read_descriptor(which_sel, v, regs, base, limit, ar);
> 
> Why not insert this into read_descriptor(), rather than creating a
> new wrapper?

There are other callers of read_descriptor() which would have to be
changed unnecessarily; we need PVH support for only one caller, so this
seemed the least intrusive approach.
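
(The quoting above clips the rest of the wrapper. Presumably the non-PVH
case simply falls through to the existing PV path, along the lines of:

        /* PV guests keep using the unmodified read_descriptor(). */
        return read_descriptor(sel, v, regs, base, limit, ar, vm86attr);

which is exactly why none of the other read_descriptor() callers need
touching.)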

> > --- a/xen/include/asm-x86/desc.h
> > +++ b/xen/include/asm-x86/desc.h
> > @@ -199,6 +199,8 @@ DECLARE_PER_CPU(struct desc_struct *, compat_gdt_table);
> >  extern void set_intr_gate(unsigned int irq, void * addr);
> >  extern void load_TR(void);
> >  
> > +enum sel_type { SEL_NONE, SEL_CS, SEL_SS, SEL_DS, SEL_ES, SEL_GS, SEL_FS };
> 
> I'd prefer if you re-used enum x86_segment instead of introducing
> another enumeration.

Of course. I looked for an existing one, but didn't look hard enough :).
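
(For reference, the enumeration being reused is defined in
xen/arch/x86/x86_emulate/x86_emulate.h -- quoting it from memory, so check
the tree for the exact definition:

    enum x86_segment {
        /* General purpose. */
        x86_seg_cs,
        x86_seg_ss,
        x86_seg_ds,
        x86_seg_es,
        x86_seg_fs,
        x86_seg_gs,
        /* System. */
        x86_seg_tr,
        x86_seg_ldtr,
        x86_seg_gdtr,
        x86_seg_idtr,
        x86_seg_none
    };

Handily, the four segments that get the flat base/limit treatment in long
mode all sort below x86_seg_fs, which is what the `selector < x86_seg_fs`
test further down relies on.)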
 
> This (as well as SS and ES handling) needs to be consistent with
> CS handling - either you rely on the VMCS fields to be correct
> even for long mode, or you override base and limit based upon
> the _CS_ access rights having the L bit set.

Right.

> While secondary, I'm also a bit puzzled about the non-natural and
> non-logical ordering (CS, DS, SS, GS, FS, ES)...

Not sure what the natural ordering is, so I sorted according to
the enum x86_segment:

int vmx_pvh_read_descriptor(enum x86_segment selector, const struct vcpu *v,
                            const struct cpu_user_regs *regs,
                            unsigned long *base, unsigned long *limit,
                            unsigned int *ar)
{
    unsigned int tmp_ar = 0;

    ASSERT(v == current);
    ASSERT(is_pvh_vcpu(v));

    switch ( selector )
    {
    case x86_seg_cs:
        *base = __vmread(GUEST_CS_BASE);
        *limit = __vmread(GUEST_CS_LIMIT);
        tmp_ar = __vmread(GUEST_CS_AR_BYTES);
        break;

    case x86_seg_ss:
        *base = __vmread(GUEST_SS_BASE);
        *limit = __vmread(GUEST_SS_LIMIT);
        tmp_ar = __vmread(GUEST_SS_AR_BYTES);
        break;

    case x86_seg_ds:
        *base = __vmread(GUEST_DS_BASE);
        *limit = __vmread(GUEST_DS_LIMIT);
        tmp_ar = __vmread(GUEST_DS_AR_BYTES);
        break;

    case x86_seg_es:
        *base = __vmread(GUEST_ES_BASE);
        *limit = __vmread(GUEST_ES_LIMIT);
        tmp_ar = __vmread(GUEST_ES_AR_BYTES);
        break;

    case x86_seg_fs:
        *base = __vmread(GUEST_FS_BASE);
        *limit = __vmread(GUEST_FS_LIMIT);
        tmp_ar = __vmread(GUEST_FS_AR_BYTES);
        break;

    case x86_seg_gs:
        *base = __vmread(GUEST_GS_BASE);
        *limit = __vmread(GUEST_GS_LIMIT);
        tmp_ar = __vmread(GUEST_GS_AR_BYTES);
        break;

    default:
        gdprintk(XENLOG_WARNING, "Unmatched segment selector: %d\n", selector);
        return 0;
    }

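    /*
     * Per the review point above: in long mode the CPU treats the base of
     * CS/SS/DS/ES as 0 and performs no limit checks on them, so report the
     * flat values rather than whatever the VMCS fields happen to hold.
     */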
    if ( (tmp_ar & X86_SEG_AR_CS_LM_ACTIVE) && selector < x86_seg_fs )
    {
        *base = 0UL;
        *limit = ~0UL;
    }

    /* Fix ar so that it looks the same as in native mode */
    *ar = (tmp_ar << 8);

    return 1;
}
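
(Why the `<< 8` at the end: the VMX access-rights format carries the same
fields as the attribute bits in the high word of a native GDT/LDT
descriptor, just shifted down by 8. Bit positions per the Intel SDM, from
memory, so double-check against the spec:

    /*
     *  VMX AR bytes                 native descriptor, high word
     *  bits  3:0  type       <->    bits 11:8
     *  bit   4    S          <->    bit  12
     *  bits  6:5  DPL        <->    bits 14:13
     *  bit   7    P          <->    bit  15
     *  bit  12    AVL        <->    bit  20
     *  bit  13    L          <->    bit  21
     *  bit  14    D/B        <->    bit  22
     *  bit  15    G          <->    bit  23
     */

so the shift makes the PVH result match what read_descriptor() returns for
PV guests. It also implies X86_SEG_AR_CS_LM_ACTIVE is presumably
(1u << 13), i.e. the L bit in VMX format.)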

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

