
Re: [Xen-devel] [PATCH 06/16] vmx: nest: handling VMX instruction exits



At 09:15 +0100 on 15 Sep (1284542116), Keir Fraser wrote:
> On 15/09/2010 08:56, "Dong, Eddie" <eddie.dong@xxxxxxxxx> wrote:
> 
> >> that the partial decode from vmexit reason saves you much at all, and
> >> you might as well go the whole hog and do full decode. I don't see
> >> much saving from a hacky middle-ground.
> > 
> > So how about we reuse some functions in x86 emulate like this one?
> 
> Ah, well, now I look at your patch 06/16 properly, I think it's clear and
> self-contained as it is. Your private enumerations within nest.c simply
> serve to document the format of the decoded instruction provided to you via
> fields in the VMCS. I wouldn't be inclined to change it at all, unless Tim
> really has strong objections about it.

No, that's OK.

> It's not like you're defining
> namespaces for new abstractions you have conjured from thin air -- they
> correspond directly to a hardware-defined decode format. Defining
> enumerations on top of that is *good*, imo. I would take 06/16 as it stands.

Fair enough, but I'd like the memory leak fixed too (svmcs and vvmcs are
only freed if the N1 guest executes VMXOFF).

Cheers,

Tim.

> > static enum x86_segment
> > decode_segment(uint8_t modrm_reg)
> > {
> >     switch ( modrm_reg )
> >     {
> >     case 0: return x86_seg_es;
> >     case 1: return x86_seg_cs;
> >     case 2: return x86_seg_ss;
> >     case 3: return x86_seg_ds;
> >     case 4: return x86_seg_fs;
> >     case 5: return x86_seg_gs;
> >     default: break;
> >     }
> >     return decode_segment_failed;
> > }
> > 
> > Thx, Eddie
> 
> 

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
