Re: [Xen-devel] [PATCH v2] x86/monitor: add support for descriptor access events



Hello,

On Wed, Apr 05, 2017 at 08:26:27AM -0600, Jan Beulich wrote:
> >>> On 04.04.17 at 11:57, <apop@xxxxxxxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -3572,6 +3572,43 @@ gp_fault:
> >      return X86EMUL_EXCEPTION;
> >  }
> >  
> > +int hvm_descriptor_access_intercept(uint64_t exit_info,
> > +                                    uint64_t vmx_exit_qualification,
> > +                                    uint8_t descriptor, bool is_write)
> 
> Why uint8_t?

The descriptor field in struct vm_event_desc_access is uint8_t since
there are only 4 possible descriptor values:

> > +#define VM_EVENT_DESC_IDTR           1
> > +#define VM_EVENT_DESC_GDTR           2
> > +#define VM_EVENT_DESC_LDTR           3
> > +#define VM_EVENT_DESC_TR             4

Should it be something else?
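For reference, this is roughly how the field sits in the v2 struct
(trimmed sketch, padding omitted; exact field widths as I remember them
from the patch):

struct vm_event_desc_access {
    union {
        struct {
            uint64_t instr_info;         /* VMX: VMCS instruction information */
            uint64_t exit_qualification; /* VMX: VMCS exit qualification */
        } vmx;
        struct {
            uint64_t exitinfo;           /* SVM: VMCB EXITINFO */
        } svm;
    } arch;
    uint8_t descriptor;                  /* one of the VM_EVENT_DESC_* above */
    uint8_t is_write;
};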

> > +{
> > +    struct vcpu *curr = current;
> > +    struct domain *currd = curr->domain;
> > +    int rc;
> > +
> > +    if ( currd->arch.monitor.descriptor_access_enabled )
> > +    {
> > +        ASSERT(curr->arch.vm_event);
> > +        hvm_monitor_descriptor_access(exit_info, vmx_exit_qualification,
> > +                                      descriptor, is_write);
> > +    }
> > +    else
> > +    {
> > +        struct hvm_emulate_ctxt ctxt = {};
> > +
> > +        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> > +        rc = hvm_emulate_one(&ctxt);
> > +        switch ( rc )
> 
> You don't really need to go through a local variable here.

Ok.
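Something like this, then (untested sketch; error handling kept minimal,
along the lines of hvm_emulate_one_vm_event(), so it may not match the
v2 hunk exactly):

        struct hvm_emulate_ctxt ctxt = {};

        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());

        switch ( hvm_emulate_one(&ctxt) )
        {
        case X86EMUL_UNHANDLEABLE:
            /* Minimal handling for the sketch. */
            domain_crash(currd);
            break;

        case X86EMUL_EXCEPTION:
            hvm_inject_event(&ctxt.ctxt.event);
            /* fall through */
        default:
            hvm_emulate_writeback(&ctxt);
            break;
        }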
 
> > --- a/xen/arch/x86/hvm/monitor.c
> > +++ b/xen/arch/x86/hvm/monitor.c
> > @@ -72,6 +72,28 @@ void hvm_monitor_msr(unsigned int msr, uint64_t value)
> >      }
> >  }
> >  
> > +void hvm_monitor_descriptor_access(uint64_t exit_info,
> > +                                   uint64_t vmx_exit_qualification,
> > +                                   uint8_t descriptor, bool is_write)
> > +{
> > +    struct vcpu *curr = current;
> > +    vm_event_request_t req = {
> > +        .reason = VM_EVENT_REASON_DESCRIPTOR_ACCESS,
> > +        .u.desc_access.descriptor = descriptor,
> > +        .u.desc_access.is_write = is_write,
> > +    };
> > +    if ( cpu_has_vmx )
> > +    {
> > +        req.u.desc_access.arch.vmx.instr_info = exit_info;
> > +        req.u.desc_access.arch.vmx.exit_qualification =
> > +            vmx_exit_qualification;
> > +    }
> > +    else
> > +    {
> > +        req.u.desc_access.arch.svm.exitinfo = exit_info;
> > +    }
> > +    monitor_traps(curr, 1, &req);
> 
> true

Ok, I'll use true there.

> > @@ -3361,6 +3376,40 @@ static void vmx_handle_xrstors(void)
> >      domain_crash(current->domain);
> >  }
> >  
> > +static void vmx_handle_idt_or_gdt(idt_or_gdt_instr_info_t instr_info,
> > +                                  uint64_t exit_qualification)
> > +{
> > +    uint8_t descriptor = instr_info.instr_identity
> > +        ? VM_EVENT_DESC_IDTR : VM_EVENT_DESC_GDTR;
> > +
> > +    hvm_descriptor_access_intercept(instr_info.raw, exit_qualification,
> > +                                    descriptor, instr_info.instr_write);
> > +}
> > +
> > +static void vmx_handle_ldt_or_tr(ldt_or_tr_instr_info_t instr_info,
> > +                                 uint64_t exit_qualification)
> > +{
> > +    uint8_t descriptor = instr_info.instr_identity
> > +        ? VM_EVENT_DESC_TR : VM_EVENT_DESC_LDTR;
> > +
> > +    hvm_descriptor_access_intercept(instr_info.raw, exit_qualification,
> > +                                    descriptor, instr_info.instr_write);
> > +}
> 
> I think these should be folded into their only caller (at once
> eliminating the need to make those unions transparent ones).

Ok.
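Something along these lines directly in vmx_vmexit_handler(), then
(untested sketch; exit reason, union and field names as in the patch),
which also does away with the need to make the unions transparent:

    case EXIT_REASON_ACCESS_GDTR_OR_IDTR:
    case EXIT_REASON_ACCESS_LDTR_OR_TR:
    {
        uint64_t instr_info, exit_qualification;
        unsigned int descriptor;
        bool is_write;

        __vmread(VMX_INSTRUCTION_INFO, &instr_info);
        __vmread(EXIT_QUALIFICATION, &exit_qualification);

        if ( exit_reason == EXIT_REASON_ACCESS_GDTR_OR_IDTR )
        {
            idt_or_gdt_instr_info_t info = { .raw = instr_info };

            descriptor = info.instr_identity ? VM_EVENT_DESC_IDTR
                                             : VM_EVENT_DESC_GDTR;
            is_write = info.instr_write;
        }
        else
        {
            ldt_or_tr_instr_info_t info = { .raw = instr_info };

            descriptor = info.instr_identity ? VM_EVENT_DESC_TR
                                             : VM_EVENT_DESC_LDTR;
            is_write = info.instr_write;
        }

        hvm_descriptor_access_intercept(instr_info, exit_qualification,
                                        descriptor, is_write);
        break;
    }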

> And again - why uint8_t?

Same as above.

> Jan

Thank you!

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel