
Re: [Xen-devel] [PATCH] x86/vm_event: Added support for VM_EVENT_REASON_INTERRUPT



On 09/11/16 11:32, Razvan Cojocaru wrote:
> On 11/09/2016 01:17 PM, Jan Beulich wrote:
>>>>> On 09.11.16 at 10:42, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> Added support for a new event type, VM_EVENT_REASON_INTERRUPT,
>>> which is now fired in a one-shot manner when enabled via the new
>>> VM_EVENT_FLAG_GET_NEXT_INTERRUPT vm_event response flag.
>>> The patch also fixes the behaviour of the xc_hvm_inject_trap()
>>> hypercall, which would lead to non-architectural interrupts
>>> overwriting pending (specifically reinjected) architectural ones.
>> Looks quite okay, just some more or less mechanical comments.
>>
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -532,11 +532,23 @@ void hvm_do_resume(struct vcpu *v)
>>>          }
>>>      }
>>>  
>>> -    /* Inject pending hw/sw trap */
>>> -    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
>>> -    {
>>> +    /* Inject pending hw/sw trap if there are no other pending interrupts. */
>>> +    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 && !hvm_event_pending(v) )
>>>          hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
>>> -        v->arch.hvm_vcpu.inject_trap.vector = -1;
>>> +
>>> +    v->arch.hvm_vcpu.inject_trap.vector = -1;
>> I don't see why you pull this out of the if() body.
> That is intended, and covered by the "the patch also fixes the behaviour
> of the xc_hvm_inject_trap() hypercall, which would lead to
> non-architectural interrupts overwriting pending (specifically
> reinjected) architectural ones" part of the patch description.
>
> If we couldn't inject the trap because there was a pending event (i.e.
> the second if() condition failed), then not setting
> v->arch.hvm_vcpu.inject_trap.vector to -1 would leave the trap queued
> for injection at the first opportunity - and that could be after the
> context has changed, when it should no longer be injected. So
> v->arch.hvm_vcpu.inject_trap.vector is reset either way.
>
>>> +    if ( unlikely(v->arch.vm_event) &&
>>> +    v->arch.vm_event->monitor_next_interrupt )
>> Hard tab.
> I'll fix it.
>
>>> --- a/xen/arch/x86/hvm/monitor.c
>>> +++ b/xen/arch/x86/hvm/monitor.c
>>> @@ -150,6 +150,21 @@ int hvm_monitor_cpuid(unsigned long insn_length, unsigned int leaf,
>>>      return monitor_traps(curr, 1, &req);
>>>  }
>>>  
>>> +void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
>>> +                           unsigned int err, uint64_t cr2)
>>> +{
>>> +    struct vcpu *curr = current;
>> Pointless local variable (used just once).
> I'll remove it.
>
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -2220,6 +2220,21 @@ static void svm_invlpg(struct vcpu *v, unsigned long vaddr)
>>>      svm_asid_g_invlpg(v, vaddr);
>>>  }
>>>  
>>> +static bool svm_get_pending_event(struct vcpu *v, struct hvm_trap *info)
>>> +{
>>> +    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
>>> +
>>> +    if ( !vmcb->eventinj.fields.v )
>>> +        return false;
>>> +
>>> +    info->vector = vmcb->eventinj.fields.vector;
>>> +    info->type = vmcb->eventinj.fields.type;
>>> +    info->error_code = vmcb->eventinj.fields.errorcode;
>>> +    info->cr2 = v->arch.hvm_vcpu.guest_cr[2];
>> I'd prefer for this last part to be put into generic code (i.e. the
>> wrapper).
> You mean setting CR2, which is common, right? I'll move it to the wrapper.
>
>>> --- a/xen/include/asm-arm/vm_event.h
>>> +++ b/xen/include/asm-arm/vm_event.h
>>> @@ -52,4 +52,10 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>>>      /* Not supported on ARM. */
>>>  }
>>>  
>>> +static inline
>>> +void vm_event_monitor_next_interrupt(struct vcpu *v)
>>> +{
>>> +    /* Not supported on ARM. */
>>> +}
>> That's unfortunate. If it can't be implemented, shouldn't the caller at
>> least be advised of this being unavailable? Wasn't there even some
>> mechanism to report capabilities?
> Yes, I forgot to update the capabilities list. Good point, I'll see
> about that as well.
>
>>> --- a/xen/include/asm-x86/hvm/hvm.h
>>> +++ b/xen/include/asm-x86/hvm/hvm.h
>>> @@ -237,6 +237,8 @@ struct hvm_function_table {
>>>          /* Architecture function to setup TSC scaling ratio */
>>>          void (*setup)(struct vcpu *v);
>>>      } tsc_scaling;
>>> +
>>> +    bool (*get_pending_event)(struct vcpu *v, struct hvm_trap *info);
>>>  };
>> Stylistically I think this would better go a little earlier.
> I'll move it after event_pending.
>
>>> --- a/xen/include/asm-x86/vm_event.h
>>> +++ b/xen/include/asm-x86/vm_event.h
>>> @@ -32,6 +32,7 @@ struct arch_vm_event {
>>>          struct vm_event_emul_insn_data insn;
>>>      } emul;
>>>      struct monitor_write_data write_data;
>>> +    bool monitor_next_interrupt;
>>>  };
>> I think there's a 32-bit padding hole before write_data, so the new
>> field would better go earlier (perhaps even right after flags).
> I'll move it there.
>
>>> @@ -139,6 +144,8 @@
>>>   *       These kinds of events will be filtered out in future versions.
>>>   */
>>>  #define VM_EVENT_REASON_PRIVILEGED_CALL         11
>>> +/* Result of toolstack-requested (non-architectural) trap injection. */
>>> +#define VM_EVENT_REASON_INTERRUPT               12
>> Considering the event reports all kinds of interruptions, I don't think
>> the comment is appropriate.
> True, I'll update it.
>
>>> @@ -259,6 +266,13 @@ struct vm_event_cpuid {
>>>      uint32_t _pad;
>>>  };
>>>  
>>> +struct vm_event_interrupt {
>>> +    uint32_t vector;
>>> +    uint32_t type;
>>> +    uint32_t error_code;
>>> +    uint64_t cr2;
>>> +};
>> This being x86-specific, I think it should be named or union-ized
>> accordingly.
> Right, I'll rename it.

You are also exposing X86_EVENTTYPE_* in the hypervisor ABI.

This is probably fine as it is an ABI inherited from VT-x/SVM (and by
some miracle, are actually compatible), but you do need to move the
constants into the public API as well.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

