Re: [Xen-devel] [PATCH] x86/hvm: Add per-vcpu evtchn upcalls
On 06/11/14 15:14, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew Cooper
>> Sent: 06 November 2014 15:02
>> To: Paul Durrant; xen-devel@xxxxxxxxxxxxx
>> Cc: Keir (Xen.org); Jan Beulich
>> Subject: Re: [Xen-devel] [PATCH] x86/hvm: Add per-vcpu evtchn upcalls
>>
>> On 06/11/14 14:50, Paul Durrant wrote:
>>> HVM guests have always been confined to using the domain callback
>>> (see HVM_PARAM_CALLBACK_IRQ) to receive event notifications. This
>>> is usually an IOAPIC vector and is only used if the event channel
>>> is bound to vcpu 0.
>>> This patch adds a new HVM op allowing a guest to specify a local
>>> APIC vector to use as an upcall notification for a specific vcpu.
>>> This therefore allows a guest which sets a vector for a vcpu
>>> other than 0 to then bind event channels to that vcpu.
>>>
>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>>> Cc: Keir Fraser <keir@xxxxxxx>
>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>> Substantially more minimal changes than I would have guessed!
>>
> Yep :-) most of the change needed is guest-side.
>
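> Roughly, the guest side ends up doing something like this per vcpu (a
> sketch only -- HYPERVISOR_hvm_op stands in for whatever hypercall
> wrapper the guest OS already has, and the names are illustrative):
>
>     struct xen_hvm_set_evtchn_upcall_vector op;
>
>     op.vcpu = vcpu_id;      /* vcpu to receive upcalls (illustrative) */
>     op.vector = vector;     /* a free LAPIC vector, i.e. > 0xf */
>     rc = HYPERVISOR_hvm_op(HVMOP_set_evtchn_upcall_vector, &op);
>
> after which event channels can be bound to that vcpu as normal.
>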
>>> ---
>>>  xen/arch/x86/hvm/hvm.c          | 35 +++++++++++++++++++++++++++++++++++
>>>  xen/arch/x86/hvm/irq.c          |  9 +++++++++
>>>  xen/include/asm-x86/hvm/vcpu.h  |  1 +
>>>  xen/include/public/hvm/hvm_op.h | 16 ++++++++++++++++
>>>  4 files changed, 61 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>> index 78f519d..684e666 100644
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -5458,6 +5458,36 @@ static int hvmop_destroy_ioreq_server(
>>>      return rc;
>>>  }
>>>
>>> +static int hvmop_set_evtchn_upcall_vector(
>>> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_evtchn_upcall_vector_t) uop)
>>> +{
>>> +    xen_hvm_set_evtchn_upcall_vector_t op;
>>> +    struct domain *d;
>>> +    struct vcpu *v;
>>> +    int rc;
>>> +
>>> +    if ( copy_from_guest(&op, uop, 1) )
>>> +        return -EFAULT;
>>> +
>>> +    d = rcu_lock_current_domain();
>>> +
>>> +    rc = -EINVAL;
>>> +    if ( !is_hvm_domain(d) )
>>> +        goto out;
>>> +
>> ENOENT, to help differentiate the various failures.
>>
> Sure.
>
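> i.e. something like this in v2 (sketch):
>
>     rc = -EINVAL;
>     if ( !is_hvm_domain(d) )
>         goto out;
>
>     rc = -ENOENT;
>     if ( op.vcpu >= d->max_vcpus || (v = d->vcpu[op.vcpu]) == NULL )
>         goto out;
>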
>>> +    if ( op.vcpu >= d->max_vcpus || (v = d->vcpu[op.vcpu]) == NULL )
>>> +        goto out;
>>> +
>> Need to verify that op.vector > 0xf. The first 16 vectors are not valid
>> for delivery via the LAPIC.
> Good point. I'll add that check.
>
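> i.e. (sketch):
>
>     rc = -EINVAL;
>     if ( op.vector < 0x10 )
>         goto out;
>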
>>> +    printk(XENLOG_G_INFO "%pv: %s %u\n", v, __func__, op.vector);
>>> +
>>> +    v->arch.hvm_vcpu.evtchn_upcall_vector = op.vector;
>>> +    rc = 0;
>>> +
>>> + out:
>>> +    rcu_unlock_domain(d);
>>> +    return rc;
>>> +}
>>> +
>>> #define HVMOP_op_mask 0xff
>>>
>>> long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>> @@ -5499,6 +5529,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
>>>          break;
>>>
>>> +    case HVMOP_set_evtchn_upcall_vector:
>>> +        rc = hvmop_set_evtchn_upcall_vector(
>>> +            guest_handle_cast(arg, xen_hvm_set_evtchn_upcall_vector_t));
>>> +        break;
>>> +
>>>      case HVMOP_set_param:
>>>      case HVMOP_get_param:
>>>      {
>>> diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
>>> index 35f4f94..3e4c0b4 100644
>>> --- a/xen/arch/x86/hvm/irq.c
>>> +++ b/xen/arch/x86/hvm/irq.c
>>> @@ -152,6 +152,13 @@ void hvm_isa_irq_deassert(
>>>      spin_unlock(&d->arch.hvm_domain.irq_lock);
>>>  }
>>>
>>> +static void hvm_set_upcall_irq(struct vcpu *v)
>>> +{
>>> +    uint8_t vector = v->arch.hvm_vcpu.evtchn_upcall_vector;
>>> +
>>> +    vlapic_set_irq(vcpu_vlapic(v), vector, 0);
>>> +}
>>> +
>>>  static void hvm_set_callback_irq_level(struct vcpu *v)
>>>  {
>>>      struct domain *d = v->domain;
>>> @@ -220,6 +227,8 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
>>>
>>>      if ( is_hvm_pv_evtchn_vcpu(v) )
>>>          vcpu_kick(v);
>>> +    else if ( v->arch.hvm_vcpu.evtchn_upcall_vector != 0 )
>>> +        hvm_set_upcall_irq(v);
>>>      else if ( v->vcpu_id == 0 )
>>>          hvm_set_callback_irq_level(v);
>>>  }
>>> diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
>>> index 01e0665..edd4523 100644
>>> --- a/xen/include/asm-x86/hvm/vcpu.h
>>> +++ b/xen/include/asm-x86/hvm/vcpu.h
>>> @@ -160,6 +160,7 @@ struct hvm_vcpu {
>>>      } u;
>>>
>>>      struct tasklet assert_evtchn_irq_tasklet;
>>> +    u8 evtchn_upcall_vector;
>>>
>>>      struct nestedvcpu nvcpu;
>>>
>>> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
>>> index eeb0a60..33ccf45 100644
>>> --- a/xen/include/public/hvm/hvm_op.h
>>> +++ b/xen/include/public/hvm/hvm_op.h
>>> @@ -369,6 +369,22 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
>>> #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>> This new hvmop looks like it should live in an x86 specific section.
>>
> Hmm. Aren't HVM ops essentially x86 specific anyway? There's certainly
> x86-ness all over the header.
ARM uses some of the HVM ops, but I would agree that most of them are x86.
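Presumably under an arch guard like the ones the public headers already
use (a sketch; exact placement is a judgement call):

    #if defined(__i386__) || defined(__x86_64__)
    #define HVMOP_set_evtchn_upcall_vector 23
    /* ... struct and handle definitions ... */
    #endif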
>
>>> +/*
>>> + * HVMOP_set_evtchn_upcall_vector: Set a <vector> that should be used for
>>> + *                                 event channel upcalls on the specified
>>> + *                                 <vcpu>. If set, this vector will be used
>>> + *                                 in preference to the domain callback
>>> + *                                 (see HVM_PARAM_CALLBACK_IRQ) and hence
>>> + *                                 allows HVM guests to bind event channels
>>> + *                                 to a vcpu other than 0.
>>> + */
>>> +#define HVMOP_set_evtchn_upcall_vector 23
>>> +struct xen_hvm_set_evtchn_upcall_vector {
>>> +    uint32_t vcpu;
>>> +    uint8_t vector;
>> Is it plausible that a device model might want to call this hypercall on
>> a domain which it controls? I don't believe so, but the question is
>> worth considering with a view to adding a domid parameter before the API
>> is set in stone.
> No, I don't think it's useful outside guest context. I'm open to adding a
> domid if anyone else thinks otherwise though.
>
> Paul
More "double checking that this has at least been considered". I admit
that I can't think of a plausible reason why this hypercall would be
valid to use on anything other than DOMID_SELF.
~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel