
Re: [Xen-devel] [PATCH v2] x86/hvm: Add per-vcpu evtchn upcalls



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 07 November 2014 11:48
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v2] x86/hvm: Add per-vcpu evtchn upcalls
> 
> >>> On 06.11.14 at 16:33, <paul.durrant@xxxxxxxxxx> wrote:
> > HVM guests have always been confined to using the domain callback
> > via (see HVM_PARAM_CALLBACK_IRQ) to receive event notifications.
> > This is an IOAPIC vector and is only used if the event channel is
> > bound to vcpu 0.
> 
> Iirc the callback-via-vector method was specifically added to have a
> way to spread the IRQ handling load. And even if this didn't work out
> as intended, wouldn't simply setting a flag to avoid the restriction in
> 
> 
> > This patch adds a new HVM op allowing a guest to specify a local
> > APIC vector to use as an upcall notification for a specific vcpu.
> > This therefore allows a guest which sets a vector for a vcpu
> > other than 0 to then bind event channels to that vcpu.
> 
> So is there really a need for a per-vCPU vector value (rather than
> a single domain wide one)?
> 

I can't guarantee that Windows gives me the same vector on every vCPU, so yes.
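
For illustration, a guest would register a per-vCPU vector with something like
the sketch below. It is written against the interface proposed in this patch
(an HVM op taking a vcpu id and a local APIC vector); the op number, the
hypercall wrapper HYPERVISOR_hvm_op() and the vector value passed in are
assumptions standing in for whatever the guest environment provides.

#include <stdint.h>

/* Argument layout as proposed by the patch: one vcpu id, one LAPIC vector. */
struct xen_hvm_evtchn_upcall_vector {
    uint32_t vcpu;    /* vCPU that should receive the upcall */
    uint8_t  vector;  /* local APIC vector to inject on that vCPU */
};

#define HVMOP_set_evtchn_upcall_vector 23      /* op number assumed here */

extern long HYPERVISOR_hvm_op(unsigned int op, void *arg); /* guest-provided */

static long register_upcall_vector(uint32_t vcpu, uint8_t vector)
{
    struct xen_hvm_evtchn_upcall_vector arg = {
        .vcpu   = vcpu,
        .vector = vector,
    };

    /* Called once per vCPU; each vCPU may register a different vector. */
    return HYPERVISOR_hvm_op(HVMOP_set_evtchn_upcall_vector, &arg);
}

Once a vCPU has registered a vector this way, event channels bound to that
vCPU can be notified without going through the vcpu-0-only callback via.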

> > @@ -220,6 +227,8 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
> >
> >      if ( is_hvm_pv_evtchn_vcpu(v) )
> >          vcpu_kick(v);
> > +    else if ( v->arch.hvm_vcpu.evtchn_upcall_vector != 0 )
> > +        hvm_set_upcall_irq(v);
> 
> The context code above your insertion is clearly not enforcing
> vCPU 0 only; the code below this change is.
> 

Yes, the callback via is only allowed to be raised for events bound to vcpu 0, 
although nothing ensures that it only gets delivered to vcpu 0. I don't know 
the historical reason behind that. The whole point of the new vectors, though, 
is that there is one per vcpu rather than just one on vcpu 0, so why would I 
want to enforce vcpu 0 only? That would defeat the entire point of the patch.
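
To make the intended ordering concrete, here is a sketch of how
hvm_assert_evtchn_irq() dispatches with the hunk above applied. Only the
dispatch tail is shown; the vcpu_id check on the legacy path is the existing
code you refer to below the insertion point.

void hvm_assert_evtchn_irq(struct vcpu *v)
{
    if ( is_hvm_pv_evtchn_vcpu(v) )
        /* PVHVM-style delivery: just kick the vCPU. */
        vcpu_kick(v);
    else if ( v->arch.hvm_vcpu.evtchn_upcall_vector != 0 )
        /* New path: inject the registered per-vCPU LAPIC vector. */
        hvm_set_upcall_irq(v);
    else if ( v->vcpu_id == 0 )
        /* Legacy callback via: only ever asserted for vCPU 0. */
        hvm_set_callback_irq_level(v);
}

The per-vCPU vector path deliberately sits above the vcpu_id check, so it is
usable on any vCPU that has registered a vector.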

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

