Re: [Xen-devel] [PATCH] x86/upcall: inject a spurious event after setting upcall vector
>>> On 28.12.17 at 13:57, <roger.pau@xxxxxxxxxx> wrote:
> In case the vCPU has pending events to inject. This fixes a bug that
> happened if the guest mapped the vcpu info area using
> VCPUOP_register_vcpu_info without having setup the event channel
> upcall, and then setup the upcall vector.
>
> In this scenario the guest would not receive any upcalls, because the
> call to VCPUOP_register_vcpu_info would have marked the vCPU as having
> pending events, but the vector could not be injected because it was
> not yet setup.
>
> This has not caused issues so far because all the consumers first
> setup the vector callback and then map the vcpu info page, but there's
> no limitation that prevents doing it in the inverse order.

Hmm, yes, okay, I can see that we may indeed want to do this for
symmetry reasons. There is a small theoretical risk of this causing
races, though, for not entirely well written guest drivers.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4069,6 +4069,7 @@ static int hvmop_set_evtchn_upcall_vector(
>      printk(XENLOG_G_INFO "%pv: upcall vector %02x\n", v, op.vector);
>
>      v->arch.hvm_vcpu.evtchn_upcall_vector = op.vector;
> +    arch_evtchn_inject(v);

Why go through the arch hook instead of calling hvm_assert_evtchn_irq()
directly?

> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -385,6 +385,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
>      struct hvm_irq *hvm_irq = hvm_domain_irq(d);
>      unsigned int gsi=0, pdev=0, pintx=0;
>      uint8_t via_type;
> +    struct vcpu *v;
>
>      via_type = (uint8_t)MASK_EXTR(via, HVM_PARAM_CALLBACK_IRQ_TYPE_MASK) + 1;
>      if ( ((via_type == HVMIRQ_callback_gsi) && (via == 0)) ||
> @@ -447,6 +448,9 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
>
>      spin_unlock(&d->arch.hvm_domain.irq_lock);
>
> +    for_each_vcpu(d, v)
> +        arch_evtchn_inject(v);

Wouldn't it make sense to limit this to actually active vCPU-s?

Jan
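[Editorial note: to make the two review questions concrete, below is a rough
sketch of what the touched hunks might look like if both suggestions were
taken up. It is purely illustrative, not part of the posted patch, and it
assumes that hvm_assert_evtchn_irq() is safe to call directly in both places
and that is_vcpu_online() is an adequate test for "actually active" vCPUs.]

    /* In hvmop_set_evtchn_upcall_vector(): call the HVM helper directly
     * instead of going through the arch_evtchn_inject() hook. */
    v->arch.hvm_vcpu.evtchn_upcall_vector = op.vector;
    hvm_assert_evtchn_irq(v);

    /* In hvm_set_callback_via(): only poke vCPUs that are online. */
    for_each_vcpu ( d, v )
        if ( is_vcpu_online(v) )
            hvm_assert_evtchn_irq(v);

[Whether this interacts with the race Jan mentions for poorly written guest
drivers would still need to be settled in a follow-up version of the patch.]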