
Re: [Xen-devel] [PATCH] x86/upcall: inject a spurious event after setting upcall vector

>>> On 04.01.18 at 10:13, <roger.pau@xxxxxxxxxx> wrote:
> On Tue, Jan 02, 2018 at 09:47:40AM -0700, Jan Beulich wrote:
>> >>> On 28.12.17 at 13:57, <roger.pau@xxxxxxxxxx> wrote:
>> > In case the vCPU has pending events to inject. This fixes a bug that
>> > happened if the guest mapped the vcpu info area using
>> > VCPUOP_register_vcpu_info without having set up the event channel
>> > upcall, and then set up the upcall vector.
>> > 
>> > In this scenario the guest would not receive any upcalls, because the
>> > call to VCPUOP_register_vcpu_info would have marked the vCPU as having
>> > pending events, but the vector could not be injected because it was
>> > not yet set up.
>> > 
>> > This has not caused issues so far because all the consumers first
>> > set up the vector callback and then map the vcpu info page, but there's
>> > no limitation that prevents doing it in the inverse order.
>> Hmm, yes, okay, I can see that we may indeed want to do this for
>> symmetry reasons. There is a small theoretical risk of this causing
>> races, though, for not entirely well written guest drivers.
> Not exactly. In the scenario described above, not injecting this event
> will also prevent any further events from being injected. This is because
> VCPUOP_register_vcpu_info sets evtchn_upcall_pending and tries to
> inject an event using arch_evtchn_inject. If the vector is not set at
> this point, arch_evtchn_inject will do nothing, but
> evtchn_upcall_pending will be left set.
> Further calls to vcpu_mark_events_pending will not call into
> hvm_assert_evtchn_irq because the pending bit is already set, thus
> preventing the delivery of any event channel interrupts.

I understand that, but I don't understand how this relates to my
earlier remark about theoretical races.

In any event, an option to at least consider would be to clear the
pending indicator again if the vector was found unset.

>> > @@ -447,6 +448,9 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
>> >  
>> >      spin_unlock(&d->arch.hvm_domain.irq_lock);
>> >  
>> > +    for_each_vcpu(d, v)
>> > +        arch_evtchn_inject(v);
>> Wouldn't it make sense to limit this to actually active vCPU-s?
> Since it's not a hot-path I didn't bother, but I guess using
> is_vcpu_online should be fine?

I think so, yes. The less work in such for-each-vcpu loops, the better.

