
Re: [Xen-devel] xc_hvm_inject_trap() races



>>> On 02.11.16 at 10:11, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> On 11/02/2016 11:05 AM, Jan Beulich wrote:
>>>>> On 02.11.16 at 09:57, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> On 11/02/2016 10:49 AM, Jan Beulich wrote:
>>>> The fact that {vmx,svm}_inject_trap() combine the new exception
>>>> with an already injected one (and blindly discard events other than
>>>> hw exceptions), OTOH, looks like it indeed wants to be controllable by
>>>> the caller: When the event comes from the outside (the hypercall),
>>>> it would clearly seem better to simply tell the caller that no injection
>>>> happened and the event needs to be kept pending.
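
The combining in question follows the architectural x86 rules for a
second exception raised while a first one is still being delivered. A
rough standalone sketch of that rule; the helper and vector-constant
names below are made up for illustration and are not the Xen source:

#include <stdbool.h>
#include <stdint.h>

#define TRAP_DE   0   /* divide error        */
#define TRAP_DF   8   /* double fault        */
#define TRAP_TS  10   /* invalid TSS         */
#define TRAP_NP  11   /* segment not present */
#define TRAP_SS  12   /* stack fault         */
#define TRAP_GP  13   /* general protection  */
#define TRAP_PF  14   /* page fault          */

static bool contributory(uint8_t vec)
{
    return vec == TRAP_DE || vec == TRAP_TS || vec == TRAP_NP ||
           vec == TRAP_SS || vec == TRAP_GP;
}

/* Vector to deliver when 'vec2' is raised while 'vec1' is being delivered. */
static uint8_t combine_hw_exceptions(uint8_t vec1, uint8_t vec2)
{
    /* Any exception during #DF delivery means a triple fault. */
    if ( vec1 == TRAP_DF )
        return 0xff;                    /* caller must raise a triple fault */

    /* #PF followed by #PF or a contributory exception => #DF. */
    if ( vec1 == TRAP_PF && (vec2 == TRAP_PF || contributory(vec2)) )
        return TRAP_DF;

    /* Contributory followed by contributory => #DF. */
    if ( contributory(vec1) && contributory(vec2) )
        return TRAP_DF;

    /* Benign combinations: the new exception simply wins. */
    return vec2;
}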
>>>
>>> However, this is not possible with the current design, since all
>>> xc_hvm_inject_trap() really does is set the info to be used at
>>> hvm_do_resume() time. So at the time xc_hvm_inject_trap() returns,
>>> it's not yet possible to know whether the injection will succeed
>>> (assuming we discard it when it would collide with another).
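
Concretely, that latch looks roughly like this (simplified and from
memory, not verbatim Xen code): the hypercall handler only records the
request in per-vCPU state, and hvm_do_resume() consumes it on the next
return to guest context, clearing it whether or not delivery collided
with another event:

/* Hypercall side: HVMOP_inject_trap only latches the request. */
v->arch.hvm_vcpu.inject_trap = tr;     /* vector, type, error code, cr2 */

/* Consumer, on the way back into guest context: */
void hvm_do_resume(struct vcpu *v)
{
    /* ... ioreq completion etc. ... */

    /* Inject pending hw/sw trap, then forget it. */
    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
    {
        hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
        v->arch.hvm_vcpu.inject_trap.vector = -1;   /* cleared regardless */
    }
}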
>> 
>> That's my point - it shouldn't get discarded, but remain latched
>> for a future invocation of hvm_do_resume(). Making
>> hvm_inject_trap() have a suitable parameter (and a return value)
>> would be the easy part of the change here. The difficult part would
>> be to make sure the injection context is the right one.
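
One illustrative shape for the easy part; the may_combine parameter and
the -EBUSY convention are invented here, not an agreed interface:

/*
 * Hypothetical: returns 0 if the event was delivered, -EBUSY if another
 * event is already pending and combining is not allowed.
 */
int hvm_inject_trap(const struct hvm_trap *trap, bool may_combine);

/* The hvm_do_resume() consumer then clears the latch only on success: */
if ( v->arch.hvm_vcpu.inject_trap.vector != -1 &&
     hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap, false) == 0 )
    v->arch.hvm_vcpu.inject_trap.vector = -1;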
> 
> Should I then bring this patch back?
> 
> https://lists.xen.org/archives/html/xen-devel/2014-07/msg02927.html 
> 
> It was rejected at the time on the grounds that
> xc_hvm_inject_trap() is sufficient.

I don't think it would deal with all possible situations, especially
since it is (as its title already indicates) #PF-specific. I think the
difficult part named above would need to be solved in the hypervisor
alone, without further external information.
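
As a sketch only (can_inject_now() is hypothetical; a real version would
have to consult the VMCS/VMCB event-injection and interruptibility
state, and for an externally requested #PF arguably also whether the
guest is still in the address space the fault was aimed at):

static bool can_inject_now(struct vcpu *v)
{
    /* No event already sitting in the VMCS/VMCB injection fields. */
    return !hvm_event_pending(v);
}

void hvm_do_resume(struct vcpu *v)
{
    /* ... */
    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 &&
         can_inject_now(v) )
    {
        hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
        v->arch.hvm_vcpu.inject_trap.vector = -1;
    }
    /* else: leave it latched; it is retried on the next hvm_do_resume(). */
}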

Jan



 

