
Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with per-channel lock held



On 04.12.2020 12:28, Julien Grall wrote:
> Hi Jan,
> 
> On 03/12/2020 10:09, Jan Beulich wrote:
>> On 02.12.2020 22:10, Julien Grall wrote:
>>> On 23/11/2020 13:30, Jan Beulich wrote:
>>>> While there don't look to be any problems with this right now, the lock
>>>> order implications from holding the lock can be very difficult to follow
>>>> (and may be easy to violate unknowingly). The present callbacks don't
>>>> (and no such callback should) have any need for the lock to be held.
>>>>
>>>> However, vm_event_disable() frees the structures used by respective
>>>> callbacks and isn't otherwise synchronized with invocations of these
>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
>>>> to wait to drop to zero before freeing the port (and dropping the lock).
>>>
>>> AFAICT, this callback is not the only place where the synchronization is
>>> missing in the VM event code.
>>>
>>> For instance, vm_event_put_request() can also race against
>>> vm_event_disable().
>>>
>>> So shouldn't we handle this issue properly in VM event?
>>
>> I suppose that's a question to the VM event folks rather than me?
> 
> Yes. From my understanding of Tamas's e-mail, they are relying on the 
> monitoring software to do the right thing.
> 
> I will refrain from commenting on this approach. However, given that the 
> race is much wider than the event channel, I would recommend not adding 
> more code in the event channel to deal with such a problem.
> 
> Instead, this should be fixed in the VM event code when someone has time 
> to harden the subsystem.

Are you effectively saying I should now undo the addition of the
refcounting, which was added in response to feedback from you?
Or else, what exactly am I to take from your reply?

Jan
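
[Editor's aside: for readers not following the patch itself, the approach the
quoted commit message describes boils down to the pattern sketched below.
This is only an illustrative userspace sketch, not the actual Xen code from
the patch; struct chan, chan_notify() and chan_close() are made-up names, a
pthread mutex stands in for the per-channel lock, and an atomic counter
tracks the in-progress callbacks.]

/*
 * Illustrative sketch only -- not the Xen implementation.  It shows the
 * general "count in-progress callbacks so close can wait for them to
 * drain" pattern described in the quoted commit message.
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdlib.h>

struct chan {
    pthread_mutex_t lock;         /* per-channel lock (assumed initialised) */
    atomic_uint inflight;         /* callbacks currently executing */
    void (*callback)(void *arg);  /* consumer callback, may be NULL */
    void *arg;                    /* consumer state used by the callback */
};

/* Delivery path: bump the count under the lock, then invoke the callback
 * with the lock already dropped, so the callback imposes no lock-order
 * constraints on anything it does. */
static void chan_notify(struct chan *c)
{
    void (*cb)(void *);
    void *arg;

    pthread_mutex_lock(&c->lock);
    cb = c->callback;
    arg = c->arg;
    if (cb)
        atomic_fetch_add(&c->inflight, 1);
    pthread_mutex_unlock(&c->lock);

    if (cb) {
        cb(arg);                  /* runs without the channel lock held */
        atomic_fetch_sub(&c->inflight, 1);
    }
}

/* Teardown path: once the lock is held no new callback can start, so
 * waiting for the count to reach zero guarantees nothing still uses the
 * consumer's state when it is freed. */
static void chan_close(struct chan *c)
{
    pthread_mutex_lock(&c->lock);
    c->callback = NULL;           /* no new invocations past this point */
    while (atomic_load(&c->inflight))
        sched_yield();            /* drain calls already in flight */
    free(c->arg);                 /* safe: no callback references it now */
    c->arg = NULL;
    pthread_mutex_unlock(&c->lock);
}

The key point is that chan_close() clears the callback while holding the
lock, so no new invocation can start; it then only has to wait for the
already-started calls to drain before the consumer's state can be freed,
while the callbacks themselves never run with the lock held.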
