
Re: [Xen-devel] [PATCHv2 3/5] evtchn: use a per-event channel lock for sending events



On 16/06/15 10:51, Jan Beulich wrote:
>>>> On 16.06.15 at 11:34, <david.vrabel@xxxxxxxxxx> wrote:
>> On 16/06/15 10:18, Jan Beulich wrote:
>>>>>> On 15.06.15 at 17:48, <david.vrabel@xxxxxxxxxx> wrote:
>>>> @@ -609,21 +662,18 @@ int evtchn_send(struct domain *ld, unsigned int lport)
>>>>      struct domain *rd;
>>>>      int            rport, ret = 0;
>>>>  
>>>> -    spin_lock(&ld->event_lock);
>>>> -
>>>> -    if ( unlikely(!port_is_valid(ld, lport)) )
>>>> -    {
>>>> -        spin_unlock(&ld->event_lock);
>>>> +    if ( unlikely(lport >= read_atomic(&ld->valid_evtchns)) )
>>>>          return -EINVAL;
>>>> -    }
>>>
>>> I don't think you really want to open code part of port_is_valid()
>>> (and avoid other parts of it) here? Or if really so, I think a comment
>>> should be added to explain it.
>>
>> ld->valid_evtchns is the only field we can safely check without
>> holding ld->event_lock.
>>
>> We do check the channel state and the code that set this state uses the
>> full port_is_valid() call.  I'll add a comment.
> 
> Hmm, port_is_valid() also checks d->max_evtchns and d->evtchn.
> The latter is involved in evtchn_from_port(), so I can't see how
> you checking the channel's state _afterwards_ can leverage that
> whoever set this state did a full check.
> 
> Another question is whether with the ->valid_evtchns check the
> ->evtchn check is necessary at all anymore. (The check against
> ->max_evtchns isn't wrong with the lock not held, i.e. could only
> end up being too strict, and hence the open coding would then
> still be questionable.)

OK.  I'll remove the d->evtchn check from port_is_valid() and use
port_is_valid() here instead of the open-coded check.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
