
Re: [Xen-devel] [PATCHv2 3/5] evtchn: use a per-event channel lock for sending events



>>> On 16.06.15 at 17:19, <david.vrabel@xxxxxxxxxx> wrote:
> On 16/06/15 10:18, Jan Beulich wrote:
>>>>> On 15.06.15 at 17:48, <david.vrabel@xxxxxxxxxx> wrote:
>>> @@ -1163,11 +1213,15 @@ int alloc_unbound_xen_event_channel(
>>>      if ( rc )
>>>          goto out;
>>>  
>>> +    spin_lock(&chn->lock);
>>> +
>>>      chn->state = ECS_UNBOUND;
>>>      chn->xen_consumer = get_xen_consumer(notification_fn);
>>>      chn->notify_vcpu_id = lvcpu;
>>>      chn->u.unbound.remote_domid = remote_domid;
>>>  
>>> +    spin_unlock(&chn->lock);
>>> +
>>>   out:
>>>      spin_unlock(&ld->event_lock);
>> 
>> I don't see why this unlock couldn't be moved up.
> 
> Because we need to (also) hold ld->event_lock while changing the state
> from ECS_FREE, or a concurrent get_free_port() could find this port and
> hand it out again.
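
(A much-simplified sketch of the race being referred to, assuming the
usual get_free_port()-style scan over the domain's ports; not the exact
code:)

    /* Both this and the allocation path run with d->event_lock held. */
    static int get_free_port(struct domain *d)
    {
        int port;

        for ( port = 0; port_is_valid(d, port); port++ )
            if ( evtchn_from_port(d, port)->state == ECS_FREE )
                return port;    /* still ECS_FREE => handed out again */

        return -ENOSPC;
    }

    /* Allocation side: the port stays ECS_FREE until the assignment
     * below, so dropping ld->event_lock before this point would let a
     * second get_free_port() pick the same port. */
    chn->state = ECS_UNBOUND;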

I buy this one (and moving the unlock up after the state adjustment
is unlikely to be worth it), but ...

>>> @@ -1221,6 +1277,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
>>>          evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>>>      }
>>>  
>>> +    spin_unlock(&lchn->lock);
>>> +
>>>      spin_unlock(&ld->event_lock);
>>>  }
>> 
>> Again I think the event lock can be dropped earlier now.
> 
> Ditto.

... there's no state change involved here.
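
Concretely, something along these lines ought to suffice (a sketch only,
with the early-exit and assertion checks of the real function omitted;
field names as in the existing code):

    void notify_via_xen_event_channel(struct domain *ld, int lport)
    {
        struct evtchn *lchn, *rchn;
        struct domain *rd;

        spin_lock(&ld->event_lock);
        lchn = evtchn_from_port(ld, lport);
        spin_lock(&lchn->lock);

        /* No ECS_* transition happens below, so the per-domain lock
         * is only needed for the port lookup above. */
        spin_unlock(&ld->event_lock);

        if ( likely(lchn->state == ECS_INTERDOMAIN) )
        {
            rd = lchn->u.interdomain.remote_dom;
            rchn = evtchn_from_port(rd, lchn->u.interdomain.remote_port);
            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
        }

        spin_unlock(&lchn->lock);
    }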

Jan

