
Re: [Xen-devel] [PATCH v4 04/10] xen/blkfront: split per device io_lock



On 11/04/2015 04:09 AM, Konrad Rzeszutek Wilk wrote:
> On Mon, Nov 02, 2015 at 12:21:40PM +0800, Bob Liu wrote:
>> The per device io_lock became a coarser grained lock after multi-queues/rings
>> was introduced, this patch introduced a fine-grained ring_lock for each ring.
> 
> s/was introduced/was introduced (see commit titled XYZ)/
> 
> s/introduced/introduces/
>>
>> The old io_lock was renamed to dev_lock and only protect the ->grants list
> 
> s/was/is/
> s/protect/protects/
> 
>> which is shared by all rings.
>>
>> Signed-off-by: Bob Liu <bob.liu@xxxxxxxxxx>
>> ---
>>  drivers/block/xen-blkfront.c | 57 ++++++++++++++++++++++++++------------------
>>  1 file changed, 34 insertions(+), 23 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>> index eab78e7..8cc5995 100644
>> --- a/drivers/block/xen-blkfront.c
>> +++ b/drivers/block/xen-blkfront.c
>> @@ -121,6 +121,7 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
>>   */
>>  struct blkfront_ring_info {
>>      struct blkif_front_ring ring;
> 
> Can you add a comment explaining the lock semantics? As in under what
> conditions should it be taken? Like you have it below.
> 
>> +    spinlock_t ring_lock;
>>      unsigned int ring_ref[XENBUS_MAX_RING_PAGES];
>>      unsigned int evtchn, irq;
>>      struct work_struct work;
>> @@ -138,7 +139,8 @@ struct blkfront_ring_info {
>>   */
>>  struct blkfront_info
>>  {
>> -    spinlock_t io_lock;
>> +    /* Lock to proect info->grants list shared by multi rings */
> 
> s/proect/protect/
> 
> Missing full stop.
> 
>> +    spinlock_t dev_lock;
> 
> Shouldn't it be right next to what it is protecting?
> 
> That is right below (or above): 'struct list_head grants;'?
> 
>>      struct mutex mutex;
>>      struct xenbus_device *xbdev;
>>      struct gendisk *gd;
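As an illustration of that suggestion, the declaration could sit directly
above the fields it guards, roughly (a sketch only, the surrounding fields
stay as they are):

struct blkfront_info
{
	struct mutex mutex;
	struct xenbus_device *xbdev;
	struct gendisk *gd;
	/* ... */
	/* Protects the ->grants list and persistent_gnts_c shared by all rings. */
	spinlock_t dev_lock;
	struct list_head grants;
	unsigned int persistent_gnts_c;
	/* ... */
};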
>> @@ -224,6 +226,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
>>      struct grant *gnt_list_entry, *n;
>>      int i = 0;
>>  
>> +    spin_lock_irq(&info->dev_lock);
> 
> Why there? Why not where you add it to the list?
>>      while(i < num) {
>>              gnt_list_entry = kzalloc(sizeof(struct grant), GFP_NOIO);
>>              if (!gnt_list_entry)
>> @@ -242,6 +245,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
>>              list_add(&gnt_list_entry->node, &info->grants);
> 
> Right here that is?
> 
> You are holding the lock for the duration of 'kzalloc' and 'alloc_page'.
> 
> And more interestingly, GFP_NOIO translates to __GFP_WAIT, which means
> it can call 'schedule' - and you have taken a spinlock. That should
> have thrown lots of warnings?
> 
>>              i++;
>>      }
>> +    spin_unlock_irq(&info->dev_lock);
>>  
>>      return 0;
>>  
>> @@ -254,6 +258,7 @@ out_of_memory:
>>              kfree(gnt_list_entry);
>>              i--;
>>      }
>> +    spin_unlock_irq(&info->dev_lock);
> 
> Just do it around the 'list_del' operation. You are using a 'safe'
> list iterator there, so the lock only needs to cover the list manipulation.
>>      BUG_ON(i != 0);
>>      return -ENOMEM;
>>  }
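Agreed that allocating with GFP_NOIO while holding a spinlock would be a
bug. With only the list manipulation under dev_lock, fill_grant_buffer
would look roughly like this (a sketch, not tested; it assumes the
rinfo->dev_info back-pointer used elsewhere in this series):

static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
{
	struct blkfront_info *info = rinfo->dev_info;
	struct grant *gnt_list_entry, *n;
	int i = 0;

	while (i < num) {
		/* GFP_NOIO may sleep, so allocate with no spinlock held. */
		gnt_list_entry = kzalloc(sizeof(struct grant), GFP_NOIO);
		if (!gnt_list_entry)
			goto out_of_memory;

		/* ... page allocation and gref initialisation as before ... */

		/* Only the shared ->grants list needs dev_lock. */
		spin_lock_irq(&info->dev_lock);
		list_add(&gnt_list_entry->node, &info->grants);
		spin_unlock_irq(&info->dev_lock);
		i++;
	}
	return 0;

out_of_memory:
	list_for_each_entry_safe(gnt_list_entry, n, &info->grants, node) {
		spin_lock_irq(&info->dev_lock);
		list_del(&gnt_list_entry->node);
		spin_unlock_irq(&info->dev_lock);
		kfree(gnt_list_entry);
		i--;
	}
	BUG_ON(i != 0);
	return -ENOMEM;
}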
>> @@ -265,6 +270,7 @@ static struct grant *get_grant(grant_ref_t *gref_head,
>>      struct grant *gnt_list_entry;
>>      unsigned long buffer_gfn;
>>  
>> +    spin_lock(&info->dev_lock);
>>      BUG_ON(list_empty(&info->grants));
>>      gnt_list_entry = list_first_entry(&info->grants, struct grant,
>>                                        node);
>> @@ -272,8 +278,10 @@ static struct grant *get_grant(grant_ref_t *gref_head,
>>  
>>      if (gnt_list_entry->gref != GRANT_INVALID_REF) {
>>              info->persistent_gnts_c--;
>> +            spin_unlock(&info->dev_lock);
>>              return gnt_list_entry;
>>      }
>> +    spin_unlock(&info->dev_lock);
> 
> Just have one spin_unlock. Put it right before the 'if (gnt_list_entry->gref)..'.

That's used to protect info->persistent_gnts_c; I'll update all the other places.
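For illustration, one way to keep persistent_gnts_c under the lock and
still end up with a single unlock would be roughly the following (sketch
only, against the quoted hunk):

	spin_lock(&info->dev_lock);
	BUG_ON(list_empty(&info->grants));
	gnt_list_entry = list_first_entry(&info->grants, struct grant, node);
	list_del(&gnt_list_entry->node);
	/* persistent_gnts_c is shared by all rings: adjust it under dev_lock. */
	if (gnt_list_entry->gref != GRANT_INVALID_REF)
		info->persistent_gnts_c--;
	spin_unlock(&info->dev_lock);

	if (gnt_list_entry->gref != GRANT_INVALID_REF)
		return gnt_list_entry;

	/* Otherwise assign a fresh grant reference to the page, as before. */

Testing ->gref twice is slightly redundant, but it keeps the locked
region minimal.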

Thanks,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
