
Re: [Xen-devel] [PATCH] switch rangeset's lock to rwlock



On 03/25/2011 01:52 PM, Keir Fraser wrote:
> On 25/03/2011 17:52, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:
>
>> Tmem (in Xen) does use rwlocks.  After hearing from Jeremy that
>> Linux maintainers wouldn't approve of new users of rwlocks, I
>> redid the locking structure in the in-Linux-kernel version of tmem
>> to avoid using them.  I am fairly sure that the same approach used in
>> zcache can be used in Xen, but have not tried, and it's likely
>> to be a fairly big coding/testing effort that I can't undertake
>> right now.
>>
>> I am also fairly sure that the current Xen tmem locking structure
>> is not suitable for switching to normal spinlocks or RCU,
>> but I am far from an expert in this area.
> Why would a normal spinlock not work?

Should work; any rwlock use that can't be mechanically replaced with
plain spinlocks is buggy anyway.

    J
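For concreteness, the "mechanical replacement" is purely textual: the lock
field becomes (or stays) a spinlock_t, and every read_lock()/write_lock()
caller takes the same spin_lock(). Below is a minimal sketch against the
rangeset code quoted further down; the parameter list of
rangeset_contains_range() is not shown in the quoted hunks, so the unsigned
long bounds here are an assumption.

struct rangeset {
    /* Ordered list of ranges contained in this set, and protecting lock. */
    struct list_head range_list;
    spinlock_t       lock;              /* rather than rwlock_t */

    /* Pretty-printing name. */
    char             name[32];
    /* ... other fields unchanged ... */
};

int rangeset_contains_range(
    struct rangeset *r, unsigned long s, unsigned long e)
{
    struct range *x;
    int contains;

    ASSERT(s <= e);

    spin_lock(&r->lock);                /* instead of read_lock(&r->lock) */
    x = find_range(r, s);
    contains = (x && (x->e >= e));
    spin_unlock(&r->lock);              /* instead of read_unlock(&r->lock) */

    return contains;
}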

>  -- Keir
>
>> Dan
>>
>>> -----Original Message-----
>>> From: Keir Fraser [mailto:keir.xen@xxxxxxxxx]
>>> Sent: Friday, March 25, 2011 11:09 AM
>>> To: Jan Beulich; xen-devel@xxxxxxxxxxxxxxxxxxx
>>> Subject: Re: [Xen-devel] [PATCH] switch rangeset's lock to rwlock
>>>
>>> I'd rather get rid of rwlocks altogether and use RCU in any cases
>>> where we really have contention. Rwlocks don't help unless the
>>> read-side critical sections are large enough to amortise the cache
>>> ping-pong cost of the locking/unlocking operations. And in Xen we
>>> have very few, if any, significantly sized critical sections.
>>>
>>> I need to double-check, but I believe we have only a couple of
>>> rwlock users now, and none of the read-side critical sections are
>>> large, so in that case I suggest we switch them to use spinlocks
>>> and kill our rwlock implementation.
>>>
>>>  -- Keir
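To illustrate the contrast being drawn: with RCU the read side performs no
stores to any shared cache line, so short lookups scale however small the
critical section is, while writers still serialise on an ordinary spinlock
and defer frees until current readers have drained. The sketch below is a
rough example of that kind of conversion, not anything proposed in the
patch; it uses the Linux-style RCU list primitives for brevity (Xen's API
is close, though e.g. its rcu_read_lock() takes an argument), and it
assumes struct range keeps a list member named "list" and gains a
struct rcu_head named "rcu".

/* Read side: no stores to shared state, hence no lock-word ping-pong. */
int rangeset_contains_range_rcu(
    struct rangeset *r, unsigned long s, unsigned long e)
{
    struct range *x;
    int contains = 0;

    rcu_read_lock();
    list_for_each_entry_rcu(x, &r->range_list, list)
    {
        if ( x->s > e )             /* ranges are ordered by start */
            break;
        if ( (x->s <= s) && (x->e >= e) )
        {
            contains = 1;
            break;
        }
    }
    rcu_read_unlock();

    return contains;
}

/* Write side: writers exclude each other with a plain spinlock and defer
 * freeing removed entries until all current readers are done. */
static void free_range_rcu(struct rcu_head *head)
{
    xfree(container_of(head, struct range, rcu));
}

void range_destroy_rcu(struct rangeset *r, struct range *x)
{
    spin_lock(&r->lock);
    list_del_rcu(&x->list);
    spin_unlock(&r->lock);

    call_rcu(&x->rcu, free_range_rcu);
}

Insertion would work the same way: take the spinlock, publish the new
entry with list_add_rcu(), and drop the lock.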
>>>
>>> On 25/03/2011 16:49, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
>>>
>>>> As a general library routine, it should behave as efficiently as
>>>> possible, even if at present no significant contention is known here.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
>>>>
>>>> --- a/xen/common/rangeset.c
>>>> +++ b/xen/common/rangeset.c
>>>> @@ -25,7 +25,7 @@ struct rangeset {
>>>>
>>>>      /* Ordered list of ranges contained in this set, and protecting lock. */
>>>>      struct list_head range_list;
>>>> -    spinlock_t       lock;
>>>> +    rwlock_t         lock;
>>>>
>>>>      /* Pretty-printing name. */
>>>>      char             name[32];
>>>> @@ -103,7 +103,7 @@ int rangeset_add_range(
>>>>
>>>>      ASSERT(s <= e);
>>>>
>>>> -    spin_lock(&r->lock);
>>>> +    write_lock(&r->lock);
>>>>
>>>>      x = find_range(r, s);
>>>>      y = find_range(r, e);
>>>> @@ -159,7 +159,7 @@ int rangeset_add_range(
>>>>      }
>>>>
>>>>   out:
>>>> -    spin_unlock(&r->lock);
>>>> +    write_unlock(&r->lock);
>>>>      return rc;
>>>>  }
>>>>
>>>> @@ -175,7 +175,7 @@ int rangeset_remove_range(
>>>>
>>>>      ASSERT(s <= e);
>>>>
>>>> -    spin_lock(&r->lock);
>>>> +    write_lock(&r->lock);
>>>>
>>>>      x = find_range(r, s);
>>>>      y = find_range(r, e);
>>>> @@ -231,7 +231,7 @@ int rangeset_remove_range(
>>>>      }
>>>>
>>>>   out:
>>>> -    spin_unlock(&r->lock);
>>>> +    write_unlock(&r->lock);
>>>>      return rc;
>>>>  }
>>>>
>>>> @@ -243,10 +243,10 @@ int rangeset_contains_range(
>>>>
>>>>      ASSERT(s <= e);
>>>>
>>>> -    spin_lock(&r->lock);
>>>> +    read_lock(&r->lock);
>>>>      x = find_range(r, s);
>>>>      contains = (x && (x->e >= e));
>>>> -    spin_unlock(&r->lock);
>>>> +    read_unlock(&r->lock);
>>>>
>>>>      return contains;
>>>>  }
>>>> @@ -259,10 +259,10 @@ int rangeset_overlaps_range(
>>>>
>>>>      ASSERT(s <= e);
>>>>
>>>> -    spin_lock(&r->lock);
>>>> +    read_lock(&r->lock);
>>>>      x = find_range(r, e);
>>>>      overlaps = (x && (s <= x->e));
>>>> -    spin_unlock(&r->lock);
>>>> +    read_unlock(&r->lock);
>>>>
>>>>      return overlaps;
>>>>  }
>>>> @@ -274,13 +274,13 @@ int rangeset_report_ranges(
>>>>      struct range *x;
>>>>      int rc = 0;
>>>>
>>>> -    spin_lock(&r->lock);
>>>> +    read_lock(&r->lock);
>>>>
>>>>      for ( x = find_range(r, s); x && (x->s <= e) && !rc; x = next_range(r, x) )
>>>>          if ( x->e >= s )
>>>>              rc = cb(max(x->s, s), min(x->e, e), ctxt);
>>>>
>>>> -    spin_unlock(&r->lock);
>>>> +    read_unlock(&r->lock);
>>>>
>>>>      return rc;
>>>>  }
>>>> @@ -318,7 +318,7 @@ struct rangeset *rangeset_new(
>>>>      if ( r == NULL )
>>>>          return NULL;
>>>>
>>>> -    spin_lock_init(&r->lock);
>>>> +    rwlock_init(&r->lock);
>>>>      INIT_LIST_HEAD(&r->range_list);
>>>>
>>>>      BUG_ON(flags & ~RANGESETF_prettyprint_hex);
>>>> @@ -403,7 +403,7 @@ void rangeset_printk(
>>>>      int nr_printed = 0;
>>>>      struct range *x;
>>>>
>>>> -    spin_lock(&r->lock);
>>>> +    read_lock(&r->lock);
>>>>
>>>>      printk("%-10s {", r->name);
>>>>
>>>> @@ -422,7 +422,7 @@ void rangeset_printk(
>>>>
>>>>      printk(" }");
>>>>
>>>> -    spin_unlock(&r->lock);
>>>> +    read_unlock(&r->lock);
>>>>  }
>>>>
>>>>  void rangeset_domain_printk(
>>>>
>>>>
>>>>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

