
Re: [Xen-devel] [PATCHv2 2/3] grant_table: convert grant table rwlock to percpu rwlock



On 25/11/15 13:53, Jan Beulich wrote:
>>>> On 25.11.15 at 14:43, <malcolm.crossley@xxxxxxxxxx> wrote:
>> On 25/11/15 12:35, Jan Beulich wrote:
>>>>>> On 20.11.15 at 17:03, <malcolm.crossley@xxxxxxxxxx> wrote:
>>>> @@ -208,8 +210,6 @@ active_entry_acquire(struct grant_table *t, grant_ref_t e)
>>>>  {
>>>>      struct active_grant_entry *act;
>>>>  
>>>> -    ASSERT(rw_is_locked(&t->lock));
>>>
>>> Even if not covering all cases, I don't think this should be dropped,
>>> just like you don't drop rw_is_write_locked() asserts. Would properly
>>> replacing this be rather expensive?
>>
>> I could add a percpu_rw_is_locked function, but it would be racy because I
>> can't set the local lock barrier variable (setting/clearing it would race
>> with real users of the write lock). That leaves me polling the percpu
>> readers, which is not reliable without the local barrier variable. Should I
>> still add the function if it won't reliably detect the locked condition?
> 
> Iiuc that would then risk the ASSERT() producing false positives,
> which clearly is a no-go. In which case I'd still like you to leave in
> some sort of comment (perhaps just the current code converted
> to a comment) to document the requirement.

After checking the grant table code itself, it looks like the lock is always
taken by the local CPU, so we should not get ASSERT false positives.

A generic percpu_rw_is_locked function would still be racy, so maybe I
should just document the requirement as you suggest instead.
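
For reference, the best a generic helper could do is something along these
lines (sketch only, assuming the per_cpu_ptr()/writer_activating plumbing
from patch 1; a reader that has decided to take the lock but has not yet
published its per-CPU pointer is invisible to it):

    /*
     * Sketch, not part of this series.  Assumes the patch 1 design:
     * readers publish the lock pointer in a per-CPU variable, writers
     * set writer_activating and fall back to the embedded rwlock.  A
     * reader which hasn't yet published its per-CPU pointer is missed,
     * so this is only ever a best-effort check.
     */
    static bool_t percpu_rw_is_locked(percpu_rwlock_t **per_cpudata,
                                      percpu_rwlock_t *percpu_rwlock)
    {
        unsigned int cpu;

        /* A writer, pending or active, counts as locked. */
        if ( percpu_rwlock->writer_activating ||
             rw_is_locked(&percpu_rwlock->rwlock) )
            return 1;

        /* Poll each CPU's published reader pointer. */
        for_each_online_cpu ( cpu )
            if ( per_cpu_ptr(per_cpudata, cpu) == percpu_rwlock )
                return 1;

        return 0;
    }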

> 
>>>> @@ -3180,7 +3178,7 @@ grant_table_create(
>>>>          goto no_mem_0;
>>>>  
>>>>      /* Simple stuff. */
>>>> -    rwlock_init(&t->lock);
>>>> +    percpu_rwlock_resource_init(&t->lock);
>>>
>>> Considering George's general comment (on patch 1), would it perhaps
>>> make sense to store (in debug builds) the per-CPU variable's offset
>>> in the lock structure, having percpu_{read,write}_lock() verify it? Or
>>> at the very least have gt_{read,write}_lock() wrappers, thus
>>> avoiding the need to explicitly pass grant_rwlock at each use site? I
>>> certainly agree with George that we should make it as hard as
>>> possible for someone to get using these constructs wrong.
>>
>> I can store the "per-CPU variable" owner in the local lock structure for
>> debug builds and have percpu_{read,write}_lock() verify it at runtime.
>>
>> The downsides are that the size of the structure will vary between debug
>> and non-debug builds, and that users may be confused about what the
>> per-CPU variables are used for.
> 
> I'm not sure what confusion might arise here.

I was thinking there's a chance users might believe the per-CPU data are all
somehow stored in the percpu_rwlock_t itself. It's not a problem and I will
implement it as described above.
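
Roughly like this (sketch only; the field and helper names here are
illustrative, not final):

    typedef struct percpu_rwlock percpu_rwlock_t;

    struct percpu_rwlock {
        rwlock_t          rwlock;
        bool_t            writer_activating;
    #ifndef NDEBUG
        /* The per-CPU data area this lock was initialised against. */
        percpu_rwlock_t **percpu_owner;
    #endif
    };

    static inline void
    _percpu_rwlock_owner_check(percpu_rwlock_t **per_cpudata,
                               percpu_rwlock_t *percpu_rwlock)
    {
    #ifndef NDEBUG
        /* Catch callers passing a different per-CPU variable from
         * the one the lock was initialised with. */
        ASSERT(per_cpudata == percpu_rwlock->percpu_owner);
    #endif
    }

percpu_rwlock_resource_init() would record the owner in debug builds, and
percpu_{read,write}_lock() would run the check before taking the lock.
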
> 
>> I'll add the wrappers for the grant table, but will this make it harder to
>> determine how the locking works?
> 
> Why would it? At the call sites we don't really care about the
> mechanism. But perhaps I'm not seeing what you're thinking of...

I was thinking along the lines that it might appear the lock is for all usages
in the grant table code, i.e. we don't want people using the wrappers for
locking the maptrack table itself. Ideally we want separate percpu variables
for each "type" of resource, so that we don't get nested percpu locking of the
different "types", which would harm performance. Again, it's not a major
problem and I'm happy to add the wrappers.
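
E.g. something like the following (using the gt_* naming you suggested;
sketch only, the final names may differ):

    static inline void gt_read_lock(struct grant_table *gt)
    {
        percpu_read_lock(grant_rwlock, &gt->lock);
    }

    static inline void gt_read_unlock(struct grant_table *gt)
    {
        percpu_read_unlock(grant_rwlock, &gt->lock);
    }

    static inline void gt_write_lock(struct grant_table *gt)
    {
        percpu_write_lock(grant_rwlock, &gt->lock);
    }

    static inline void gt_write_unlock(struct grant_table *gt)
    {
        percpu_write_unlock(grant_rwlock, &gt->lock);
    }

That keeps grant_rwlock out of every call site while making it clear the
wrappers are only for the grant table lock, not the maptrack lock.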

Malcolm

> 
> Jan
> 

