Re: [Xen-devel] [PATCH RFC v2 09/13] libxl: introduce lock in libxl_ctx



Ian Campbell writes ("Re: [Xen-devel] [PATCH RFC v2 09/13] libxl: introduce lock in libxl_ctx"):
> On Fri, 2011-10-28 at 19:37 +0100, Ian Jackson wrote:
> > +    memcpy(&ctx->lock, &mutex_value, sizeof(ctx->lock));
> 
> Is this subtly different to
>        ctx->lock = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
> in some way I'm missing?

I have added a comment about this.  (In my own tree; I'll repost the
series shortly.)
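
For anyone following along, the gist (my paraphrase, not necessarily the
comment's exact wording): on glibc, PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP
(a non-portable _NP extension) expands to a brace-enclosed aggregate
initializer, which is only legal in a declaration, not on the right-hand
side of an assignment to an existing object.  Hence the memcpy.  Roughly,
as a sketch rather than the patch's actual code:

    #define _GNU_SOURCE  /* PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP, on glibc */
    #include <pthread.h>
    #include <string.h>

    typedef struct libxl_ctx { pthread_mutex_t lock; } libxl_ctx; /* stand-in */

    static void ctx_lock_init(libxl_ctx *ctx)
    {
        /* The _NP macro is a brace-enclosed aggregate initializer, so
         *     ctx->lock = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
         * is not a valid C expression.  Initializing a static object
         * and copying it into place achieves the same effect. */
        static const pthread_mutex_t mutex_value =
            PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
        memcpy(&ctx->lock, &mutex_value, sizeof(ctx->lock));
    }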

> > +    pthread_mutex_t lock; /* protects data structures hanging off the ctx */
> > +      /* always use MUTEX_LOCK and MUTEX_UNLOCK to manipulate this */
> 
> MUTEX is something of an implementation detail (although I admit we are
> unlikely to use anything else), perhaps CTX_(UN)LOCK?

Renamed.

> Perhaps give the variable a less obvious name to help discourage direct
> use?

I think the comment right next to it ought to be sufficient.

> > +#define MUTEX_LOCK do {                                 \
> > +        int mutex_r = pthread_mutex_lock(&CTX->lock);   \
> > +        assert(!mutex_r);                               \
> 
> This assert is to catch EINVAL ("the mutex has not been properly
> initialized") rather than EDEADLK ("the mutex is already locked by the
> calling thread") since we asked for a non-error-checking recursive lock?

Yes.
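
To spell out the behaviour being relied on: POSIX guarantees that locking
a PTHREAD_MUTEX_RECURSIVE mutex you already own simply succeeds (the lock
count is incremented), so EDEADLK cannot occur and the assert can only
trip on genuine misuse such as an uninitialised mutex.  A standalone
illustration (mine, not the patch's):

    #include <assert.h>
    #include <pthread.h>

    int main(void)
    {
        pthread_mutex_t m;
        pthread_mutexattr_t attr;
        int r;

        /* Portable equivalent of the recursive mutex the patch sets up. */
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&m, &attr);
        pthread_mutexattr_destroy(&attr);

        r = pthread_mutex_lock(&m);   assert(!r);
        r = pthread_mutex_lock(&m);   assert(!r); /* same thread: 0, not EDEADLK */
        r = pthread_mutex_unlock(&m); assert(!r); /* one unlock per lock */
        r = pthread_mutex_unlock(&m); assert(!r);
        pthread_mutex_destroy(&m);
        return 0;
    }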

> Since it is OK to take this lock recursively then it might be as well to
> say so explicitly?
> 
> This is the first lock in libxl so I guess there isn't much of a locking
> hierarchy yet. Are there any particular considerations which a caller
> must make wrt its own locking?

I have added a comment explaining this.  No requirements are imposed
on libxl's caller, other than the existing reentrancy rules for
callbacks.
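
To illustrate the pattern the recursion enables (the names below are
hypothetical stand-ins of my own, not the patch's code; in particular a
plain "ctx" parameter stands in for the CTX macro): a public entry point
and an internal helper it calls can both take the ctx lock on the same
thread without deadlocking.

    #include <assert.h>
    #include <pthread.h>

    typedef struct libxl_ctx { pthread_mutex_t lock; } libxl_ctx; /* stand-in,
        assumed already initialised as a recursive mutex */

    #define CTX_LOCK do {                                   \
            int mutex_r = pthread_mutex_lock(&ctx->lock);   \
            assert(!mutex_r);                               \
        } while (0)

    #define CTX_UNLOCK do {                                 \
            int mutex_r = pthread_mutex_unlock(&ctx->lock); \
            assert(!mutex_r);                               \
        } while (0)

    static void internal_helper(libxl_ctx *ctx)
    {
        CTX_LOCK;   /* second acquisition by the same thread: fine, recursive */
        /* ... manipulate data structures hanging off the ctx ... */
        CTX_UNLOCK;
    }

    int libxl_do_something(libxl_ctx *ctx) /* hypothetical entry point */
    {
        CTX_LOCK;               /* first acquisition */
        internal_helper(ctx);   /* callee may lock again without deadlock */
        CTX_UNLOCK;             /* released for real only when the count
                                   drops back to zero */
        return 0;
    }

The one hazard with such code is that every CTX_LOCK must be balanced by
a CTX_UNLOCK on every path, since the mutex is only actually released
once the recursion count reaches zero.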

Ian.
