
Re: [Xen-devel] [Patch] support of lock profiling in Xen



On 08/10/2009 08:48, "Juergen Gross" <juergen.gross@xxxxxxxxxxxxxx> wrote:

> I'm not completely satisfied with the solution for the dynamically
> initialized locks, but I had no better idea on this first pass.
> Another enhancement would be to expand the profiling to rw-locks as well,
> but this would require rewriting the lock routines to use try_lock, as is
> done for spinlocks. I could do this if the lock profiling is accepted.
> 
> Comments welcome :-)

The method of chaining and initialising the info is kind of icky. Requiring
users to unchain dynamic locks is just asking for this support to be
perpetually broken, or be used only for static locks.

How about defining new initialisers DEFINE_NAMED_SPINLOCK() and
named_spinlock_init()? These would indicate that you consider a lock
important enough to name (and hence profile), and would also categorise
dynamically-allocated locks, causing their stats to be aggregated (after
all, lock optimisations will have an aggregate effect across all locks of
that category).

If lock profiling is compiled in, have a static array of lock-profile
descriptors (this keeps things simple; it could be made a growable array
later). On lock init, walk the array looking for the lock's name. If it is
found, write that entry's index into a new field in the spinlock struct. If
it is not found, allocate the next entry in the lock-profile array,
initialise it, and write its index into the spinlock struct.
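The lookup itself could be a dumb linear scan over a fixed-size table,
something like the following (again only a sketch; the table size and its
lock are illustrative, and index 0 is reserved to mean "not profiled"):

#define NR_LOCK_PROFILES 64

static struct lock_profile lock_profiles[NR_LOCK_PROFILES];
static unsigned int nr_lock_profiles = 1;   /* entry 0 means "not profiled" */
static DEFINE_SPINLOCK(profile_table_lock); /* protects the table itself */

/* Return the profile index for @name, allocating an entry if none
 * exists yet; returns 0 if the table is full or @name is NULL. */
static unsigned int lock_profile_lookup(const char *name)
{
    unsigned int i, idx = 0;

    if ( name == NULL )
        return 0;

    spin_lock(&profile_table_lock);

    for ( i = 1; i < nr_lock_profiles; i++ )
        if ( !strcmp(lock_profiles[i].name, name) )
        {
            idx = i;
            goto out;
        }

    if ( nr_lock_profiles < NR_LOCK_PROFILES )
    {
        idx = nr_lock_profiles++;
        lock_profiles[idx].name = name;
    }

 out:
    spin_unlock(&profile_table_lock);
    return idx;
}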

On lock operations, if the index field in the spinlock is non-zero, update
the stats in the associated profile structure. As for races from multiple
locks aliasing to the same profile structure: either assume they don't
matter much, or update the fields with atomic ops.
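The accounting in the lock path could then be as simple as the following
(same illustrative field names as above; shown with plain increments,
i.e. the "assume the races don't matter much" option):

/* Called from the spin_lock() slow path with the lock's profile index,
 * whether we actually had to spin, and how long we spent doing so. */
static inline void lock_profile_account(unsigned int idx, int blocked,
                                        uint64_t wait_cycles)
{
    struct lock_profile *p;

    if ( idx == 0 )             /* unnamed lock: nothing to account */
        return;

    p = &lock_profiles[idx];

    /* Several locks may share this entry, so either tolerate a little
     * fuzz in the counters or switch these to atomic updates. */
    p->lock_cnt++;
    if ( blocked )
    {
        p->block_cnt++;
        p->block_cycles += wait_cycles;
    }
}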

 -- Keir


