
[Xen-devel] Problems with spin_lock_irqsave() and for_each_online_cpu()


I was triaging a Coverity issue (1055455) which complains about a
spinlock imbalance at common/trace.c:424.

First of all, there is a latent bug here: using "int flags" with an
irqsave/irqrestore pair.  It is safe until bit 31 of RFLAGS is
defined, but will introduce subtle corruption thereafter.

This bug was not caught by the compiler because the
spin_lock_irqsave() macro has slightly non-function-like semantics.
Would it be acceptable to change spin_lock_irqsave() into a static
inline so it can be properly typed?  (This would come with a huge
amount of code churn, as the function would have to take flags by
pointer.)

As for the spinlocks: while I, as the programmer, am fairly sure that
"&per_cpu(t_lock, i)" is unlikely to change, Coverity takes into
account that updates to __per_cpu_offset[i] have no protection.
Because of RELOC_HIDE(), gcc can't hoist the per_cpu() calculation,
but it will work from register-cached values of per_cpu__t_lock and
__per_cpu_offset[i], so it is guaranteed to find the same lock both
times even if __per_cpu_offset[i] changes.

However, it occurs to me that there is no protection between
for_each_online_cpu() finding a bit set in the cpu_online_map and
safely using per_cpu() to poke at another cpu's data.  One solution
would be to use {get,put}_cpu_maps(), but this seems like overkill,
especially as the body of a for_each_online_cpu() loop only cares
about one particular pcpu not disappearing under its feet.

Would it be sensible to have RW locks for each pcpu (specifically not
a per_cpu lock - probably an array of size nr_cpu_ids) specifically
for the purpose of playing with another pcpu's data?  Hopefully this
wouldn't cause too much overhead, but there are a large number of
for_each_online_cpu() calls which would need protecting.

