
Re: [Xen-devel] [RFC/RFT][PATCH 1 of 3] Move locking into pluggable schedulers.



On Wed, 2011-11-23 at 16:24 +0000, Ian Campbell wrote:
> On Wed, 2011-11-23 at 15:07 +0000, Dario Faggioli wrote:
> > Pluggable schedulers' code knows what (and when) to lock better than
> > generic code, let's delegate to them all the concurrency related issues.
> > 
> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> > 
> > diff -r 84b3e46aa7d2 xen/common/sched_credit.c
> > --- a/xen/common/sched_credit.c     Wed Nov 23 12:03:37 2011 +0000
> > +++ b/xen/common/sched_credit.c     Wed Nov 23 15:09:14 2011 +0100
> > @@ -161,6 +161,7 @@ struct csched_dom {
> >   * System-wide private data
> >   */
> >  struct csched_private {
> > +    /* lock for the whole pluggable scheduler, nests inside cpupool_lock */
> >      spinlock_t lock;
> >      struct list_head active_sdom;
> >      uint32_t ncpus;
> 
> Given that every scheduler plugin is going to need this lock perhaps it
> could be provided globally (but still have the responsibility for using
> it fall on the plugin)?

Sorry for the long delay in responding... I don't really like this idea.
For one thing, it would make the generic scheduler code responsible
for creating per-cpupool locks, which could look messy and add to code
complexity. There's already code to allocate per-pool data structures
for the schedulers -- it seems like just leveraging that would be
easiest.  I also think that, logically, having each scheduler in charge
of its own locking makes more sense; having the generic scheduling code
do things on behalf of the pluggable scheduler is what caused this
problem in the first place.

If we think having a one-item struct is ugly, could we just use
spinlock_t as the type we allocate / cast to in the sedf scheduler?

 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

