
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on CreditScheduler TAKE2



Hi Emmanuel,

 Thank you for commenting on my patch.
 I have been hoping for exactly this kind of discussion.

My patch is a kind of full-set solution to this vcpu-pin weight issue.
Of course, it also covers the complex configuration you suggest,
since it calculates credit for each pcpu individually.
(That is why the code is rather long: +365 lines.)
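
To illustrate the per-pcpu calculation, here is a minimal standalone
sketch (the names vcpu_info and accumulate_pcpu_weight are made up for
illustration; this is not the patch code itself): each vcpu's weight is
split evenly over the pcpus in its affinity mask and accumulated into a
per-pcpu total, which an accounting pass could then normalise credit
against.

  #include <stdio.h>

  #define NR_PCPUS 4

  struct vcpu_info {
      int weight;             /* weight apportioned to this vcpu */
      unsigned int affinity;  /* bitmask of pcpus it may run on */
  };

  /* Split each vcpu's weight evenly over its pinned pcpus. */
  static void accumulate_pcpu_weight(const struct vcpu_info *v,
                                     int nr_vcpus,
                                     int pcpu_weight[NR_PCPUS])
  {
      int i, cpu;

      for (i = 0; i < nr_vcpus; i++)
      {
          int pin_count = 0;

          for (cpu = 0; cpu < NR_PCPUS; cpu++)
              if (v[i].affinity & (1u << cpu))
                  pin_count++;

          if (pin_count == 0)   /* empty affinity mask: nothing to add */
              continue;

          for (cpu = 0; cpu < NR_PCPUS; cpu++)
              if (v[i].affinity & (1u << cpu))
                  pcpu_weight[cpu] += v[i].weight / pin_count;
      }
  }

  int main(void)
  {
      /* e.g. one vcpu pinned to pcpus 0-1 at weight 256,
       * one pinned to pcpus 0-3 at weight 512 */
      struct vcpu_info v[] = { { 256, 0x3 }, { 512, 0xf } };
      int pcpu_weight[NR_PCPUS] = { 0 }, cpu;

      accumulate_pcpu_weight(v, 2, pcpu_weight);
      for (cpu = 0; cpu < NR_PCPUS; cpu++)
          printf("pcpu%d: weight %d\n", cpu, pcpu_weight[cpu]);
      return 0;
  }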

Anyway, the pcpu segmentation you suggest is another useful solution,
since the typical use would be seg-1 for dev and seg-2 for the rest.
But it is less flexible than my approach: a vcpu-pin cannot be defined
across multiple segments, as the example after this paragraph shows.
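
For instance (a hypothetical configuration, written in the same
notation as your example below), take seg-1 = pcpus 0-1 and
seg-2 = pcpus 2-7 on an 8-way host:

    VCPU2.0:1-3 weight 256

This vcpu straddles both segments, so neither of the two separate
credit schedulers could own it.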

Which is the best way to solve this?
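
As a side note on the sort tune-up question quoted at the bottom of
this mail: the proposed single loop is essentially a gnome sort; it
steps back one pair after each swap instead of restarting a full
bubble pass. A minimal standalone sketch of that loop (the weights are
made-up values for illustration):

  #include <stdio.h>

  /* Single-loop variant: after a swap, step back one position
   * (k -= 2, then the loop's k++ yields k - 1) and re-check. */
  static void sort_by_weight(int *pcpu_id_list, const int *pcpu_weight,
                             int pin_count)
  {
      int k, tmp;

      for (k = 1; k < pin_count; k++)
      {
          if ( pcpu_weight[pcpu_id_list[k-1]] > pcpu_weight[pcpu_id_list[k]] )
          {
              tmp               = pcpu_id_list[k-1];
              pcpu_id_list[k-1] = pcpu_id_list[k];
              pcpu_id_list[k]   = tmp;
              if (k > 1)
                  k -= 2;
          }
      }
  }

  int main(void)
  {
      int pcpu_weight[]  = { 512, 128, 256, 64 };
      int pcpu_id_list[] = { 0, 1, 2, 3 };
      int k;

      sort_by_weight(pcpu_id_list, pcpu_weight, 4);
      for (k = 0; k < 4; k++)
          printf("pcpu %d (weight %d)\n",
                 pcpu_id_list[k], pcpu_weight[pcpu_id_list[k]]);
      return 0;
  }

The worst case stays O(n^2), the same as the two-loop bubble sort, but
the sortflag bookkeeping and the outer loop disappear.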

Thanks
Atsushi SAKAI


Emmanuel Ackaouy <ackaouy@xxxxxxxxx> wrote:

> I think this patch is too large and intrusive in the common paths.
> I understand the problem you are trying to fix. I don't think it is
> serious enough to call for such a large change. The accounting
> code is already tricky enough, don't you think? If you reduce the
> scope of the problem you're addressing, I think we should be
> able to get a much smaller, cleaner, and more robust change in place.
> 
> There are many different scenarios when using pinning that
> screw with set weights. Have you considered them all?
> 
> For example:
> 
> VCPU0.0:0-1, VCPU0.1:1-2 weight 256
> VCPU1.0:0-2, VCPU1.1:0-2 weight 512
> 
> Does your patch deal with cases when there are multiple
> domains with multiple VCPUs each and not all sharing the
> same cpu affinity mask? I'm not even sure myself what
> should happen in some of these situations...
> 
> I argue that the general problem isn't important to solve. The
> interesting problem is a small subset: When a set of physical
> CPUs are set aside for a specific group of domains, setting
> weights for those domains should behave as expected. For
> example, on an 8-way host, you could set aside 2 CPUs for
> development work and assign different weights to domains
> running in that dev group. You would expect the weights to
> work normally.
> 
> The best way to do this though is not to screw around with
> weights and credit when VCPUs are pinned. The cleanest
> modification is to run distinct credit schedulers: 1 for dev on
> 2 CPUs, and 1 for the rest.
> 
> You could probably achieve this in a much smaller patch which
> would include administrative interfaces for creating and destroying
> these dynamic CPU partition groups, as well as assigning domains to
> them.
> 
> On Jun 27, 2007, at 9:58, Atsushi SAKAI wrote:
> 
> > Hi, Keir
> >
> > This patch intends to take vcpu-pin weight into account in the
> > credit scheduler (TAKE2):
> > http://lists.xensource.com/archives/html/xen-devel/2007-06/msg00359.html
> >
> > The differences from the previous version are:
> > 1) Coding style cleanup.
> > 2) Skip the loop for unused vcpu-pin counts.
> > 3) Remove the pin_count == 1 case from the multiple loop;
> >    pin_count == 1 is now handled by a separate loop.
> >
> > Signed-off-by: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>
> >
> > And one question:
> > does this patch need the following tune-up to reduce the multiple loops?
> >
> > From the following:
> >
> > -  /* sort weight */
> > -  for(j=0;j<pin_count;j++)
> > -  {
> > -      sortflag = 0;
> > -      for(k=1;k<pin_count;k++)
> > -      {
> > -          if ( pcpu_weight[pcpu_id_list[k-1]] > pcpu_weight[pcpu_id_list[k]] )
> > -          {
> > -              sortflag = 1;
> > -              pcpu_id_handle  = pcpu_id_list[k-1];
> > -              pcpu_id_list[k-1] = pcpu_id_list[k];
> > -              pcpu_id_list[k]   = pcpu_id_handle;
> > -          }
> > -      }
> > -      if( sortflag == 0)break;
> > -  }
> >
> > To the following:
> >
> > +     /* sort weight */
> > +     for(k=1;k<pin_count;k++)
> > +     {
> > +          if ( pcpu_weight[pcpu_id_list[k-1]] > pcpu_weight[pcpu_id_list[k]] )
> > +          {
> > +              pcpu_id_handle  = pcpu_id_list[k-1];
> > +              pcpu_id_list[k-1] = pcpu_id_list[k];
> > +              pcpu_id_list[k]   = pcpu_id_handle;
> > +              if (k > 1) k -= 2;
> > +           }
> > +     }
> >
> >
> > Thanks
> > Atsushi SAKAI
> >
> >
> > <vcpupinweight0627.patch>
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

