
Re: [Xen-devel] [PATCH][cpufreq] Xen support for the ondemand governor [1/2] (hypervisor code)


  • To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, Mark Langsdorf <mark.langsdorf@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Wed, 24 Oct 2007 08:08:17 +0100
  • Delivery-date: Wed, 24 Oct 2007 00:03:39 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcgVv88VL5THxLfUTNGPaJBcSJlVYQAKUKmwAAjlNnA=
  • Thread-topic: [Xen-devel] [PATCH][cpufreq] Xen support for the ondemand governor [1/2] (hypervisor code)

On 24/10/07 04:08, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

>> Modify the cpufreq ondemand governor so that it can get idle and
>> total ticks from the Xen hypervisor.  Linux and Xen have different
>> ideas of what an idle tick is, so the Xen values for both have to
>> be returned in the same platform hypercall.
>> 
>> Signed-off-by: Mark Langsdorf <mark.langsdorf@xxxxxxx>
> 
> I would suggest adding a bit mask to getidletime, and then fetching
> idle stats only for the CPUs of interest. Currently returning stats for
> [0-max_cpus] is overkill when the on-demand governor only takes care of
> one cpu (hw coordination) or sibling cores (sw coordination).
> 
> Also, there's no need to return the total time for each concerned cpu.
> In the sw coordination model, the on-demand governor runs on only one
> cpu, and getidletime is called only on that agent cpu, which handles
> the idle stats for all the rest. Naturally the elapsed cycles since the
> last sample point should be the same on all affected cpus, so it's
> useless to calculate them individually. You just need to stamp NOW()
> for the sample point.

Both good suggestions. Taking a cpumask seems a good idea. I'll add that
myself.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
