
Re: [Xen-devel] [RFC PATCH v2 10/12] cpufreq: add dom0-cpufreq driver



On Fri, Oct 17, 2014 at 4:03 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 16.10.14 at 13:27, <oleksandr.dmytryshyn@xxxxxxxxxxxxxxx> wrote:
>> This driver uses Dom0 to change frequencies on CPUs
>
> This surely is too little of an explanation, not the least because this
> approach is still being argued about. Regardless of me not being in
> favor of the approach, a couple of comments:
>
>> +#include <xen/types.h>
>> +#include <xen/errno.h>
>> +#include <xen/sched.h>
>> +#include <xen/event.h>
>> +#include <xen/irq.h>
>> +#include <xen/spinlock.h>
>> +#include <xen/cpufreq.h>
>> +#include <asm/current.h>
>> +
>> +struct acpi_cpufreq_data *cpufreq_drv_data[NR_CPUS];
>
> static (and renamed) or you need to abstract out the variable from
> the other driver. In no case should there be two definitions of the
> same variable.
I'll do this in the next patch-set

>> +static void notify_cpufreq_domains(void)
>
> Why "domains", not "hwdom"? You don't really want to send this
> to other than the hardware domain I hope.
All domains (not only the hardware domain) should receive this interrupt.
If the hardware domain is a Linux kernel, it can recalculate jiffies
automatically, but the other domains need to recalculate their jiffies as
well. I'll implement this mechanism a bit later.

>> +    /* Do send cmd for Dom0 */
>> +    spin_lock(&sysctl_cpufreq_lock);
>> +    /* return previous result */
>> +    ret = sysctl_cpufreq_data.result;
>> +
>> +    sysctl_cpufreq_data.cpu = policy->cpu;
>> +    sysctl_cpufreq_data.freq = freqs.new;
>> +    sysctl_cpufreq_data.relation = (uint32_t)relation;
>> +    spin_unlock(&sysctl_cpufreq_lock);
>
> sysctl_cpufreq_data fields appear to be accessed only here. Is the
> patch incomplete? It is certainly not possible to understand how this
> is intended to work without the other sides being there.
At the moment the sysctl_cpufreq_data fields are accessed only here, but
they will also be accessed in the XEN_SYSCTL_cpufreq_op handler (added in
the next patch of this series).
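For context, a rough sketch of how a XEN_SYSCTL_cpufreq_op handler might consume these fields. The op structure, command values and field names below are invented for illustration; the real interface only appears in the next patch of the series:

/* Hypothetical consumer side, for illustration only. */
struct xen_sysctl_cpufreq_op {
    uint32_t cmd;        /* CPUFREQ_OP_GET_REQUEST or CPUFREQ_OP_SET_RESULT */
    uint32_t cpu;
    uint32_t freq;
    uint32_t relation;
    int32_t  result;
};

#define CPUFREQ_OP_GET_REQUEST 0
#define CPUFREQ_OP_SET_RESULT  1

static long sysctl_cpufreq_op(struct xen_sysctl_cpufreq_op *op)
{
    long ret = 0;

    spin_lock(&sysctl_cpufreq_lock);
    switch ( op->cmd )
    {
    case CPUFREQ_OP_GET_REQUEST:
        /* The hardware domain fetches the pending transition request. */
        op->cpu      = sysctl_cpufreq_data.cpu;
        op->freq     = sysctl_cpufreq_data.freq;
        op->relation = sysctl_cpufreq_data.relation;
        break;
    case CPUFREQ_OP_SET_RESULT:
        /* The hardware domain reports the outcome of the last transition;
         * the target() handler above returns it on its next invocation. */
        sysctl_cpufreq_data.result = op->result;
        break;
    default:
        ret = -EINVAL;
        break;
    }
    spin_unlock(&sysctl_cpufreq_lock);

    return ret;
}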

>> +static int
>> +dom0_cpufreq_cpu_init(struct cpufreq_policy *policy)
>
> Please avoid the term dom0 where possible, and where not truly
> meaning dom0. The tree got switched to use hwdom instead a
> while ago.
I'll fix this in the next patch-set

>> +    data->freq_table = xmalloc_array(struct cpufreq_frequency_table,
>> +                                    (perf->state_count+1));
>
> Why count+1 here but ...
It is count+1 because the last entry of the frequency table must be
reserved for the CPUFREQ_TABLE_END terminator.
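For illustration, assuming Xen's usual cpufreq_frequency_table layout, that extra slot is what eventually holds the terminator:

    /* The slot at index 'state_count' terminates the table. */
    data->freq_table[perf->state_count].frequency = CPUFREQ_TABLE_END;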

>> +    if ( !data->freq_table )
>> +    {
>> +        ret = -ENOMEM;
>> +        goto err_unreg;
>> +    }
>> +
>> +    /* detect transition latency */
>> +    policy->cpuinfo.transition_latency = 0;
>> +    for ( i=0; i<perf->state_count; i++ )
>
> ... iterating up to count only here? Also there are a number of blanks
> missing in this for() (also again below).
The loop iterates up to count because there are at most 'count' valid
states; the one additional entry is reserved for CPUFREQ_TABLE_END.
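A sketch of that loop with the missing blanks added, assuming the same perf->states[] fields the existing acpi-cpufreq driver uses:

    /* Fill the 'state_count' valid entries and track the worst-case
     * transition latency; the extra slot at [state_count] is reserved
     * for the CPUFREQ_TABLE_END terminator written afterwards. */
    policy->cpuinfo.transition_latency = 0;
    for ( i = 0; i < perf->state_count; i++ )
    {
        if ( perf->states[i].transition_latency * 1000 >
             policy->cpuinfo.transition_latency )
            policy->cpuinfo.transition_latency =
                perf->states[i].transition_latency * 1000;

        data->freq_table[i].index = i;
        data->freq_table[i].frequency =
            perf->states[i].core_frequency * 1000;
    }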

>> +err_freqfree:
>> +    xfree(data->freq_table);
>> +err_unreg:
>
> Labels indented by at least one space please.
I'll fix this in the next patch-set
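For reference, with Xen's preferred style the labels from the hunk above would look like:

 err_freqfree:
    xfree(data->freq_table);
 err_unreg: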

>> --- a/xen/include/public/xen.h
>> +++ b/xen/include/public/xen.h
>> @@ -160,6 +160,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>>  #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured          */
>>  #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                    */
>>  #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
>> +#define VIRQ_CPUFREQ    13 /* G. (DOM0) Notify dom0-cpufreq driver.          */
>
> I think with vPMU already wanting to use 13 you should pick 14 right
> away so that your Dom0 side code doesn't become incompatible later.
I'll fix this in the next patch-set
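Assuming vPMU does end up taking 13, the reworked hunk would presumably read:

+#define VIRQ_CPUFREQ    14 /* G. (DOM0) Notify dom0-cpufreq driver.          */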

> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

