RE: [PATCH v6 19/19] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC xen_sysctl_pm_op for amd-cppc driver
[Public]

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Thursday, July 24, 2025 10:44 PM
> To: Penny, Zheng <penny.zheng@xxxxxxx>
> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Anthony PERARD <anthony.perard@xxxxxxxxxx>;
> Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Orzel, Michal <Michal.Orzel@xxxxxxx>;
> Julien Grall <julien@xxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v6 19/19] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC
> xen_sysctl_pm_op for amd-cppc driver
>
> On 11.07.2025 05:51, Penny Zheng wrote:
> > Introduce helper set_amd_cppc_para() and get_amd_cppc_para() to
> > SET/GET CPPC-related para for amd-cppc/amd-cppc-epp driver.
> >
> > In get_cpufreq_cppc()/set_cpufreq_cppc(), we include
> > "processor_pminfo[cpuid]->init & XEN_CPPC_INIT" condition check to
> > deal with cpufreq driver in amd-cppc.
> >
> > Also, a new field "policy" has also been added in "struct xen_get_cppc_para"
> > to describe performance policy in active mode. It gets printed with
> > other cppc paras. Move manifest constants "XEN_CPUFREQ_POLICY_xxx" to
> > public header to let it be used in user space tools. Also add a new
> > anchor "XEN_CPUFREQ_POLICY_xxx" for array overrun check.
>
> If only they indeed had XEN_ prefixes.
>
> > Signed-off-by: Penny Zheng <Penny.Zheng@xxxxxxx>
> > ---
> > v1 -> v2:
> > - Give the variable des_perf an initializer of 0
> > - Use the strncmp()s directly in the if()
> > ---
> > v3 -> v4
> > - refactor comments
> > - remove double blank lines
> > - replace amd_cppc_in_use flag with XEN_PROCESSOR_PM_CPPC
> > ---
> > v4 -> v5:
> > - add new field "policy" in "struct xen_cppc_para"
> > - add new performamce policy XEN_CPUFREQ_POLICY_BALANCE
> > - drop string comparisons with "processor_pminfo[cpuid]->init & XEN_CPPC_INIT"
> >   and "cpufreq.setpolicy == NULL"
> > - Blank line ahead of the main "return" of a function
> > - refactor comments, commit message and title
> > ---
> > v5 -> v6:
> > - remove duplicated manifest constants, and just move it to public header
> > - use "else if" to avoid confusion that it looks as if both paths could be taken
> > - add check for legitimate perf values
> > - use "unknown" instead of "none"
> > - introduce "CPUFREQ_POLICY_END" for array overrun check in user space tools
> > +         (set_cppc->maximum > data->caps.highest_perf ||
> > +          set_cppc->maximum < data->caps.lowest_nonlinear_perf) )
> > +        return -EINVAL;
> > +    /*
> > +     * Minimum performance may be set to any performance value in the range
> > +     * [Nonlinear Lowest Performance, Highest Performance], inclusive but must
> > +     * be set to a value that is less than or equal to Maximum Performance.
> > +     */
> > +    if ( set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MINIMUM &&
> > +         (set_cppc->minimum < data->caps.lowest_nonlinear_perf ||
> > +          (set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MAXIMUM &&
> > +           set_cppc->minimum > set_cppc->maximum) ||
> > +          (!(set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MAXIMUM) &&
>
> Hmm, I find this confusing to read, and was first thinking the ! was wrong
> here. Imo such is better expressed with the conditional operator:
>
>     set_cppc->minimum > (set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MAXIMUM
>                          ? set_cppc->maximum
>                          : data->req.max_perf)

Thx, understood!
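I.e. the minimum check would then read roughly like the below. This is only a
sketch of the suggested rewrite, reusing the field and flag names from the hunk
quoted above; the surrounding code and exact comment wording may differ:

    /*
     * Minimum must lie in [lowest_nonlinear_perf, effective maximum], where
     * the effective maximum is the value being set in this same request, if
     * any, or the previously accepted data->req.max_perf otherwise.
     */
    if ( set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MINIMUM &&
         (set_cppc->minimum < data->caps.lowest_nonlinear_perf ||
          set_cppc->minimum > (set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MAXIMUM
                               ? set_cppc->maximum
                               : data->req.max_perf)) )
        return -EINVAL;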
> Which also makes it easier to spot that here you use data->req, when in the
> minimum check you use data->caps. Why this difference?

The minimum check has two boundary checks: the left boundary is checked
against data->caps.lowest_nonlinear_perf, and the right boundary against
data->req.max_perf, since the minimum must be no larger than not only
caps.highest_perf but also req.max_perf. The relation between max_perf and
highest_perf is already validated in the maximum check, so here we only need
to compare against max_perf (the resulting chain of bounds is spelled out
below).

> Jan
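Spelled out with the same names as in the sketch above, and assuming
data->req.max_perf only ever holds a value that was itself accepted by the
maximum check (or initialised within the caps range), the chain of bounds is:

    /*
     * Maximum check:  caps.lowest_nonlinear_perf <= max_perf <= caps.highest_perf
     * Minimum check:  caps.lowest_nonlinear_perf <= minimum  <= max_perf
     * Together:       minimum <= max_perf <= caps.highest_perf
     * so bounding the minimum by data->req.max_perf (or by the maximum being
     * set in the same request) already keeps it within caps.highest_perf.
     */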