
RE: [PATCH v6 19/19] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC xen_sysctl_pm_op for amd-cppc driver


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: "Penny, Zheng" <penny.zheng@xxxxxxx>
  • Date: Thu, 14 Aug 2025 07:34:25 +0000
  • Accept-language: en-US
  • Cc: "Huang, Ray" <Ray.Huang@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "Orzel, Michal" <Michal.Orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 14 Aug 2025 07:34:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v6 19/19] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC xen_sysctl_pm_op for amd-cppc driver

[Public]

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Thursday, August 14, 2025 2:40 PM
> To: Penny, Zheng <penny.zheng@xxxxxxx>
> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Anthony PERARD
> <anthony.perard@xxxxxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>;
> Orzel, Michal <Michal.Orzel@xxxxxxx>; Julien Grall <julien@xxxxxxx>; Roger Pau
> Monné <roger.pau@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v6 19/19] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC
> xen_sysctl_pm_op for amd-cppc driver
>
> On 14.08.2025 05:13, Penny, Zheng wrote:
> > [Public]
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@xxxxxxxx>
> >> Sent: Thursday, July 24, 2025 10:44 PM
> >> To: Penny, Zheng <penny.zheng@xxxxxxx>
> >> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Anthony PERARD
> >> <anthony.perard@xxxxxxxxxx>; Andrew Cooper
> >> <andrew.cooper3@xxxxxxxxxx>; Orzel, Michal <Michal.Orzel@xxxxxxx>;
> >> Julien Grall <julien@xxxxxxx>; Roger Pau Monné
> >> <roger.pau@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> >> xen-devel@xxxxxxxxxxxxxxxxxxxx
> >> Subject: Re: [PATCH v6 19/19] xen/cpufreq: Adapt
> SET/GET_CPUFREQ_CPPC
> >> xen_sysctl_pm_op for amd-cppc driver
> >>
> >> On 11.07.2025 05:51, Penny Zheng wrote:
> >>> Introduce helper set_amd_cppc_para() and get_amd_cppc_para() to
> >>> SET/GET CPPC-related para for amd-cppc/amd-cppc-epp driver.
> >>>
> >>> In get_cpufreq_cppc()/set_cpufreq_cppc(), we include
> >>> "processor_pminfo[cpuid]->init & XEN_CPPC_INIT" condition check to
> >>> deal with cpufreq driver in amd-cppc.
> >>>
> >>> Also, a new field "policy" has also been added in "struct 
> >>> xen_get_cppc_para"
> >>> to describe performance policy in active mode. It gets printed with
> >>> other cppc paras. Move manifest constants "XEN_CPUFREQ_POLICY_xxx"
> >>> to public header to let it be used in user space tools. Also add a
> >>> new anchor "XEN_CPUFREQ_POLICY_xxx" for array overrun check.
> >>
> >> If only they indeed had XEN_ prefixes.
> >>
> >>> Signed-off-by: Penny Zheng <Penny.Zheng@xxxxxxx>
> >>> ---
> >>> v1 -> v2:
> >>> - Give the variable des_perf an initializer of 0
> >>> - Use the strncmp()s directly in the if()
> >>> ---
> >>> v3 -> v4
> >>> - refactor comments
> >>> - remove double blank lines
> >>> - replace amd_cppc_in_use flag with XEN_PROCESSOR_PM_CPPC
> >>> ---
> >>> v4 -> v5:
> >>> - add new field "policy" in "struct xen_cppc_para"
> >>> - add new performance policy XEN_CPUFREQ_POLICY_BALANCE
> >>> - drop string comparisons with "processor_pminfo[cpuid]->init &
> >> XEN_CPPC_INIT"
> >>> and "cpufreq.setpolicy == NULL"
> >>> - Blank line ahead of the main "return" of a function
> >>> - refactor comments, commit message and title
> >>> ---
> >>> v5 -> v6:
> >>> - remove duplicated manifest constants, and just move it to public
> >>> header
> >>> - use "else if" to avoid confusion that it looks as if both paths
> >>> could be taken
> >>> - add check for legitimate perf values
> >>> - use "unknown" instead of "none"
> >>> - introduce "CPUFREQ_POLICY_END" for array overrun check in user
> >>> space tools
> >>> +         (set_cppc->maximum > data->caps.highest_perf ||
> >>> +          set_cppc->maximum < data->caps.lowest_nonlinear_perf) )
> >>> +        return -EINVAL;
> >>> +    /*
> >>> +     * Minimum performance may be set to any performance value in the
> >>> +     * range [Nonlinear Lowest Performance, Highest Performance],
> >>> +     * inclusive, but must be set to a value that is less than or equal
> >>> +     * to Maximum Performance.
> >>> +     */
> >>> +    if ( set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MINIMUM &&
> >>> +         (set_cppc->minimum < data->caps.lowest_nonlinear_perf ||
> >>> +          (set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MAXIMUM &&
> >>> +           set_cppc->minimum > set_cppc->maximum) ||
> >>> +          (!(set_cppc->set_params & XEN_SYSCTL_CPPC_SET_MAXIMUM) &&
> >>
> >> Hmm, I find this confusing to read, and was first thinking the ! was
> >> wrong here. Imo such is better expressed with the conditional operator:
> >>
> >>
> >>           set_cppc->minimum > (set_cppc->set_params &
> >>                                XEN_SYSCTL_CPPC_SET_MAXIMUM
> >>                                ? set_cppc->maximum
> >>                                : data->req.max_perf)
> >>
> >
> > Thx, understood!
> >
> >> Which also makes it easier to spot that here you use data->req, when
> >> in the minimum check you use data->caps. Why this difference?
> >>
> >
> > The minimum check has two boundary checks: the lower bound is validated
> > against data->caps.lowest_nonlinear_perf, while the upper bound is
> > validated against data->req.max_perf, since the minimum shall be no
> > larger than req.max_perf as well as caps.highest_perf. The relation
> > between max_perf and highest_perf is already validated in the maximum
> > check, so here we only need to consider max_perf.
>
> I still don't get why one check is against capabilities (permitted values)
> while the other is against what's currently set.

It needs to meet the following two criteria:

1. caps.lowest_nonlinear_perf <= min_perf <= caps.highest_perf
2. min_perf <= max_perf. If the user doesn't set max_perf in the same
request, we use the value stored in req.max_perf, i.e. the last value
that was set.
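For illustration, the two criteria above could be condensed into a standalone
validation helper along these lines (a minimal sketch only; the struct, flag,
and function names here are simplified placeholders, not the actual Xen code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver's capability and request state. */
struct cppc_caps {
    uint8_t lowest_nonlinear_perf;
    uint8_t highest_perf;
};

struct cppc_req {
    uint8_t max_perf;   /* last maximum accepted by a SET_MAXIMUM request */
};

#define SET_MINIMUM (1u << 0)
#define SET_MAXIMUM (1u << 1)

/*
 * Validate a requested minimum performance value against the two criteria:
 *  1. caps.lowest_nonlinear_perf <= min <= caps.highest_perf
 *  2. min <= max, where "max" is the maximum being set in the same request
 *     if SET_MAXIMUM is present, or the previously stored req.max_perf
 *     otherwise.
 */
static bool min_perf_ok(unsigned int set_params, uint8_t min, uint8_t max,
                        const struct cppc_caps *caps,
                        const struct cppc_req *req)
{
    if ( !(set_params & SET_MINIMUM) )
        return true;                      /* no minimum requested */

    if ( min < caps->lowest_nonlinear_perf || min > caps->highest_perf )
        return false;                     /* criterion 1 */

    /* Criterion 2, written with the conditional operator. */
    return min <= (set_params & SET_MAXIMUM ? max : req->max_perf);
}
```

Note that when SET_MAXIMUM is absent, the stored req.max_perf was itself
validated against caps.highest_perf when it was set, so only the single
comparison against it is needed for the upper bound.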

>
> Jan

 

