
Re: [PATCH 2/5] cpufreq/hwp: move driver data into policy


  • To: Jason Andryuk <jason.andryuk@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 26 Jan 2026 10:08:02 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 26 Jan 2026 09:08:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23.01.2026 23:35, Jason Andryuk wrote:
> On 2026-01-22 04:42, Jan Beulich wrote:
>> Share space with the ACPI and powernow drivers, avoiding a separate
>> allocation for each CPU. Except for get_hwp_para() all use sites already
>> have the policy available, and this one function can simply be brought
>> closer to its sibling set_hwp_para().
> 
> Minor, but maybe 's/function/function's signature/'.  The original 
> phrasing made me think it was code movement.

I don't see an issue there, but I've adjusted as you asked for.
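
A minimal sketch of the idea, assuming hypothetical type and field names (it
doesn't claim to match the actual patch): the HWP state moves from a separately
allocated per-CPU pointer into space inside the policy itself, shared with the
other drivers' data, so accesses go through the policy the callers already hold,
and get_hwp_para() can take the policy just like set_hwp_para().

    /* Hypothetical sketch -- illustrative names only, not the real Xen code. */
    struct hwp_drv_data {
        unsigned int min_perf, max_perf;       /* example fields only */
        unsigned int energy_perf;
    };

    struct acpi_cpufreq_data {
        unsigned int resume;                   /* example field only */
    };

    struct cpufreq_policy {
        unsigned int cpu;                      /* CPU recorded as managing this policy */
        union {                                /* space shared among the drivers, so  */
            struct acpi_cpufreq_data acpi;     /* HWP no longer needs its own per-CPU */
            struct hwp_drv_data hwp;           /* allocation                          */
        } u;
    };

    /*
     * With the data living in the policy, get_hwp_para() takes the policy,
     * mirroring set_hwp_para(), instead of indexing per-CPU data by CPU number.
     */
    static const struct hwp_drv_data *get_hwp_para(const struct cpufreq_policy *policy)
    {
        return &policy->u.hwp;
    }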

>> This then also eliminates the concern over hwp_cpufreq_cpu_init() being
>> called for all CPUs
> 
> We expect hwp_cpufreq_cpu_init() to be called for each CPU, so I am 
> confused by this statement.  The data...

Well, "expect" is the problem. Recall my pointing out of the problem when
having noticed the same pattern in the amd-cppc driver patches. My
recollection from the discussion is that there's no formal statement of
...

>>, or a CPU going offline that's recorded in
>> policy->cpu (which would result in accesses of per-CPU data of offline
>> CPUs).
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> hwp_cpufreq_target() still requires policy->cpu to be online, though.
>>
>> --- a/xen/arch/x86/acpi/cpufreq/hwp.c
>> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> 
>> -static DEFINE_PER_CPU_READ_MOSTLY(struct hwp_drv_data *, hwp_drv_data);
> 
> ... here is tracked and filled per-CPU.
> 
> Do we need cpufreq_add_cpu() to force hw_all = 1 for HWP (and maybe 
> AMD-CPPC) to ensure that policy is allocated per-CPU?

... this being a correct thing to do, hence imo our code had better be
resilient to it being done differently somewhere.

> Are we implicitly relying on shared_type == CPUFREQ_SHARED_TYPE_HW to do 
> that for us?

Right now we do, I believe, without - as said above - this being actually
mandated by the spec.
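
A minimal sketch of the dependency in question, with made-up helper names and
not the literal cpufreq_add_cpu() code: only when firmware reports hardware
coordination for the P-state domain does each CPU get its own policy (and hence
its own cpu_init() invocation); with ANY/ALL coordination the CPUs of a domain
share one policy, which is what a driver silently assuming one policy per CPU
would trip over.

    /* Rough paraphrase of the dependency, not the literal Xen code; the
     * enumerators below merely stand in for CPUFREQ_SHARED_TYPE_*. */
    enum shared_type {
        SHARED_TYPE_ANY,      /* coordination as reported by firmware */
        SHARED_TYPE_ALL,      /* for the CPUs of a P-state domain     */
        SHARED_TYPE_HW,
    };

    /* Hypothetical helpers standing in for the real policy management. */
    static int alloc_per_cpu_policy(unsigned int cpu) { (void)cpu; return 0; }
    static int share_domain_policy(unsigned int cpu) { (void)cpu; return 0; }

    static int hypothetical_add_cpu(unsigned int cpu, enum shared_type shared)
    {
        /*
         * HW coordination: every CPU gets its own policy, so the driver's
         * cpu_init() hook runs once per CPU.  ANY/ALL coordination: CPUs of
         * the domain share one policy, so per-CPU assumptions in the driver
         * no longer hold.
         */
        if ( shared == SHARED_TYPE_HW )
            return alloc_per_cpu_policy(cpu);

        return share_domain_policy(cpu);
    }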

Jan
