RE: [PATCH v9 1/8] xen/cpufreq: embed hwp into struct cpufreq_policy{}
[Public]

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Monday, September 8, 2025 6:02 PM
> To: Penny, Zheng <penny.zheng@xxxxxxx>
> Cc: Andryuk, Jason <Jason.Andryuk@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v9 1/8] xen/cpufreq: embed hwp into struct cpufreq_policy{}
>
> (re-adding the list)
>
> On 05.09.2025 06:58, Penny, Zheng wrote:
> > [Public]
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@xxxxxxxx>
> >> Sent: Thursday, September 4, 2025 7:51 PM
> >> To: Penny, Zheng <penny.zheng@xxxxxxx>; Andryuk, Jason <Jason.Andryuk@xxxxxxx>
> >> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> >> Subject: Re: [PATCH v9 1/8] xen/cpufreq: embed hwp into struct cpufreq_policy{}
> >>
> >> On 04.09.2025 08:35, Penny Zheng wrote:
> >>> For cpus sharing one cpufreq domain, cpufreq_driver.init() is only
> >>> invoked on the first cpu, so the current per-CPU hwp driver data
> >>> struct hwp_drv_data{} actually fails to be allocated for cpus other
> >>> than the first one. There is no need to make it per-CPU.
> >>> We embed struct hwp_drv_data{} into struct cpufreq_policy{}, so that
> >>> cpus can share the hwp driver data allocated for the first cpu, the
> >>> same way they share struct cpufreq_policy{}. We also make it a
> >>> union, with "hwp", and later "amd-cppc", as sub-structs.
> >>
> >> And ACPI, as per my patch (which will then need re-basing).
> >>
> >>> Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>
> >> Not quite: this really is Reported-by:, as it's a bug you fix, and in
> >> turn it also wants to gain a Fixes: tag. This also will need backporting.
> >>
> >> It would also have been nice if you had Cc-ed Jason right away,
> >> seeing that this code was all written by him.
> >>
> >>> @@ -259,7 +258,7 @@ static int cf_check hwp_cpufreq_target(struct cpufreq_policy *policy,
> >>>                                        unsigned int relation)
> >>> {
> >>>     unsigned int cpu = policy->cpu;
> >>> -   struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
> >>> +   struct hwp_drv_data *data = policy->u.hwp;
> >>>     /* Zero everything to ensure reserved bits are zero... */
> >>>     union hwp_request hwp_req = { .raw = 0 };
> >>
> >> Further down in this same function we have
> >>
> >>     on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
> >>
> >> That's similarly problematic when the CPU denoted by policy->cpu
> >> isn't online anymore. (It's not quite clear whether all related
> >> issues would want fixing together, or in multiple patches.)
> >
> > Checking the logic in cpufreq_del_cpu(), once any processor in the
> > domain gets offline, the governor will stop.
>
> Yet with HWP and CPPC drivers being governor-less, how would that matter?

In CPPC passive mode, we are still relying on the governor to do performance
tuning. In CPPC active mode, yes, it is governor-less; how about we disable
the CPPC-enable bit for the offline cpus?
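Concretely, something like the sketch below is what I mean. This is only a
rough sketch: the constant name here is made up (the MSR is CppcEnable,
MSRC001_02B1, per the PPR), and it would first need confirming that the
enable bit can be cleared by software at all, since the PPR may define it
as write-1-only until reset on some families.

```
#include <xen/cpu.h>
#include <xen/notifier.h>
#include <asm/msr.h>

/* Name assumed; MSRC001_02B1 (CppcEnable) per the PPR. */
#define MSR_AMD_CPPC_ENABLE 0xc00102b1

/*
 * CPU_DYING is raised on the CPU that is going down, so a plain
 * wrmsrl() targets the right core without needing an IPI.
 */
static int cf_check cppc_cpu_callback(struct notifier_block *nfb,
                                      unsigned long action, void *hcpu)
{
    if ( action == CPU_DYING )
        wrmsrl(MSR_AMD_CPPC_ENABLE, 0);

    return NOTIFY_DONE;
}

static struct notifier_block cppc_cpu_nfb = {
    .notifier_call = cppc_cpu_callback,
};

/* During driver init: register_cpu_notifier(&cppc_cpu_nfb); */
```

If the bit turns out not to be clearable, the fallback would presumably be
to filter out offline cpus before calls like the on_selected_cpus() quoted
above, instead.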
> > still be effective. Which also complies with the description of "Num
> > Processors" in the ACPI spec's _PSD (P-State Dependency) package [1]:
> >
> > ```
> > The number of processors belonging to the domain for this logical
> > processor’s P-states. OSPM will not start performing power state
> > transitions to a particular P-state until this number of processors
> > belonging to the same domain have been detected and started.
> > ```
> >
> > [1] https://uefi.org/specs/ACPI/6.5/08_Processor_Configuration_and_Control.html?highlight=cppc#pstatedependency-package-values
> >
> > I know that AMD CPPC obeys the _PSD dependency relation too, even
> > though both the CPPC Request register
> > (ccd[15:0]_lthree0_core[7:0]_thread[1:0]; MSRC001_02B3) and the CPPC
> > Capability register (_ccd[15:0]_lthree0_core[7:0]_thread[1:0];
> > MSRC001_02B0) are per-thread MSRs.
> > I don't have the hardware to test the "sharing" logic. All my
> > platforms say "HW_ALL" in _PSD.
>
> Aiui that's not mandated by the CPU spec, though. Plus HW_ALL still
> doesn't say anything about the scope/size of each domain.
>
> Jan
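P.S. For readers following the thread, this is the shape of the change
under discussion, as I read the quoted hunk. Only the "u.hwp" member is
confirmed by the hunk; the amd-cppc member name and type are assumptions
for illustration, and Jan's ACPI driver data would join the same union.

```
/* Driver-private data hangs off the per-domain policy, shared by all
 * cpus of the domain, instead of living in a per-CPU variable that was
 * only ever allocated for the domain's first cpu. */
struct cpufreq_policy {
    /* ... existing members ... */
    unsigned int cpu;                    /* first ("owner") cpu of the domain */
    union {
        struct hwp_drv_data *hwp;        /* Intel HWP */
        struct amd_cppc_drv_data *cppc;  /* later: amd-cppc (name assumed) */
    } u;
};
```

Since every cpu in a domain resolves to the same policy, policy->u.hwp
stays valid no matter which cpu a request comes in on, which is exactly
what per_cpu(hwp_drv_data, cpu) failed to guarantee.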