Re: [Xen-devel] [PATCH] use per-cpu variables in cpufreq
On 06/10/11 21:00, Langsdorf, Mark wrote:
> After Keir's comments downthread, are we going to see a fresh patch
> for review?

Yes, I think I'll have some time this week to do this.
Sorry for the delay...

Juergen

> --Mark Langsdorf
> Operating System Research Center
> AMD
>
> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Juergen Gross
> Sent: Friday, May 27, 2011 6:11 AM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-devel] [PATCH] use per-cpu variables in cpufreq
>
> The cpufreq driver used some local arrays indexed by cpu number.
> This patch replaces those arrays by per-cpu variables.
>
> The AMD and INTEL specific parts used different per-cpu data structures
> with nearly identical semantics. Fold the two structures into one by
> adding a generic architecture data item.
>
> Signed-off-by: juergen.gross@xxxxxxxxxxxxxx
>
>  8 files changed, 58 insertions(+), 66 deletions(-)
>  xen/arch/x86/acpi/cpufreq/cpufreq.c       |   36 ++++++++++++------------
>  xen/arch/x86/acpi/cpufreq/powernow.c      |   43 +++++++++++------------------
>  xen/drivers/acpi/pmstat.c                 |    6 ++--
>  xen/drivers/cpufreq/cpufreq.c             |   24 ++++++++--------
>  xen/drivers/cpufreq/cpufreq_ondemand.c    |    2 -
>  xen/drivers/cpufreq/utility.c             |    8 ++---
>  xen/include/acpi/cpufreq/cpufreq.h        |    3 +-
>  xen/include/acpi/cpufreq/processor_perf.h |    2 --
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions           e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                          Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
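For context, a minimal sketch of the kind of conversion the quoted patch
describes: replacing an NR_CPUS-indexed array with a Xen DEFINE_PER_CPU
variable accessed via per_cpu(). The names struct drv_data, cpu_drv_data
and get_max_freq are hypothetical placeholders, not the identifiers
actually touched by the patch.

    /* Hypothetical sketch only; names are placeholders, not the actual
     * identifiers changed by the patch. */
    #include <xen/percpu.h>
    #include <xen/smp.h>

    struct drv_data {
        unsigned int max_freq;
        void *arch;          /* generic architecture data item (AMD/Intel) */
    };

    /* Before: a local array indexed by cpu number.
     *   static struct drv_data *cpu_drv_data[NR_CPUS];
     *   ... cpu_drv_data[cpu]->max_freq ...
     *
     * After: a per-cpu variable, one instance per CPU.
     */
    static DEFINE_PER_CPU(struct drv_data *, cpu_drv_data);

    static unsigned int get_max_freq(unsigned int cpu)
    {
        struct drv_data *data = per_cpu(cpu_drv_data, cpu);

        return data ? data->max_freq : 0;
    }

Folding the previously separate AMD and Intel per-cpu structures into one
then amounts to keeping the common fields in the shared structure and
hanging any vendor-specific state off the generic arch pointer.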