
Re: [Xen-devel] [PATCH v3 01/14] x86/cpu: Create Hygon Dhyana architecture support file



>>> On 25.03.19 at 14:29, <puwen@xxxxxxxx> wrote:
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -32,11 +32,6 @@
>  static char __initdata opt_famrev[14];
>  string_param("cpuid_mask_cpu", opt_famrev);
>  
> -static unsigned int __initdata opt_cpuid_mask_l7s0_eax = ~0u;
> -integer_param("cpuid_mask_l7s0_eax", opt_cpuid_mask_l7s0_eax);
> -static unsigned int __initdata opt_cpuid_mask_l7s0_ebx = ~0u;
> -integer_param("cpuid_mask_l7s0_ebx", opt_cpuid_mask_l7s0_ebx);

This is no longer needed - all references are now local to amd.c.

> @@ -116,6 +121,9 @@ bool __init probe_cpuid_faulting(void)
>       uint64_t val;
>       int rc;
>  
> +     if(boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
> +             return false;
> +
>       if ((rc = rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val)) == 0)

Isn't this similarly true for AMD? In that case, adding both checks at the
same time, in a separate patch, would seem better. Then again - did you
look at the description of the commit that moved the function here
(6e2fdc0f89 ["x86: Common cpuid faulting support"])? Hence, if anything,
this would need qualifying by !cpu_has_hypervisor.
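
To illustrate - a rough sketch only, not necessarily the final shape
(assuming the X86_VENDOR_HYGON value this patch introduces and the
existing cpu_has_hypervisor predicate):

    /*
     * Neither vendor implements CPUID faulting natively; with no
     * hypervisor underneath us, nothing can emulate it either.
     */
    if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
         boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) &&
        !cpu_has_hypervisor)
        return false;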

Also, the contextual if() tells you that a blank is missing after "if" in
the one you add. There is, however, a wider style question to raise: this
file is not a Linux clone, so generally I'd expect it to be written in
plain Xen style.
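
I.e. adding the blank would keep the line consistent with its Linux-style
context, whereas plain Xen style (see ./CODING_STYLE) would look like:

    /* Linux style, matching the surrounding code: */
    if (boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)

    /* Plain Xen style: */
    if ( boot_cpu_data.x86_vendor == X86_VENDOR_HYGON )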

> +static void early_init_hygon(struct cpuinfo_x86 *c)
> +{
> +     early_init_amd(c);
> +}

Why the wrapper function? It can be introduced once you actually
need to do more than just the call.
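
For instance, assuming the hook-up mirrors amd.c's cpu_dev instance (the
hygon_cpu_dev name and the elided fields below are just placeholders), the
AMD routine could be installed directly:

    const struct cpu_dev hygon_cpu_dev = {
        /* other fields as per amd_cpu_dev, if applicable */
        .c_early_init = early_init_amd, /* no Hygon-specific early setup yet */
        .c_init       = init_hygon,
    };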

> +static void init_hygon(struct cpuinfo_x86 *c)
> +{
> +     u32 l, h;

As mentioned before, please prefer uint32_t et al over u32 and friends.
That applies to the entire series (as does preferring basic types over
fixed-width ones, where possible), but in this particular case it would be
even better to drop the variables altogether, using ...

> +     unsigned long long value;
> +
> +     /* Attempt to set LFENCE to be Dispatch Serialising. */
> +     if (rdmsr_safe(MSR_AMD64_DE_CFG, value))
> +             /* Unable to read.  Assume the safer default. */
> +             __clear_bit(X86_FEATURE_LFENCE_DISPATCH, c->x86_capability);
> +     if (value & AMD64_DE_CFG_LFENCE_SERIALISE)
> +             /* Dispatch Serialising. */
> +             __set_bit(X86_FEATURE_LFENCE_DISPATCH, c->x86_capability);
> +
> +     /*
> +      * If the user has explicitly chosen to disable Memory Disambiguation
> +      * to mitigiate Speculative Store Bypass, poke the appropriate MSR.
> +      */
> +     if (opt_ssbd && !rdmsr_safe(MSR_AMD64_LS_CFG, value)) {
> +             value |= 1ull << 10;
> +             wrmsr_safe(MSR_AMD64_LS_CFG, value);
> +     }
> +
> +     display_cacheinfo(c);
> +
> +     if (cpu_has(c, X86_FEATURE_ITSC)) {
> +             __set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
> +             __set_bit(X86_FEATURE_NONSTOP_TSC, c->x86_capability);
> +             __set_bit(X86_FEATURE_TSC_RELIABLE, c->x86_capability);
> +     }
> +
> +     c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
> +
> +     hygon_get_topology(c);
> +
> +     /* Hygon CPUs do not support SYSENTER outside of legacy mode. */
> +     __clear_bit(X86_FEATURE_SEP, c->x86_capability);
> +
> +     /* Hygon processors have APIC timer running in deep C states. */
> +     if (opt_arat)
> +             __set_bit(X86_FEATURE_ARAT, c->x86_capability);
> +
> +     if (cpu_has(c, X86_FEATURE_EFRO)) {
> +             rdmsr(MSR_K7_HWCR, l, h);
> +             l |= (1 << 27); /* Enable read-only APERF/MPERF bit */
> +             wrmsr(MSR_K7_HWCR, l, h);
> +     }

... "value" and rdmsrl() / wrmsrl() here instead.
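
Roughly (sketch only, reusing the "value" variable already declared above):

    if (cpu_has(c, X86_FEATURE_EFRO)) {
        rdmsrl(MSR_K7_HWCR, value);
        value |= 1 << 27; /* Enable read-only APERF/MPERF bit. */
        wrmsrl(MSR_K7_HWCR, value);
    }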

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

