
Re: [Xen-devel] [PATCH 7/6] x86/msr: Introduce msr_{set, clear}_bits() helpers



On Tue, Jun 26, 2018 at 07:22:44PM +0100, Andrew Cooper wrote:
> One reoccuring code pattern is to read an MSR, modify one or more bits,
> and write the result back.  Introduce helpers for this purpose.
> 
> First, introduce rdmsr_split() and wrmsr_split() which are tiny static inline
> wrappers which deal with the MSR value in two 32bit halves.

I think this needs some kind of explanation, since rdmsr/wrmsr already deal
with the MSR value in two 32bit halves.

> Next, construct msr_{set,clear}_bits() in terms of the {rdmsr,wrmsr}_split().
> The mask operations are deliberately performed as 32bit operations, because
> all callers pass in a constant to the mask parameter, and in all current
> cases, one of the two operations can be elided.
> 
> For MSR_IA32_PSR_L3_QOS_CFG, switch PSR_L3_QOS_CDP_ENABLE from being a bit
> position variable to being a plain number.
> 
> The resulting C is shorter, and doesn't require a temporary variable.  The
> generated ASM is also more efficient, because of avoiding the
> packing/unpacking operations.  e.g. the delta in the first hunk is from:
> 
>   b9 1b 00 00 00          mov    $0x1b,%ecx
>   0f 32                   rdmsr
>   48 c1 e2 20             shl    $0x20,%rdx
>   48 09 d0                or     %rdx,%rax
>   80 e4 f3                and    $0xf3,%ah
>   48 89 c2                mov    %rax,%rdx
>   48 c1 ea 20             shr    $0x20,%rdx
>   0f 30                   wrmsr
> 
> to:
> 
>   b9 1b 00 00 00          mov    $0x1b,%ecx
>   0f 32                   rdmsr
>   80 e4 f3                and    $0xf3,%ah
>   0f 30                   wrmsr
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Provided the intention behind introducing the _split helpers gets detailed
in the commit message.

> diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
> index 2d285d0..c5f171d 100644
> --- a/xen/arch/x86/cpu/mcheck/mce_intel.c
> +++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
> @@ -164,11 +164,8 @@ static void intel_init_thermal(struct cpuinfo_x86 *c)
>      val |= (APIC_DM_FIXED | APIC_LVT_MASKED);  /* we'll mask till we're ready */
>      apic_write(APIC_LVTTHMR, val);
>  
> -    rdmsrl(MSR_IA32_THERM_INTERRUPT, msr_content);
> -    wrmsrl(MSR_IA32_THERM_INTERRUPT, msr_content | 0x03);
> -
> -    rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
> -    wrmsrl(MSR_IA32_MISC_ENABLE, msr_content | (1ULL<<3));
> +    msr_set_bits(MSR_IA32_THERM_INTERRUPT, 0x3);
> +    msr_set_bits(MSR_IA32_MISC_ENABLE, 1 << 3);
>  
>      apic_write(APIC_LVTTHMR, val & ~APIC_LVT_MASKED);
>      if ( opt_cpu_info )
> diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
> index 09bb3f4..6619af9 100644
> --- a/xen/arch/x86/efi/efi-boot.h
> +++ b/xen/arch/x86/efi/efi-boot.h
> @@ -229,18 +229,15 @@ static void __init efi_arch_pre_exit_boot(void)
>  
>  static void __init noreturn efi_arch_post_exit_boot(void)
>  {
> -    u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;
> +    bool nx = cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX);
> +    uint64_t cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, tmp;
>  
>      efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
>      memcpy((void *)trampoline_phys, trampoline_start, cfg.size);
>  
>      /* Set system registers and transfer control. */
>      asm volatile("pushq $0\n\tpopfq");
> -    rdmsrl(MSR_EFER, efer);
> -    efer |= EFER_SCE;
> -    if ( cpuid_ext_features & cpufeat_mask(X86_FEATURE_NX) )
> -        efer |= EFER_NXE;
> -    wrmsrl(MSR_EFER, efer);
> +    msr_set_bits(MSR_EFER, EFER_SCE | (nx ? EFER_NXE : 0));

I think you can directly use cpu_has_nx?

Also isn't NX always present on amd64-capable CPUs?

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

