Re: [Xen-devel] [PATCH 6/6] x86/msr: Clean up the x2APIC MSR constants



>>> On 26.06.18 at 15:18, <andrew.cooper3@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2995,19 +2995,19 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
>                  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
>              if ( cpu_has_vmx_apic_reg_virt )
>              {
> -                for ( msr = MSR_IA32_APICBASE_MSR;
> -                      msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
> +                for ( msr = MSR_X2APIC_FIRST;
> +                      msr <= MSR_X2APIC_FIRST + 0xff; msr++ )

With the comment made on the odd upper bound here, and ...

> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -49,6 +49,16 @@
>  #define MSR_MISC_FEATURES_ENABLES       0x00000140
>  #define MISC_FEATURES_CPUID_FAULTING    (_AC(1, ULL) <<  0)
>  
> +#define MSR_X2APIC_FIRST                0x00000800
> +#define MSR_X2APIC_LAST                 0x00000bff

... with you having made clear yourself that there are non-x2APIC
MSRs in this range on at least some models, wouldn't we be better off
with a lower upper bound here, perhaps with a comment explaining the
difference from the theoretical upper bound? At the very least I'd
find it rather helpful for the open-coded 0xff above to go away;
IIRC you've said you have a clever idea there.
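
To illustrate what I mean (the exact bound and the comment wording are
just a sketch, assuming I have the architectural limits right, not a
request for these precise values):

#define MSR_X2APIC_FIRST                0x00000800
/*
 * Architecturally the x2APIC register space extends to 0xbff, but
 * only 0x800-0x83f have registers defined so far, and some models
 * place unrelated MSRs above the defined sub-range.
 */
#define MSR_X2APIC_LAST                 0x0000083f

which would then also allow the loop above to become

    for ( msr = MSR_X2APIC_FIRST; msr <= MSR_X2APIC_LAST; msr++ )

with no open-coded 0xff left.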

> +#define MSR_X2APIC_TPR                  0x00000808
> +#define MSR_X2APIC_PPR                  0x0000080a
> +#define MSR_X2APIC_EOI                  0x0000080b
> +#define MSR_X2APIC_TMICT                0x00000838
> +#define MSR_X2APIC_TMCCT                0x00000839
> +#define MSR_X2APIC_SELF                 0x0000083f

Take the opportunity and complete the set?
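
For reference, the full architectural set would be along these lines
(the mnemonics are merely a suggestion following the pattern above;
ISR, TMR, and IRR are banks of eight registers each, represented here
by their first element):

#define MSR_X2APIC_ID                   0x00000802
#define MSR_X2APIC_VERSION              0x00000803
#define MSR_X2APIC_TPR                  0x00000808
#define MSR_X2APIC_PPR                  0x0000080a
#define MSR_X2APIC_EOI                  0x0000080b
#define MSR_X2APIC_LDR                  0x0000080d
#define MSR_X2APIC_SIVR                 0x0000080f
#define MSR_X2APIC_ISR                  0x00000810 /* ... 0x817 */
#define MSR_X2APIC_TMR                  0x00000818 /* ... 0x81f */
#define MSR_X2APIC_IRR                  0x00000820 /* ... 0x827 */
#define MSR_X2APIC_ESR                  0x00000828
#define MSR_X2APIC_CMCI                 0x0000082f
#define MSR_X2APIC_ICR                  0x00000830
#define MSR_X2APIC_LVT_TIMER            0x00000832
#define MSR_X2APIC_LVT_THERMAL          0x00000833
#define MSR_X2APIC_LVT_PMI              0x00000834
#define MSR_X2APIC_LVT_LINT0            0x00000835
#define MSR_X2APIC_LVT_LINT1            0x00000836
#define MSR_X2APIC_LVT_ERROR            0x00000837
#define MSR_X2APIC_TMICT                0x00000838
#define MSR_X2APIC_TMCCT                0x00000839
#define MSR_X2APIC_TDCR                 0x0000083e
#define MSR_X2APIC_SELF                 0x0000083f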

Jan


