
Re: [PATCH 1/3] x86/cpuid: Disentangle logic for new feature leaves


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 27 Jan 2022 17:39:21 +0100
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 27 Jan 2022 16:39:36 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 27.01.2022 17:09, Andrew Cooper wrote:
> Adding a new feature leaf requires a reasonable amount of boilerplate, and
> for the patch to build, at least one feature from the new leaf needs
> defining.  This typically forces two non-trivial changes to be merged
> together.
> 
> First, have gen-cpuid.py write out some extra placeholder defines:
> 
>   #define CPUID_BITFIELD_11 bool :1, :1, lfence_dispatch:1, ...
>   #define CPUID_BITFIELD_12 uint32_t :32 /* placeholder */
>   #define CPUID_BITFIELD_13 uint32_t :32 /* placeholder */
>   #define CPUID_BITFIELD_14 uint32_t :32 /* placeholder */
>   #define CPUID_BITFIELD_15 uint32_t :32 /* placeholder */
> 
> This allows DECL_BITFIELD() to be added to struct cpuid_policy without
> requiring a XEN_CPUFEATURE() declared for the leaf.  The choice of 4 is
> arbitrary, and allows us to add more than one leaf at a time if necessary.
> 
> Second, rework generic_identify() to not use feature leaf names.
> 
> The choice of deriving the index from a feature was to avoid mismatches, but
> its correctness depends on bugs like c/s 249e0f1d8f20 ("x86/cpuid: Fix
> TSXLDTRK definition") not happening.
> 
> Switch to using FEATURESET_* just like the policy/featureset helpers.  This
> removes the cognitive burden of needing to know which leaf a specifically
> named feature resides in, and is shorter to write.  It is also far easier
> to identify as correct at a glance, given the direct correlation with the
> CPUID leaf being read.
> 
> In addition, tidy up some other bits of generic_identify()
>  * Drop leading zeros from leaf numbers.
>  * Don't use a locked update for X86_FEATURE_APERFMPERF.
>  * Rework extended_cpuid_level calculation to avoid setting it twice.
>  * Use "leaf >= $N" consistently so $N matches with the CPUID input.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

I will admit that I'm not sure yet whether I will want to break up the
KeyLocker patch just like you did with the PPIN one.

The one thing that worries me somewhat is that there's still an unvalidated
connection between FEATURESET_* and the numbers used in the public
cpufeatureset.h. But I have no good idea (yet) for a build-time check.

Jan
