
Re: [PATCH v3] x86/CPUID: shrink max_{,sub}leaf fields according to actual leaf contents


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 19 Apr 2021 14:09:18 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Mon, 19 Apr 2021 12:09:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Apr 19, 2021 at 01:46:02PM +0200, Jan Beulich wrote:
> On 19.04.2021 11:16, Roger Pau Monné wrote:
> > Adding Paul also for the Viridian part.
> > 
> > On Fri, Apr 16, 2021 at 03:16:41PM +0200, Jan Beulich wrote:
> >> Zapping leaf data for out of range leaves is just one half of it: To
> >> avoid guests (bogusly or worse) inferring information from mere leaf
> >> presence, also shrink maximum indicators such that the respective
> >> trailing entry is not all blank (unless of course it's the initial
> >> subleaf of a leaf that's not the final one).
> >>
> >> This is also in preparation of bumping the maximum basic leaf we
> >> support, to ensure guests not getting exposed related features won't
> >> observe a change in behavior.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> v3: Record the actual non-empty subleaf in p->basic.raw[0x7], rather
> >>     than subleaf 0. Re-base over Viridian leaf 40000005 addition.
> >> v2: New.
> >>
> >> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> >> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> >> @@ -8,10 +8,13 @@
> >>  #include <err.h>
> >>  
> >>  #include <xen-tools/libs.h>
> >> +#include <xen/asm/x86-defns.h>
> >>  #include <xen/asm/x86-vendors.h>
> >>  #include <xen/lib/x86/cpu-policy.h>
> >>  #include <xen/domctl.h>
> >>  
> >> +#define XSTATE_FP_SSE  (X86_XCR0_FP | X86_XCR0_SSE)
> >> +
> >>  static unsigned int nr_failures;
> >>  #define fail(fmt, ...)                          \
> >>  ({                                              \
> >> @@ -553,6 +556,103 @@ static void test_cpuid_out_of_range_clea
> >>      }
> >>  }
> >>  
> >> +static void test_cpuid_maximum_leaf_shrinking(void)
> >> +{
> >> +    static const struct test {
> >> +        const char *name;
> >> +        struct cpuid_policy p;
> >> +    } tests[] = {
> >> +        {
> >> +            .name = "basic",
> >> +            .p = {
> >> +                /* Very basic information only. */
> >> +                .basic.max_leaf = 1,
> >> +                .basic.raw_fms = 0xc2,
> >> +            },
> >> +        },
> >> +        {
> >> +            .name = "cache",
> >> +            .p = {
> >> +                /* Cache subleaves present. */
> >> +                .basic.max_leaf = 4,
> >> +                .cache.subleaf[0].type = 1,
> > 
> > In a private conversation with Andrew he raised the issue that the
> > shrinking might be overly simplistic. For example if the x2APIC
> > feature bit in leaf 1 is set then the max leaf should be at least 0xb
> > in order to be able to fetch the x2APIC ID, even if it's 0.
> 
> But in such a case the "type" field of leaf 0xb's first sub-leaf is
> going to be non-zero, isn't it?

Right, type 0 is invalid according to the Intel SDM, so you will never
be able to shrink below leaf 0xb while x2APIC is set.

I still wonder, however, whether there's any other such dependency,
where shrinking the max CPUID leaf could force us to drop features
exposed in lower leaves.
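
[For illustration only, not the actual patch code: a minimal sketch of
the kind of backwards shrink being discussed, using made-up struct and
function names. The comment captures the invariant above: leaf 0xb's
first sub-leaf keeps a non-zero type whenever x2APIC is exposed, so
such a walk can never clip below 0xb in that case.]

#include <stdint.h>
#include <stdbool.h>

/* Illustrative only: these are not the Xen structure/field names. */
struct cpuid_leaf_eg { uint32_t a, b, c, d; };

static bool leaf_is_blank(const struct cpuid_leaf_eg *l)
{
    return !(l->a | l->b | l->c | l->d);
}

/*
 * Drop trailing all-blank leaves from the advertised maximum.  Because
 * leaf 0xb's first sub-leaf has a non-zero type (ECX[15:8]) whenever
 * x2APIC is advertised in leaf 1, this loop cannot move max_leaf below
 * 0xb in that configuration.
 */
static void shrink_max_leaf_eg(const struct cpuid_leaf_eg leaves[],
                               uint32_t *max_leaf)
{
    while ( *max_leaf > 1 && leaf_is_blank(&leaves[*max_leaf]) )
        --*max_leaf;
}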

> > I also wonder if we are shrinking the leaves too much, for example we
> > should always report up to 0x40000000 (or 0x40000100) plus the Xen
> > leaves, as we never hide those and it's also documented in the public
> > headers?
> 
> Not sure I follow - I'm likely confused by you quoting 0x40000000
> and 0x40000100 rather than 0x400000nn and 0x400001nn, as elsewhere
> you suggested we may not want to clip sub-leaves there. Can you
> clarify whether you really mean only the first sub-leaves (each)
> here, and if so why you say "up to"? Furthermore for the Xen leaves
> I don't think I do excessive clipping ...

No, sorry, I was confused. What you do is fine; I would even (as said
on the previous patch) just report the max leaf unconditionally, even
if it's empty, as we are not leaking any hardware state in this case.

Thanks, Roger.
