Re: [PATCH v2 12/17] x86/CPUID: shrink max_{,sub}leaf fields according to actual leaf contents


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 15 Apr 2021 14:30:40 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Thu, 15 Apr 2021 12:30:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Apr 15, 2021 at 12:37:20PM +0200, Jan Beulich wrote:
> On 15.04.2021 11:52, Roger Pau Monné wrote:
> > On Mon, Nov 23, 2020 at 03:33:03PM +0100, Jan Beulich wrote:
> >> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> >> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> >> @@ -8,10 +8,13 @@
> >>  #include <err.h>
> >>  
> >>  #include <xen-tools/libs.h>
> >> +#include <xen/asm/x86-defns.h>
> >>  #include <xen/asm/x86-vendors.h>
> >>  #include <xen/lib/x86/cpu-policy.h>
> >>  #include <xen/domctl.h>
> >>  
> >> +#define XSTATE_FP_SSE  (X86_XCR0_FP | X86_XCR0_SSE)
> > 
> > This gets used only once...
> > 
> >> +
> >>  static unsigned int nr_failures;
> >>  #define fail(fmt, ...)                          \
> >>  ({                                              \
> >> @@ -564,6 +567,103 @@ static void test_cpuid_out_of_range_clea
> >>      }
> >>  }
> >>  
> >> +static void test_cpuid_maximum_leaf_shrinking(void)
> >> +{
> >> +    static const struct test {
> >> +        const char *name;
> >> +        struct cpuid_policy p;
> >> +    } tests[] = {
> >> +        {
> >> +            .name = "basic",
> >> +            .p = {
> >> +                /* Very basic information only. */
> >> +                .basic.max_leaf = 1,
> >> +                .basic.raw_fms = 0xc2,
> >> +            },
> >> +        },
> >> +        {
> >> +            .name = "cache",
> >> +            .p = {
> >> +                /* Cache subleaves present. */
> >> +                .basic.max_leaf = 4,
> >> +                .cache.subleaf[0].type = 1,
> >> +            },
> >> +        },
> >> +        {
> >> +            .name = "feat#0",
> >> +            .p = {
> >> +                /* Subleaf 0 only with some valid bit. */
> >> +                .basic.max_leaf = 7,
> >> +                .feat.max_subleaf = 0,
> >> +                .feat.fsgsbase = 1,
> >> +            },
> >> +        },
> >> +        {
> >> +            .name = "feat#1",
> >> +            .p = {
> >> +                /* Subleaf 1 only with some valid bit. */
> >> +                .basic.max_leaf = 7,
> >> +                .feat.max_subleaf = 1,
> >> +                .feat.avx_vnni = 1,
> >> +            },
> >> +        },
> >> +        {
> >> +            .name = "topo",
> >> +            .p = {
> >> +                /* Topology subleaves present. */
> >> +                .basic.max_leaf = 0xb,
> >> +                .topo.subleaf[0].type = 1,
> >> +            },
> >> +        },
> >> +        {
> >> +            .name = "xstate",
> >> +            .p = {
> >> +                /* First subleaf always valid (and then non-zero). */
> >> +                .basic.max_leaf = 0xd,
> >> +                .xstate.xcr0_low = XSTATE_FP_SSE,
> > 
> > ...here.
> 
> For now, yes. I'm introducing the constant because I think it wants
> using in other places too, to avoid using literal numbers. See e.g.
> 
>                 .xstate.xcr0_low = 7,
> 
> in test_cpuid_serialise_success().
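> That literal 7 could then become something like (just a sketch,
> assuming X86_XCR0_YMM is the bit making up the rest of the value)
>
>                 .xstate.xcr0_low = XSTATE_FP_SSE | X86_XCR0_YMM,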
> 
> > And then I also wonder whether this requires having any
> > specific values rather than just using ~0 or any random non-0 value.
> 
> I'm afraid I don't understand: There's no ~0 here and no random
> non-zero value - all other structure elements are left default-
> initialized.

Oh, I worded that sentence poorly, I think. What I meant to say is
that, for the purposes of the test here, you could just fill the
fields with ~0 instead of using things like XSTATE_FP_SSE?
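For instance (just a sketch, reusing the test structure from the
patch; any non-zero pattern ought to do):

    {
        .name = "xstate",
        .p = {
            .basic.max_leaf = 0xd,
            .xstate.xcr0_low = ~0u, /* any non-zero value would do here */
        },
    },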

> >> --- a/xen/arch/x86/hvm/viridian/viridian.c
> >> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> >> @@ -121,7 +121,9 @@ void cpuid_viridian_leaves(const struct
> >>      switch ( leaf )
> >>      {
> >>      case 0:
> >> -        res->a = 0x40000006; /* Maximum leaf */
> >> +        /* Maximum leaf */
> >> +        cpuid_viridian_leaves(v, 0x40000006, 0, res);
> >> +        res->a = res->a | res->b | res->c | res->d ? 0x40000006 : 0x40000004;
> > 
> > I think you would need to adjust this chunk to also take into account
> > leaf 0x40000005 now.
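> > Something along these lines maybe (only a sketch, cascading down
> > through the leaves):
> >
> >     cpuid_viridian_leaves(v, 0x40000006, 0, res);
> >     if ( res->a | res->b | res->c | res->d )
> >         res->a = 0x40000006;
> >     else
> >     {
> >         cpuid_viridian_leaves(v, 0x40000005, 0, res);
> >         res->a = (res->a | res->b | res->c | res->d) ? 0x40000005
> >                                                      : 0x40000004;
> >     }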
> 
> Hmm, yes, looks like I failed to take note that I need to re-base
> over that addition.
> 
> > I also wonder whether we should actually limit the Hyper-V leaves. I
> > think it's perfectly fine to report up to the maximum supported by
> > Xen, even if it turns out none of the advertised features are
> > present, as in: Xen supports those leaves, but none of the features
> > exposed are available.
> 
> Well, if the Viridian maintainers (I realize I failed to Cc Paul on the
> original submission) think I should leave the Viridian leaves alone
> (rather than handling them consistently with the other leaves), I can
> drop this part of the change.

As I understand it, this change is partially motivated by the wish to
avoid leaking the hardware's maximum number of leaves when not
required. With Viridian it's all software-based, so we are not leaking
any hardware details AFAICT, and hence I would be fine with just using
a fixed value.
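IOW just keeping the current fixed maximum, i.e. what the patch
removes:

    case 0:
        res->a = 0x40000006; /* Maximum leaf */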

Thanks, Roger.