
Re: [PATCH v6 10/12] libs/{light,guest}: implement xc_cpuid_apply_policy in libxl


  • To: Andrew Cooper <amc96@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 19 Jan 2022 14:37:38 +0100
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Wed, 19 Jan 2022 13:38:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jan 18, 2022 at 01:37:18PM +0000, Andrew Cooper wrote:
> On 17/01/2022 09:48, Roger Pau Monne wrote:
> > diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
> > index e1acf6648d..7dcdb35a4c 100644
> > --- a/tools/libs/light/libxl_cpuid.c
> > +++ b/tools/libs/light/libxl_cpuid.c
> > @@ -454,6 +456,41 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
> >       */
> >      bool nested_virt = info->nested_hvm.val > 0;
> >  
> > +    policy = xc_cpu_policy_init();
> > +    if (!policy) {
> > +        LOGED(ERROR, domid, "Failed to init CPU policy");
> > +        rc = ERROR_FAIL;
> > +        goto out;
> > +    }
> > +
> > +    r = xc_cpu_policy_get_domain(ctx->xch, domid, policy);
> > +    if (r) {
> > +        LOGED(ERROR, domid, "Failed to fetch domain CPU policy");
> > +        rc = ERROR_FAIL;
> > +        goto out;
> > +    }
> > +
> > +    if (restore) {
> > +        /*
> > +         * Make sure the policy is compatible with pre Xen 4.13. Note that
> > +         * newer Xen versions will pass policy data on the restore stream, so
> > +         * any adjustments done here will be superseded.
> > +         */
> > +        r = xc_cpu_policy_make_compat_4_12(ctx->xch, policy, hvm);
> 
> One hidden host policy acquisition, and ...
> 
> > +        if (r) {
> > +            LOGED(ERROR, domid, "Failed to setup compatible CPU policy");
> > +            rc = ERROR_FAIL;
> > +            goto out;
> > +        }
> > +    }
> > +
> > +    r = xc_cpu_policy_legacy_topology(ctx->xch, policy, hvm);
> 
> ... one host featureset and ... (final comment)
> 
> > +    if (r) {
> > +        LOGED(ERROR, domid, "Failed to setup CPU policy topology");
> > +        rc = ERROR_FAIL;
> > +        goto out;
> > +    }
> > +
> >      /*
> >       * For PV guests, PAE is Xen-controlled (it is the 'p' that differentiates
> >       * the xen-3.0-x86_32 and xen-3.0-x86_32p ABIs).  It is mandatory as Xen
> > @@ -466,6 +503,13 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
> >       */
> >      if (info->type == LIBXL_DOMAIN_TYPE_HVM)
> >          pae = libxl_defbool_val(info->u.hvm.pae);
> > +    rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("pae=%d", pae));
> > +    if (rc) {
> > +        LOGD(ERROR, domid, "Failed to set PAE CPUID flag");
> > +        rc = ERROR_FAIL;
> > +        goto out;
> > +    }
> > +
> >  
> >      /*
> >       * Advertising Invariant TSC to a guest means that the TSC frequency won't
> > @@ -481,14 +525,50 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
> >       */
> >      itsc = (libxl_defbool_val(info->disable_migrate) ||
> >              info->tsc_mode == LIBXL_TSC_MODE_ALWAYS_EMULATE);
> > +    rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("invtsc=%d", itsc));
> > +    if (rc) {
> > +        LOGD(ERROR, domid, "Failed to set Invariant TSC CPUID flag");
> > +        rc = ERROR_FAIL;
> > +        goto out;
> > +    }
> >  
> > -    r = xc_cpuid_apply_policy(ctx->xch, domid, restore, NULL, 0,
> > -                              pae, itsc, nested_virt, info->cpuid);
> > -    if (r)
> > -        LOGEVD(ERROR, -r, domid, "Failed to apply CPUID policy");
> > +    /* Set Nested virt CPUID bits for HVM. */
> > +    if (hvm) {
> > +        rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("vmx=%d",
> > +                                                              nested_virt));
> > +        if (rc) {
> > +            LOGD(ERROR, domid, "Failed to set VMX CPUID flag");
> > +            rc = ERROR_FAIL;
> > +            goto out;
> > +        }
> > +
> > +        rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("svm=%d",
> > +                                                              nested_virt));
> > +        if (rc) {
> > +            LOGD(ERROR, domid, "Failed to set SVM CPUID flag");
> > +            rc = ERROR_FAIL;
> > +            goto out;
> > +        }
> > +    }
> 
> libxl_cpuid_parse_config() is a lot of overhead per call, and it cannot
> really fail either, because libxl aborts on memory allocation failure
> rather than returning an error.  The invasiveness would be reduced
> substantially by having:
> 
> str = GCSPRINTF("pae=%d,invtsc=%d", pae, itsc);
> if ( hvm )
>     append GCSPRINTF("vmx=%d,svm=%d", nested_virt, nested_virt);
> 
> and a single call to libxl_cpuid_parse_config().
> 
> 
> Depending on how much we care, these need inserting at the head of the
> info->cpuid list so they get processed in the same order as before.
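
I've switched to building a single string along those lines.  For
reference, a minimal sketch of the shape I have in mind (assuming
libxl_cpuid_parse_config() is fine with a comma-separated list of
options in one string, which I still need to double check, and leaving
the head-of-list insertion aside):

    const char *s = GCSPRINTF("pae=%d,invtsc=%d", pae, itsc);

    if (hvm)
        /* Nested virt: vmx and svm are set (or cleared) together. */
        s = GCSPRINTF("%s,vmx=%d,svm=%d", s, nested_virt, nested_virt);

    /*
     * Single parse call; libxl aborts on allocation failure, so an error
     * here can only mean a malformed string.
     */
    rc = libxl_cpuid_parse_config(&info->cpuid, s);
    if (rc) {
        LOGD(ERROR, domid, "Failed to set legacy CPUID flags");
        rc = ERROR_FAIL;
        goto out;
    }
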
> 
> > +
> > +    /* Apply the bits from info->cpuid if any. */
> 
> 'if any' is stale now.
> 
> > +    r = xc_cpu_policy_apply_cpuid(ctx->xch, policy, info->cpuid, hvm);
> 
> ... and both a host and default policy.
> 
> That is a lot of overhead added behind the scenes.  It would be rather
> better to have this function obtain the host policy and pass it to all 3
> helpers.  Default policy is harder to judge, but it would avoid needing
> to pass an `hvm` boolean down into this helper.

I've fixed all the above, but haven't added a default policy parameter
to xc_cpu_policy_apply_cpuid. Let me know how strongly you feel about
that.
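
The shape I have in mind for fetching the host policy once in this
function and handing it to the helpers is roughly the below.  Sketch
only: the name/signature used to fetch the system policy and the extra
'host' parameter on the three helpers are assumptions in this snippet,
not necessarily what will end up in the next version:

    /* Mirrors how 'policy' is handled above. */
    xc_cpu_policy_t host = xc_cpu_policy_init();

    if (!host) {
        LOGED(ERROR, domid, "Failed to init host CPU policy");
        rc = ERROR_FAIL;
        goto out;
    }

    /* Fetch the host policy once up front (assumed helper and index). */
    r = xc_cpu_policy_get_system(ctx->xch, XEN_SYSCTL_cpu_policy_host, host);
    if (r) {
        LOGED(ERROR, domid, "Failed to fetch host CPU policy");
        rc = ERROR_FAIL;
        goto out;
    }

    /* ... and pass it down so the helpers stop re-fetching it. */
    if (restore) {
        r = xc_cpu_policy_make_compat_4_12(ctx->xch, policy, host, hvm);
        /* error handling as in the current patch */
    }

    r = xc_cpu_policy_legacy_topology(ctx->xch, policy, host);
    /* error handling as in the current patch */

    r = xc_cpu_policy_apply_cpuid(ctx->xch, policy, host, info->cpuid, hvm);
    /* error handling as in the current patch */

    /* 'host' gets released together with 'policy' in the out path. */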

Thanks, Roger.
