
Re: [PATCH 11/21] libs/guest: allow updating a cpu policy CPUID data


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 31 Mar 2021 14:47:42 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 31 Mar 2021 12:47:57 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Mar 30, 2021 at 05:56:35PM +0200, Jan Beulich wrote:
> On 23.03.2021 10:58, Roger Pau Monne wrote:
> > --- a/tools/libs/guest/xg_cpuid_x86.c
> > +++ b/tools/libs/guest/xg_cpuid_x86.c
> > @@ -966,3 +966,70 @@ int xc_cpu_policy_get_msr(xc_interface *xch, const xc_cpu_policy_t policy,
> >      free(msrs);
> >      return rc;
> >  }
> > +
> > +int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t policy,
> > +                               const xen_cpuid_leaf_t *leaves,
> > +                               uint32_t nr)
> > +{
> > +    unsigned int err_leaf = -1, err_subleaf = -1;
> > +    unsigned int nr_leaves, nr_msrs, i, j;
> > +    xen_cpuid_leaf_t *current;
> > +    int rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
> > +
> > +    if ( rc )
> > +    {
> > +        PERROR("Failed to obtain policy info size");
> > +        return -1;
> > +    }
> > +
> > +    current = calloc(nr_leaves, sizeof(*current));
> > +    if ( !current )
> > +    {
> > +        PERROR("Failed to allocate resources");
> > +        errno = ENOMEM;
> > +        return -1;
> > +    }
> > +
> > +    rc = xc_cpu_policy_serialise(xch, policy, current, &nr_leaves, NULL, 0);
> > +    if ( rc )
> > +        goto out;
> > +
> > +    for ( i = 0; i < nr; i++ )
> > +    {
> > +        const xen_cpuid_leaf_t *update = &leaves[i];
> > +
> > +        for ( j = 0; j < nr_leaves; j++ )
> > +            if ( current[j].leaf == update->leaf &&
> > +                 current[j].subleaf == update->subleaf )
> > +            {
> > +                /*
> > +                 * NB: cannot use an assignment because of the const vs
> > +                 * non-const difference.
> > +                 */
> > +                memcpy(&current[j], update, sizeof(*update));
> 
> I'm having trouble understanding the comment. In
> 
>     current[j] = *update;
> 
> the lvalue is xen_cpuid_leaf_t and the rvalue is const xen_cpuid_leaf_t.
> That's the usual (and permitted) arrangement afaics.

I'm sure I was doing something really stupid, and as a bonus I failed
to properly parse the error message I got from the compiler. It's now
fixed here and below.
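
For reference, a minimal standalone snippet (a stand-in struct, not the
real xen_cpuid_leaf_t definition) showing that the plain assignment Jan
suggests is accepted by the compiler, const rvalue and all:

    /* Sketch only: any plain struct behaves the same way. */
    #include <stdio.h>

    typedef struct {
        unsigned int leaf, subleaf;
        unsigned int a, b, c, d;
    } leaf_t;

    int main(void)
    {
        const leaf_t update = { .leaf = 7, .subleaf = 0, .a = 1 };
        leaf_t current[1] = { { .leaf = 7, .subleaf = 0 } };

        /*
         * lvalue is leaf_t, rvalue is const leaf_t: the qualifier on the
         * rvalue is dropped by lvalue conversion, so this is valid C.
         */
        current[0] = update;

        printf("leaf %u subleaf %u a %u\n",
               current[0].leaf, current[0].subleaf, current[0].a);
        return 0;
    }

The memcpy() would only be needed if the destination itself were
const-qualified, which isn't the case here.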

Thanks, Roger.
