
Re: [PATCH v6 08/12] libs/guest: rework xc_cpuid_xend_policy


  • To: Andrew Cooper <amc96@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 19 Jan 2022 11:45:30 +0100
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Wed, 19 Jan 2022 10:45:57 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jan 18, 2022 at 02:06:25PM +0000, Andrew Cooper wrote:
> On 17/01/2022 09:48, Roger Pau Monne wrote:
> > Rename xc_cpuid_xend_policy to xc_cpu_policy_apply_cpuid
> 
> I'm not convinced by this name.  'xend' is important because it's the
> format of the data passed, which 'cpuid' is not.

The format would be the libxc format now. Having xend here is a
layering violation IMO.

Note this function goes away in patch 11, so I'm not sure there's much
value in discussing the name of a function that's about to disappear.

> It is a horribly inefficient format, and really doesn't want to survive
> cleanup to the internals of libxl.

Even when moved to the internals of libxl the format is not exposed to
users of libxl (it's a libxl__cpuid_policy in libxl_internal.h), so we
are free to modify it at will. I would defer changes to the format to a
separate series; there's enough churn here already. My aim was to
provide a new interface while keeping the functional changes to a
minimum.

> > diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> > index e7ae133d8d..9060a2f763 100644
> > --- a/tools/libs/guest/xg_cpuid_x86.c
> > +++ b/tools/libs/guest/xg_cpuid_x86.c
> > @@ -254,144 +254,107 @@ int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
> >      return ret;
> >  }
> >  
> > -static int compare_leaves(const void *l, const void *r)
> > -{
> > -    const xen_cpuid_leaf_t *lhs = l;
> > -    const xen_cpuid_leaf_t *rhs = r;
> > -
> > -    if ( lhs->leaf != rhs->leaf )
> > -        return lhs->leaf < rhs->leaf ? -1 : 1;
> > -
> > -    if ( lhs->subleaf != rhs->subleaf )
> > -        return lhs->subleaf < rhs->subleaf ? -1 : 1;
> > -
> > -    return 0;
> > -}
> > -
> > -static xen_cpuid_leaf_t *find_leaf(
> > -    xen_cpuid_leaf_t *leaves, unsigned int nr_leaves,
> > -    const struct xc_xend_cpuid *xend)
> > -{
> > -    const xen_cpuid_leaf_t key = { xend->leaf, xend->subleaf };
> > -
> > -    return bsearch(&key, leaves, nr_leaves, sizeof(*leaves), compare_leaves);
> > -}
> > -
> > -static int xc_cpuid_xend_policy(
> > -    xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
> > +int xc_cpu_policy_apply_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
> > +                              const struct xc_xend_cpuid *cpuid, bool hvm)
> >  {
> >      int rc;
> > -    xc_dominfo_t di;
> > -    unsigned int nr_leaves, nr_msrs;
> > -    uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
> > -    /*
> > -     * Three full policies.  The host, default for the domain type,
> > -     * and domain current.
> > -     */
> > -    xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
> > -    unsigned int nr_host, nr_def, nr_cur;
> > +    xc_cpu_policy_t *host = NULL, *def = NULL;
> >  
> > -    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
> > -         di.domid != domid )
> > -    {
> > -        ERROR("Failed to obtain d%d info", domid);
> > -        rc = -ESRCH;
> > -        goto fail;
> > -    }
> > -
> > -    rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
> > -    if ( rc )
> > -    {
> > -        PERROR("Failed to obtain policy info size");
> > -        rc = -errno;
> > -        goto fail;
> > -    }
> > -
> > -    rc = -ENOMEM;
> > -    if ( (host = calloc(nr_leaves, sizeof(*host))) == NULL ||
> > -         (def  = calloc(nr_leaves, sizeof(*def)))  == NULL ||
> > -         (cur  = calloc(nr_leaves, sizeof(*cur)))  == NULL )
> > -    {
> > -        ERROR("Unable to allocate memory for %u CPUID leaves", nr_leaves);
> > -        goto fail;
> > -    }
> > -
> > -    /* Get the domain's current policy. */
> > -    nr_msrs = 0;
> > -    nr_cur = nr_leaves;
> > -    rc = get_domain_cpu_policy(xch, domid, &nr_cur, cur, &nr_msrs, NULL);
> > -    if ( rc )
> > +    host = xc_cpu_policy_init();
> > +    def = xc_cpu_policy_init();
> > +    if ( !host || !def )
> >      {
> > -        PERROR("Failed to obtain d%d current policy", domid);
> > -        rc = -errno;
> > -        goto fail;
> > +        PERROR("Failed to init policies");
> > +        rc = -ENOMEM;
> > +        goto out;
> >      }
> >  
> >      /* Get the domain type's default policy. */
> > -    nr_msrs = 0;
> > -    nr_def = nr_leaves;
> > -    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
> > +    rc = xc_cpu_policy_get_system(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
> >                                             : XEN_SYSCTL_cpu_policy_pv_default,
> > -                               &nr_def, def, &nr_msrs, NULL);
> > +                                  def);
> >      if ( rc )
> >      {
> > -        PERROR("Failed to obtain %s def policy", di.hvm ? "hvm" : "pv");
> > -        rc = -errno;
> > -        goto fail;
> > +        PERROR("Failed to obtain %s def policy", hvm ? "hvm" : "pv");
> > +        goto out;
> >      }
> >  
> >      /* Get the host policy. */
> > -    nr_msrs = 0;
> > -    nr_host = nr_leaves;
> > -    rc = get_system_cpu_policy(xch, XEN_SYSCTL_cpu_policy_host,
> > -                               &nr_host, host, &nr_msrs, NULL);
> > +    rc = xc_cpu_policy_get_system(xch, XEN_SYSCTL_cpu_policy_host, host);
> >      if ( rc )
> >      {
> >          PERROR("Failed to obtain host policy");
> > -        rc = -errno;
> > -        goto fail;
> > +        goto out;
> >      }
> >  
> >      rc = -EINVAL;
> > -    for ( ; xend->leaf != XEN_CPUID_INPUT_UNUSED; ++xend )
> > +    for ( ; cpuid->leaf != XEN_CPUID_INPUT_UNUSED; ++cpuid )
> >      {
> > -        xen_cpuid_leaf_t *cur_leaf = find_leaf(cur, nr_cur, xend);
> > -        const xen_cpuid_leaf_t *def_leaf = find_leaf(def, nr_def, xend);
> > -        const xen_cpuid_leaf_t *host_leaf = find_leaf(host, nr_host, xend);
> > +        xen_cpuid_leaf_t cur_leaf;
> > +        xen_cpuid_leaf_t def_leaf;
> > +        xen_cpuid_leaf_t host_leaf;
> >  
> > -        if ( cur_leaf == NULL || def_leaf == NULL || host_leaf == NULL )
> > +        rc = xc_cpu_policy_get_cpuid(xch, policy, cpuid->leaf, cpuid->subleaf,
> > +                                     &cur_leaf);
> 
> This is a crazy increase in data shuffling.
> 
> Use x86_cpuid_get_leaf() from patch 2, which *is* find_leaf() for
> objects rather than lists, and removes the need to further-shuffle the
> data later now that you're not modifying cur in place.

This function will get moved into libxl in patch 11, and then it would
be a layering violation to call x86_cpuid_get_leaf, so it needs to use
the xc wrapper.
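
For reference, the removed helpers were just a bsearch over the flat
leaf array. A minimal standalone sketch of that pattern is below; note
the xen_cpuid_leaf_t here is a simplified stand-in for the real type in
the Xen public headers, for illustration only:

```c
/* Standalone sketch of the removed compare_leaves()/find_leaf() pattern.
 * NOTE: this xen_cpuid_leaf_t is a simplified stand-in for the real type
 * in the Xen public headers, for illustration only. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t leaf, subleaf;
    uint32_t a, b, c, d;
} xen_cpuid_leaf_t;

/* Order leaves by (leaf, subleaf), as the removed compare_leaves() did. */
int compare_leaves(const void *l, const void *r)
{
    const xen_cpuid_leaf_t *lhs = l;
    const xen_cpuid_leaf_t *rhs = r;

    if ( lhs->leaf != rhs->leaf )
        return lhs->leaf < rhs->leaf ? -1 : 1;

    if ( lhs->subleaf != rhs->subleaf )
        return lhs->subleaf < rhs->subleaf ? -1 : 1;

    return 0;
}

/* Binary-search a (leaf, subleaf)-sorted array; returns NULL if absent. */
xen_cpuid_leaf_t *find_leaf(xen_cpuid_leaf_t *leaves, size_t nr_leaves,
                            uint32_t leaf, uint32_t subleaf)
{
    const xen_cpuid_leaf_t key = { .leaf = leaf, .subleaf = subleaf };

    return bsearch(&key, leaves, nr_leaves, sizeof(*leaves), compare_leaves);
}
```

bsearch() of course only works because the serialised leaf array comes
back sorted by (leaf, subleaf); the removed code relied on that ordering.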

> It also removes the churn introduced by changing the indirection of
> these variables.
> 
> It also removes all callers of xc_cpu_policy_get_cpuid(), which don't
> have an obvious purpose to begin with.  Relatedly,
> xc_cpu_policy_get_msr() has no users at all, that I can see.

This was introduced in order to provide a sensible interface. It
doesn't make sense to allow getting CPUID leaves but not MSRs from a
policy, IMO. I would expect other toolstacks, or libxl in the future,
to make use of it.

If you don't think that's enough justification we can leave that patch
aside.

Thanks, Roger.