
Re: [Xen-devel] [PATCH v2 7/7] tools: add tools support for Intel CAT



On Thu, Mar 19, 2015 at 12:51:19PM +0000, Ian Campbell wrote:
> On Thu, 2015-03-19 at 18:41 +0800, Chao Peng wrote:
> > These are the xc/xl changes to support Intel Cache Allocation
> > Technology (CAT). Two commands are introduced:
> > - xl psr-cat-cbm-set [-s socket] <domain> <cbm>
> >   Set cache capacity bitmasks (CBM) for a domain.
> > - xl psr-cat-show <domain>
> >   Show Cache Allocation Technology information.
> > 
> > Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
> > ---
> >  tools/libxc/include/xenctrl.h |  15 +++
> >  tools/libxc/xc_psr.c          |  74 ++++++++++++++
> 
> At the libxc level this looks like a reasonable wrapping of the
> hypercall interface, so if the hypervisor folks are happy with that then
> I'm happy with this.
> 
> >  tools/libxl/libxl.h           |  18 ++++
> 
> At the libxl level things seem to be rather opaque to me, i.e. what is
> domain_data? what does it contain? does it have any more semantics than
> a 64-bit number?

The 64-bit number currently holds the cache bitmask (CBM) for the domain.
In the future, the same field may instead represent memory bandwidth
throttling for the domain, so a neutral name is used here.
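
For example, a caller today passes an L3 CBM through that neutral
parameter (usage sketch only, assuming ctx is an initialized libxl_ctx
and domid is a valid domain id):

    /* Sketch: program bits 0-3 of the L3 CBM for a domain on socket 0. */
    uint64_t data = 0xf;                       /* today: an L3 CBM */
    int rc = libxl_psr_cat_set_domain_data(ctx, domid,
                                           LIBXL_PSR_CAT_TYPE_L3_CBM,
                                           0 /* target socket */, data);
    if (rc)
        fprintf(stderr, "setting L3 CBM failed\n");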

> 
> What future new values might there be for libxl_psr_cat_type?

As far as I can see, besides L3_CBM, libxl_psr_cat_type may gain L2_CBM
or MEM_BANDWIDTH_THROTTLING in the future.
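
So the generated C enum might eventually grow along these lines (purely
speculative sketch; only the L3_CBM value exists in this series, and the
future names and values are illustrative, not decided):

    /* Speculative: only LIBXL_PSR_CAT_TYPE_L3_CBM exists today. */
    typedef enum libxl_psr_cat_type {
        LIBXL_PSR_CAT_TYPE_L3_CBM                   = 1,
        LIBXL_PSR_CAT_TYPE_L2_CBM                   = 2, /* possible future */
        LIBXL_PSR_CAT_TYPE_MEM_BANDWIDTH_THROTTLING = 3, /* possible future */
    } libxl_psr_cat_type;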

> 
> >  tools/libxl/libxl_psr.c       | 107 ++++++++++++++++++--
> >  tools/libxl/libxl_types.idl   |   4 +
> >  tools/libxl/xl.h              |   4 +
> >  tools/libxl/xl_cmdimpl.c      | 225 ++++++++++++++++++++++++++++++++++++++++--
> >  tools/libxl/xl_cmdtable.c     |  13 +++
> 
> You've missed updating the manpage.

Yep, thanks.

> 
> >  8 files changed, 445 insertions(+), 15 deletions(-)
> > 
> 
> > +#ifdef LIBXL_HAVE_PSR_CAT
> > +int libxl_psr_cat_set_domain_data(libxl_ctx *ctx, uint32_t domid,
> > +                                  libxl_psr_cat_type type, uint32_t target,
> > +                                  uint64_t data);
> > +int libxl_psr_cat_get_domain_data(libxl_ctx *ctx, uint32_t domid,
> > +                                  libxl_psr_cat_type type, uint32_t target,
> > +                                  uint64_t *data_r);
> > +int libxl_psr_cat_get_l3_info(libxl_ctx *ctx, uint32_t socket,
> > +                              uint32_t *cos_max_r, uint32_t *cbm_len_r);
> > +#endif
> > +
> 
> 
> > [...]
> > +static uint32_t get_phy_socket_num(void)
> 
> I think this would be better named "count_physical_sockets" or
> something, otherwise I might think it takes a lock.
> 
> > +{
> > +    int rc;
> > +    uint32_t nr_sockets;
> 
>        uint32_t nr_sockets = 0;
> 
> > +    libxl_physinfo info;
> > +    libxl_physinfo_init(&info);
> > +    rc = libxl_get_physinfo(ctx, &info);
> 
> Then:
>        if (!rc)
>            nr_sockets = info.nr_cpus / info.threads_per_core /
>                         info.cores_per_socket;
>       
>        libxl_physinfo_dispose(&info);
>        return nr_sockets;
> 
> means you only have one return path to do clean up on.

Thanks, I will adopt your suggestion.
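
For the record, with your suggestions folded in, the helper would look
roughly like this (sketch; ctx is the global libxl_ctx used elsewhere in
xl_cmdimpl.c):

    static uint32_t count_physical_sockets(void)
    {
        int rc;
        uint32_t nr_sockets = 0;
        libxl_physinfo info;

        libxl_physinfo_init(&info);
        rc = libxl_get_physinfo(ctx, &info);
        if (!rc)
            nr_sockets = info.nr_cpus / info.threads_per_core /
                         info.cores_per_socket;

        /* Single cleanup/return path, as suggested. */
        libxl_physinfo_dispose(&info);
        return nr_sockets;
    }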

> 
> Also, does this function need to be in an ifdef, since it isn't called
> unless LIBXL_HAVE_... is set? (IOW: does this compile for ARM)

libxl_get_physinfo is available on all architectures, so I think this
compiles for ARM.

> 
> 
> > @@ -8057,6 +8070,202 @@ int main_psr_cmt_show(int argc, char **argv)
> >  }
> >  #endif
> >  
> > +#ifdef LIBXL_HAVE_PSR_CAT
> > +static int psr_cat_l3_cbm_set(uint32_t domid, uint32_t socket, uint64_t cbm)
> > +{
> > +    int rc;
> > +    uint32_t i, nr_sockets;
> > +
> > +    if (socket != ALL_SOCKETS) {
> > +        return libxl_psr_cat_set_domain_data(ctx, domid,
> > +                                             LIBXL_PSR_CAT_TYPE_L3_CBM,
> > +                                             socket, cbm);
> > +    } else {
> > +        nr_sockets = get_phy_socket_num();
> 
> I wonder if the libxl API ought to allow for an "all sockets" argument
> and then do all this internally instead of making the callers do it?

I can, of course. My only concern is the API's semantic consistency
going forward: if we later support L2_CBM, the "target" may be a CPU
rather than a socket, so would the same sentinel value still mean "all
CPUs"? If that is not a problem, then I will do it.
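
If we do move the loop into libxl, the handling might look like this
(sketch only; LIBXL_PSR_TARGET_ALL and set_one_target() are hypothetical
names, not part of this series, and the socket counting would live
inside libxl):

    /* Hypothetical sketch of an "all targets" sentinel inside libxl. */
    int libxl_psr_cat_set_domain_data(libxl_ctx *ctx, uint32_t domid,
                                      libxl_psr_cat_type type,
                                      uint32_t target, uint64_t data)
    {
        uint32_t i, nr_targets;

        if (target != LIBXL_PSR_TARGET_ALL)      /* hypothetical sentinel */
            return set_one_target(ctx, domid, type, target, data);

        nr_targets = count_physical_sockets();   /* sockets, for L3_CBM */
        for (i = 0; i < nr_targets; i++) {
            int rc = set_one_target(ctx, domid, type, i, data);
            if (rc)
                return rc;
        }
        return 0;
    }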

> 
> > +        if (nr_sockets == 0) {
> > +            fprintf(stderr, "Failed getting physinfo\n");
> > +            return -1;
> > +        }
> > +        for (i = 0; i < nr_sockets; i++) {
> > +            rc = libxl_psr_cat_set_domain_data(ctx, domid,
> > +                                               LIBXL_PSR_CAT_TYPE_L3_CBM,
> > +                                               i, cbm);
> > +    /* Total L3 cache size */
> > +    printf("%-46s", "Total L3 Cache Size");
> 
> How wide are these lines going to be? Can we try and keep it to <80
> columns? Perhaps you could include an example of the output of each of
> the show functions in the commit message?

As the number of sockets is variable, it's hard to strictly guarantee
<80 columns if all sockets are displayed on one line. Perhaps displaying
socket by socket is the right direction.
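
For example, a per-socket layout like this (sketch; exact wording and
field widths to be decided) stays comfortably under 80 columns:

    /* Sketch: one line per socket instead of one column per socket. */
    for (socketid = 0; socketid < nr_sockets; socketid++) {
        rc = libxl_psr_cmt_get_l3_cache_size(ctx, socketid, &l3_cache_size);
        if (rc < 0) {
            fprintf(stderr, "Failed to get L3 cache size for socket:%d\n",
                    socketid);
            return -1;
        }
        printf("Socket %u: Total L3 Cache Size: %u KB\n",
               socketid, l3_cache_size);
    }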

Chao
> 
> > +    for (socketid = 0; socketid < nr_sockets; socketid++) {
> > +        rc = libxl_psr_cmt_get_l3_cache_size(ctx, socketid, &l3_cache_size);
> > +        if (rc < 0) {
> > +            fprintf(stderr, "Failed to get system l3 cache size for socket:%d\n",
> > +                            socketid);
> > +            return -1;
> > +        }
> > +        printf("%13u KB", l3_cache_size);
> > +    }
> > +    printf("\n");
> > +
