Re: [Xen-devel] [PATCH v2] xentop: add support for qdisks
On Wed, 2015-03-11 at 11:51 -0600, Charles Arnold wrote:
> Now that Xen uses qdisks by default and qemu does not write out
> statistics to sysfs this patch queries the QMP for disk statistics.
>
> This patch depends on libyajl for parsing statistics returned from
> QMP. The runtime requires libyajl 2.0.3 or newer for required bug
> fixes in yajl_tree_parse().
Elsewhere we currently support even libyajl1, which is all that
configure tests for.
You say bug fixes here, but the code comment says:
/* Use libyajl version 2.1.x or newer for the tree parser feature with bug fixes */
which suggests it perhaps didn't even exist in earlier versions. Also I
note the quoted versions differ, FWIW.
Whether the interface exists (even in buggy form) or not in older
versions is important because if it doesn't exist then that would be a
build failure, which we would want to avoid.
Whereas a functional failure would perhaps be tolerable. However, given
the existing HAVE_YAJL_YAJL_VERSION_H define I think the code could
easily check if the YAJL library is good enough at compile time and stub
itself out -- i.e. not report qdisk stats if the yajl doesn't do the
job.
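Something like this is what I have in mind (an untested sketch only; the
collect_qdisk_stats name is made up and the 2.0.3 cut-off is just the one
quoted in your changelog):

#ifdef HAVE_YAJL_YAJL_VERSION_H
#include <yajl/yajl_version.h>
#endif

/* Only report qdisk stats when the yajl tree parser is new enough;
 * otherwise compile to a stub so the build still succeeds. */
#if defined(YAJL_VERSION) && YAJL_VERSION >= 20003
#define HAVE_USABLE_YAJL_TREE 1
#endif

static void collect_qdisk_stats(xenstat_node *node)
{
#ifdef HAVE_USABLE_YAJL_TREE
        /* ... talk QMP and parse the reply with yajl_tree_parse() ... */
#else
        (void)node;     /* old yajl: just report no qdisk statistics */
#endif
}

That way an old yajl simply means no qdisk statistics rather than a build
failure.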
My second concern here is with the use of /var/run/xen/qmp-libxl-%i from
outside of libxl. I can't remember if qemu is safe against multiple
users of the socket. ISTR asking Anthony this before, but I don't recall
the answer, sorry :-(
Even if it is strictly speaking ok it seems a bit warty to do it, but
perhaps for an in-tree user like libxenstat it is tolerable.
Alternatively we could (relatively) easily arrange for there to be a
second qmp-libxenstat-%i socket, assuming the qemu overhead of a second
one is sane.
Would it be possible to include somewhere, either in a code comment or
in the changelog, an example of the JSON response to the QMP commands?
(I'm also consistently surprised by the lack of a qmp client library,
but that's not your fault!)
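For reference, this is roughly what I believe a "query-blockstats" reply
looks like and how the yajl 2.x tree API would walk it; the field names are
from memory of the QEMU documentation, so please double check them against
the real output:

#include <stdio.h>
#include <yajl/yajl_tree.h>

/* Reply shape (abridged):
 *   { "return": [ { "device": "xvda",
 *                   "stats": { "rd_bytes": 123, "wr_bytes": 456,
 *                              "rd_operations": 7, "wr_operations": 8 } },
 *                 ... ] }
 */
static void dump_blockstats(const char *buf)
{
        char errbuf[256];
        const char *ret_path[] = { "return", NULL };
        const char *dev_path[] = { "device", NULL };
        const char *rd_path[]  = { "stats", "rd_bytes", NULL };
        yajl_val root, list;
        size_t i;

        root = yajl_tree_parse(buf, errbuf, sizeof(errbuf));
        if (root == NULL) {
                fprintf(stderr, "QMP parse error: %s\n", errbuf);
                return;
        }
        list = yajl_tree_get(root, ret_path, yajl_t_array);
        for (i = 0; list && i < YAJL_GET_ARRAY(list)->len; i++) {
                yajl_val disk = YAJL_GET_ARRAY(list)->values[i];
                yajl_val dev  = yajl_tree_get(disk, dev_path, yajl_t_string);
                yajl_val rd   = yajl_tree_get(disk, rd_path, yajl_t_number);
                if (dev && rd)
                        printf("%s: rd_bytes=%lld\n",
                               YAJL_GET_STRING(dev), YAJL_GET_INTEGER(rd));
        }
        yajl_tree_free(root);
}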
> diff --git a/tools/xenstat/libxenstat/src/xenstat.c b/tools/xenstat/libxenstat/src/xenstat.c
> index 8072a90..f3847be 100644
> --- a/tools/xenstat/libxenstat/src/xenstat.c
> +++ b/tools/xenstat/libxenstat/src/xenstat.c
> @@ -657,6 +657,24 @@ static void xenstat_uninit_xen_version(xenstat_handle * handle)
> * VBD functions
> */
>
> +/* Save VBD information */
> +xenstat_vbd *xenstat_save_vbd(xenstat_domain *domain, xenstat_vbd *vbd)
> +{
> +        if (domain->vbds == NULL) {
> +                domain->num_vbds = 1;
> +                domain->vbds = malloc(sizeof(xenstat_vbd));
> +        } else {
> +                domain->num_vbds++;
> +                domain->vbds = realloc(domain->vbds,
> +                                       domain->num_vbds *
> +                                       sizeof(xenstat_vbd));
> +        }
FYI realloc handles the old pointer being NULL just fine, so you don't
need to special case that so long as num_vbds starts out initialised to
0.
Also, if realloc returns NULL then you need to have remembered the old
value to free it, else it gets leaked.
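i.e. something along these lines (untested, just to illustrate the pattern;
I've assumed the function also copies *vbd into the new slot):

        xenstat_vbd *new_vbds;

        new_vbds = realloc(domain->vbds,
                           (domain->num_vbds + 1) * sizeof(xenstat_vbd));
        if (new_vbds == NULL)
                return NULL;    /* domain->vbds still points at the old array */
        domain->vbds = new_vbds;
        domain->vbds[domain->num_vbds] = *vbd;
        domain->num_vbds++;
        return &domain->vbds[domain->num_vbds - 1];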
> @@ -477,18 +480,10 @@ int xenstat_collect_vbds(xenstat_node * node)
>                         continue;
>                 }
>
> -                if (domain->vbds == NULL) {
> -                        domain->num_vbds = 1;
> -                        domain->vbds = malloc(sizeof(xenstat_vbd));
> -                } else {
> -                        domain->num_vbds++;
> -                        domain->vbds = realloc(domain->vbds,
> -                                               domain->num_vbds *
> -                                               sizeof(xenstat_vbd));
> -                }
Oh, I see my comments above were actually on the old code you were
moving.
> +        /* Use libyajl version 2.1.x or newer for the tree parser feature with bug fixes */
> +        if ((info = yajl_tree_parse((char *)qmp_stats, errbuf, sizeof(errbuf))) == NULL) {
You don't want to log something using errbuf? If not then it may as well
be as small as possible.
> +        /* Use libyajl version 2.0.3 or newer for the tree parser feature */
> +        if ((info = yajl_tree_parse((char *)stats_buf, errbuf, sizeof(errbuf))) == NULL)
Likewise.
> +/* Get all the active domains */
actually only up to 1024 of them ;-)
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel