
[Xen-devel] [PATCH 4/4] xl: 'xl info' print outstanding claims if enabled (claim_mode=1 in xl.conf)



This patch makes 'xl info' report the number of currently outstanding
pages claimed across all domains. This is a single global value that
influences the hypervisor's MM system.

When a claim call is made, a reservation for a specific number of pages
is set for the domain and a global value is incremented by the same
amount. The global value is then reduced as the domain's memory is
populated and eventually reaches zero. The toolstack can also set the
domain's claim to zero, which cancels the reservation and decrements
the global value by the portion of the claim that has not yet been
satisfied.
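
A rough sketch of that lifecycle from the toolstack's point of view
(illustrative only, not part of this patch; the libxc calls are the
ones added earlier in this series, error handling omitted):

    #include <xenctrl.h>

    static void claim_lifecycle_sketch(xc_interface *xch, uint32_t domid,
                                       unsigned long nr_pages)
    {
        /* Stake a claim: reserve nr_pages for the domain and bump the
         * global outstanding-claims counter by the same amount. */
        xc_domain_claim_pages(xch, domid, nr_pages);

        /* ... populate the domain's memory; the global counter shrinks
         * as pages are actually allocated to the domain ... */

        /* Read the global counter (in pages) that 'xl info' reports. */
        long outstanding = xc_domain_get_outstanding_pages(xch);
        (void)outstanding;

        /* Cancel the remaining claim: the reservation is dropped and the
         * global counter is decremented by the unsatisfied portion. */
        xc_domain_claim_pages(xch, domid, 0);
    }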

If the reservation cannot be met, guest creation fails immediately
instead of failing only after the seconds or minutes (depending on the
size of the guest) the toolstack would otherwise spend populating memory.

See patch: "xl: Implement XENMEM_claim_pages support via 'claim_mode'
global config" for details on how it is implemented.

The value fluctuates frequently, so it is already stale by the time it
reaches user-space. It is nevertheless useful for diagnostic purposes.

It is only printed when the global "claim_mode" option in xl.conf(5)
is enabled.
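
For example, with claims enabled in /etc/xen/xl.conf:

    claim_mode=1

'xl info' will include an extra line of the form (the value, in MB, is
illustrative):

    outstanding_claims     : 128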

[v1: s/unclaimed/outstanding/]
[v2: Made libxl_get_claiminfo return just MemKB suggested by Ian]
[v3: Made libxl_get_claiminfo return MemMB to conform to the other values
printed]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
 tools/libxl/libxl.c      | 12 ++++++++++++
 tools/libxl/libxl.h      |  1 +
 tools/libxl/xl_cmdimpl.c | 24 ++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index f9ed235..e9942c5 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4054,6 +4054,18 @@ libxl_numainfo *libxl_get_numainfo(libxl_ctx *ctx, int *nr)
     return ret;
 }
 
+uint64_t libxl_get_claiminfo(libxl_ctx *ctx)
+{
+    long l;
+
+    l = xc_domain_get_outstanding_pages(ctx->xch);
+    if (l < 0)
+        return l;
+
+    /* In MB */
+    return (l >> 8);
+}
+
 const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
 {
     union {
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index c4ad58b..5dab24b 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -579,6 +579,7 @@ int libxl_wait_for_free_memory(libxl_ctx *ctx, uint32_t domid, uint32_t memory_k
 /* wait for the memory target of a domain to be reached */
 int libxl_wait_for_memory_target(libxl_ctx *ctx, uint32_t domid, int wait_secs);
 
+uint64_t libxl_get_claiminfo(libxl_ctx *ctx);
 int libxl_vncviewer_exec(libxl_ctx *ctx, uint32_t domid, int autopass);
 int libxl_console_exec(libxl_ctx *ctx, uint32_t domid, int cons_num, libxl_console_type type);
 /* libxl_primary_console_exec finds the domid and console number
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 85fc401..ce3a8e1 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -4637,6 +4637,29 @@ static void output_topologyinfo(void)
     return;
 }
 
+static void output_claim(void)
+{
+    long l;
+
+    /*
+     * Note that the xl.c (which calls us) has already read from the
+     * global configuration the 'claim_mode' value.
+     */
+    if (!global_claim_mode)
+        return;
+
+    l = libxl_get_claiminfo(ctx);
+    if (l < 0) {
+        fprintf(stderr, "libxl_get_claiminfo failed. errno: %d (%s)\n",
+                errno, strerror(errno));
+        return;
+    }
+
+    printf("outstanding_claims     : %ld\n", l);
+
+    return;
+}
+
 static void print_info(int numa)
 {
     output_nodeinfo();
@@ -4647,6 +4670,7 @@ static void print_info(int numa)
         output_topologyinfo();
         output_numainfo();
     }
+    output_claim();
 
     output_xeninfo();
 
-- 
1.8.0.2

