
[Xen-devel] [PATCH 2/3] xen/x86: add dom0 memory sizing variants



Today the memory size of dom0 can be specified only in terms of bytes
(either as an absolute value or as "host-mem - value"). When dom0
shouldn't be auto-ballooned, this nearly always requires manually
adapting the Xen boot parameters to reflect the actual host memory
size.

Add more possibilities to specify memory sizes. Today we have:

dom0_mem= List of ( min:<size> | max:<size> | <size> )

with <size> being a positive or negative size value (e.g. 1G).
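
For example (values are illustrative), today one could specify:

  dom0_mem=min:1G,max:4G   (absolute lower and upper bounds)
  dom0_mem=-2G             (host memory minus 2G)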

Modify that to:

dom0_mem= List of ( min:<sz> | max:<sz> | <sz> )
<sz>: <size> | [<size>+]<frac>%
<frac>: integer value < 100

With the following semantics:

<frac>% specifies a fraction of the host memory size in percent.
<size>+<frac>% specifies that fraction of host memory plus a fixed
offset of <size>.

So <sz> being 1G+25% on a 256G host would result in 65G.
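
As a rough, self-contained illustration of that arithmetic (this is
not the Xen parser, just a sketch with hard-coded example values):

  /* Sketch: <size>+<frac>% on a 256G host, i.e. 1G + 25% of 256G. */
  #include <stdio.h>

  int main(void)
  {
      unsigned long long host = 256ULL << 30; /* host memory in bytes */
      unsigned long long size = 1ULL << 30;   /* fixed offset <size>: 1G */
      unsigned int frac = 25;                 /* <frac>: percent of host memory */
      unsigned long long amt = size + host * frac / 100;

      printf("dom0_mem = %lluG\n", amt >> 30); /* prints "dom0_mem = 65G" */
      return 0;
  }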

Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
---
 docs/misc/xen-command-line.markdown | 21 ++++++++++++++-------
 xen/arch/x86/dom0_build.c           | 36 +++++++++++++++++++++++++++++++-----
 2 files changed, 45 insertions(+), 12 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 9028bcde2e..e471d32404 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -759,17 +759,17 @@ Set the amount of memory for the initial domain (dom0). It must be
 greater than zero. This parameter is required.
 
 ### dom0\_mem (x86)
-> `= List of ( min:<size> | max:<size> | <size> )`
-
+> `= List of ( min:<sz> | max:<sz> | <sz> )`
+ 
 Set the amount of memory for the initial domain (dom0). If a size is
 positive, it represents an absolute value.  If a size is negative, it
 is subtracted from the total available memory.
 
-* `<size>` specifies the exact amount of memory.
-* `min:<size>` specifies the minimum amount of memory.
-* `max:<size>` specifies the maximum amount of memory.
+* `<sz>` specifies the exact amount of memory.
+* `min:<sz>` specifies the minimum amount of memory.
+* `max:<sz>` specifies the maximum amount of memory.
 
-If `<size>` is not specified, the default is all the available memory
+If `<sz>` is not specified, the default is all the available memory
 minus some reserve.  The reserve is 1/16 of the available memory or
 128 MB (whichever is smaller).
 
@@ -777,13 +777,20 @@ The amount of memory will be at least the minimum but never more than
 the maximum (i.e., `max` overrides the `min` option).  If there isn't
 enough memory then as much as possible is allocated.
 
-`max:<size>` also sets the maximum reservation (the maximum amount of
+`max:<sz>` also sets the maximum reservation (the maximum amount of
 memory dom0 can balloon up to).  If this is omitted then the maximum
 reservation is unlimited.
 
 For example, to set dom0's initial memory allocation to 512MB but
 allow it to balloon up as far as 1GB use `dom0_mem=512M,max:1G`
 
+> `<sz>` is: `<size> | [<size>+]<frac>%`
+> `<frac>` is an integer < 100
+
+* `<frac>` specifies a fraction of host memory size in percent.
+
+So `<sz>` being `1G+25%` on a 256 GB host would result in 65 GB.
+    
 If you use this option then it is highly recommended that you disable
 any dom0 autoballooning feature present in your toolstack. See the
 _xl.conf(5)_ man page or [Xen Best
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index e34022a9b8..6929b204ef 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -24,15 +24,20 @@ static long __initdata dom0_nrpages;
 static long __initdata dom0_min_nrpages;
 static long __initdata dom0_max_nrpages = LONG_MAX;
 
-static char __initdata dom0_mem_par[64];
+static char __initdata dom0_mem_par[256];
 
 /*
  * dom0_mem=[min:<min_amt>,][max:<max_amt>,][<amt>]
- * 
+ *
  * <min_amt>: The minimum amount of memory which should be allocated for dom0.
  * <max_amt>: The maximum amount of memory which should be allocated for dom0.
  * <amt>:     The precise amount of memory to allocate for dom0.
- * 
+ *
+ * The format of <min_amt>, <max_amt> and <amt> is as follows:
+ * <size> | <frac>% | <size>+<frac>%
+ * <size> is a size value like 1G (1 GByte), <frac> is percentage of host
+ * memory (so 1G+10% means 10 percent of host memory + 1 GByte).
+ *
  * Notes:
  *  1. <amt> is clamped from below by <min_amt> and from above by available
  *     memory and <max_amt>
@@ -41,7 +46,7 @@ static char __initdata dom0_mem_par[64];
  *  4. If <amt> is not specified, it is calculated as follows:
  *     "All of memory is allocated to domain 0, minus 1/16th which is reserved
  *      for uses such as DMA buffers (the reservation is clamped to 128MB)."
- * 
+ *
  * Each value can be specified as positive or negative:
  *  If +ve: The specified amount is an absolute value.
  *  If -ve: The specified amount is subtracted from total available memory.
@@ -50,7 +55,28 @@ static unsigned long __init parse_amt(const char *s, const char **ps,
                                       unsigned long avail)
 {
     unsigned int minus = (*s == '-') ? 1 : 0;
-    unsigned long pages = parse_size_and_unit(s + minus, ps) >> PAGE_SHIFT;
+    unsigned long val, pages = 0;
+
+    /* Avoid accessing s[-1] in case value starts with '%'. */
+    if ( *s == '%' )
+        return 0;
+
+    s += minus;
+    while ( isdigit(*s) )
+    {
+        val = parse_size_and_unit(s, ps);
+        s = *ps;
+        if ( *s == '%' && isdigit(*(s - 1)) && val < (100 << 10) )
+        {
+            pages += (val >> 10) * avail / 100;
+            s++;
+        }
+        else
+            pages += val >> PAGE_SHIFT;
+        if ( *s == '+' )
+            s++;
+    }
+    *ps = s;
 
     /* Negative specification means "all memory - specified amount". */
     return minus ? avail - pages : pages;
-- 
2.16.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
