
Re: [Xen-devel] [PATCH] x86: make the dom0_max_vcpus option more flexible



On 04/05/12 17:26, David Vrabel wrote:
> On 04/05/12 17:12, Jan Beulich wrote:
>>>>> On 04.05.12 at 18:01, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>> From: David Vrabel <david.vrabel@xxxxxxxxxx>
>>>
>>> The dom0_max_vcpus command line option only allows the exact number of
>>> VCPUs for dom0 to be set.  It is not possible to say "up to N VCPUs
>>> but no more than the number physically present."
>>>
>>> Add min: and max: prefixes to the option to set a minimum number of
>>> VCPUs, and a maximum which does not exceed the number of PCPUs.
>>>
>>> For example, with "dom0_max_vcpus=min:4,max:8":
>>
>> Both "...max...=min:..." and "...max...=max:" look pretty odd to me;
>> how about simply allowing a range along with a simple number? (Since
>> negative values make no sense, omitting either side of the range
>> would be supportable if necessary.)
> 
> I was copying the way dom0_mem worked but yeah, it's not very pretty.
> 
> Is dom0_max_vcpus=<min>-<max> (e.g., dom0_max_vcpus=4-8) what you were
> thinking of?
> 
> Using a single value would have to set both <min> and <max>, otherwise
> the behaviour of the option changes (i.e., =N is the same as =N-N).

This is what I ended up with.
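
Before the patch itself, here is a standalone sketch of the clamping
rule being implemented (plain userspace C for illustration only;
dom0_vcpus() is a made-up helper, not hypervisor code):

    #include <stdio.h>

    /* Dom0 gets one VCPU per PCPU, bounded below by <min> and
     * above by <max>. */
    static unsigned int dom0_vcpus(unsigned int pcpus,
                                   unsigned int min, unsigned int max)
    {
        unsigned int n = pcpus;

        if ( n < min )
            n = min;            /* may exceed the PCPU count */
        if ( n > max )
            n = max;
        return n;
    }

    int main(void)
    {
        unsigned int pcpus;

        /* "dom0_max_vcpus=4-8", i.e. min = 4, max = 8: */
        for ( pcpus = 2; pcpus <= 10; pcpus += 2 )
            printf("%2u PCPUs -> %u dom0 VCPUs\n",
                   pcpus, dom0_vcpus(pcpus, 4, 8));
        return 0;
    }

This prints the same table as in the patch description below.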

8<------------------------------
From af1543965db76ab81139de7f072a7c4daf61157f Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@xxxxxxxxxx>
Date: Fri, 4 May 2012 16:09:52 +0100
Subject: [PATCH] x86: make the dom0_max_vcpus option more flexible

The dom0_max_vcpus command line option only allows the exact number of
VCPUs for dom0 to be set.  It is not possible to say "up to N VCPUs
but no more than the number physically present."

Allow the option to take a range instead.  Dom0 then gets one VCPU
per PCPU, but never fewer than the minimum and never more than the
maximum given in the range.

For example, with "dom0_max_vcpus=4-8":

    PCPUs  Dom0 VCPUs
     2      4
     4      4
     6      6
     8      8
    10      8

Existing command lines with "dom0_max_vcpus=N" still work as before.

Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
---
 docs/misc/xen-command-line.markdown |   29 +++++++++++++++++++++--
 xen/arch/x86/domain_build.c         |   43 +++++++++++++++++++++++++---------
 2 files changed, 57 insertions(+), 15 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index a6195f2..4e4f713 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -272,10 +272,33 @@ Specify the bit width of the DMA heap.
 
 ### dom0\_ioports\_disable
 ### dom0\_max\_vcpus
-> `= <integer>`
 
-Specify the maximum number of vcpus to give to dom0.  This defaults
-to the number of pcpus on the host.
+Either:
+
+> `= <integer>`
+
+The number of VCPUs to give to dom0.  This number of VCPUs can be more
+than the number of PCPUs on the host.  The default is the number of
+PCPUs.
+
+Or:
+
+> `= <min>-<max>` where `<min>` and `<max>` are integers.
+
+Gives dom0 a number of VCPUs equal to the number of PCPUs, but always
+at least `<min>` and no more than `<max>`.  Using `<min>` may give
+dom0 more VCPUs than there are PCPUs.  Either `<min>` or `<max>` may
+be omitted, in which case the defaults of 1 and unlimited are used.
+
+For example, with `dom0_max_vcpus=4-8`:
+
+   PCPUs | Dom0 VCPUs
+   ------+-----------
+     2   |     4
+     4   |     4
+     6   |     6
+     8   |     8
+    10   |     8
 
 ### dom0\_mem (ia64)
 > `= <size>`
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b3c5d4c..686b626 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -82,20 +82,39 @@ static void __init parse_dom0_mem(const char *s)
 }
 custom_param("dom0_mem", parse_dom0_mem);
 
-static unsigned int __initdata opt_dom0_max_vcpus;
-integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
+static unsigned int __initdata opt_dom0_max_vcpus_min = 1;
+static unsigned int __initdata opt_dom0_max_vcpus_max = UINT_MAX;
+
+static void __init parse_dom0_max_vcpus(const char *s)
+{
+    if ( *s == '-' )                        /* -M */
+        opt_dom0_max_vcpus_max = simple_strtoul(s + 1, &s, 0);
+    else {                                  /* N, N-, or N-M */
+        opt_dom0_max_vcpus_min = simple_strtoul(s, &s, 0);
+        if ( *s == '\0' )                   /* N */
+            opt_dom0_max_vcpus_max = opt_dom0_max_vcpus_min;
+        else if ( *s++ == '-' && *s != '\0' ) /* N-M */
+            opt_dom0_max_vcpus_max = simple_strtoul(s, &s, 0);
+    }
+}
+custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
 
 struct vcpu *__init alloc_dom0_vcpu0(void)
 {
-    if ( opt_dom0_max_vcpus == 0 )
-        opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
-    if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
-        opt_dom0_max_vcpus = MAX_VIRT_CPUS;
+    unsigned int max_vcpus;
+
+    max_vcpus = num_cpupool_cpus(cpupool0);
+    if ( opt_dom0_max_vcpus_min > max_vcpus )
+        max_vcpus = opt_dom0_max_vcpus_min;
+    if ( opt_dom0_max_vcpus_max < max_vcpus )
+        max_vcpus = opt_dom0_max_vcpus_max;
+    if ( max_vcpus > MAX_VIRT_CPUS )
+        max_vcpus = MAX_VIRT_CPUS;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
+    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
     if ( !dom0->vcpu )
         return NULL;
-    dom0->max_vcpus = opt_dom0_max_vcpus;
+    dom0->max_vcpus = max_vcpus;
 
     return alloc_vcpu(dom0, 0, 0);
 }
@@ -185,11 +204,11 @@ static unsigned long __init compute_dom0_nr_pages(
     unsigned long max_pages = dom0_max_nrpages;
 
     /* Reserve memory for further dom0 vcpu-struct allocations... */
-    avail -= (opt_dom0_max_vcpus - 1UL)
+    avail -= (d->max_vcpus - 1UL)
              << get_order_from_bytes(sizeof(struct vcpu));
     /* ...and compat_l4's, if needed. */
     if ( is_pv_32on64_domain(d) )
-        avail -= opt_dom0_max_vcpus - 1;
+        avail -= d->max_vcpus - 1;
 
     /* Reserve memory for iommu_dom0_init() (rough estimate). */
     if ( iommu_enabled )
@@ -883,10 +902,10 @@ int __init construct_dom0(
     for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
         shared_info(d, vcpu_info[i].evtchn_upcall_mask) = 1;
 
-    printk("Dom0 has maximum %u VCPUs\n", opt_dom0_max_vcpus);
+    printk("Dom0 has maximum %u VCPUs\n", d->max_vcpus);
 
     cpu = cpumask_first(cpupool0->cpu_valid);
-    for ( i = 1; i < opt_dom0_max_vcpus; i++ )
+    for ( i = 1; i < d->max_vcpus; i++ )
     {
         cpu = cpumask_cycle(cpu, cpupool0->cpu_valid);
         (void)alloc_vcpu(d, i, cpu);
-- 
1.7.2.5
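
For reference, the forms the new parser accepts, with illustrative
(untested) example command lines:

    dom0_max_vcpus=4      exactly 4 VCPUs (same behaviour as before)
    dom0_max_vcpus=4-8    one VCPU per PCPU, clamped to [4, 8]
    dom0_max_vcpus=4-     at least 4, otherwise one per PCPU
    dom0_max_vcpus=-8     one per PCPU, but never more than 8

(All counts are additionally capped at MAX_VIRT_CPUS.)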

