
Re: [Xen-devel] [PATCH 4/4] xen/arm: update the docs about heterogeneous computing



Hi Stefano,

On 15/02/18 23:17, Stefano Stabellini wrote:
Update the documentation of the hmp-unsafe option to explain how to use
it safely, together with the right cpu affinity setting, on big.LITTLE
systems.

Also update the warning message to point users to the docs.

Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: jbeulich@xxxxxxxx
CC: konrad.wilk@xxxxxxxxxx
CC: tim@xxxxxxx
CC: wei.liu2@xxxxxxxxxx
CC: andrew.cooper3@xxxxxxxxxx
CC: George.Dunlap@xxxxxxxxxxxxx
CC: ian.jackson@xxxxxxxxxxxxx

---
  docs/misc/xen-command-line.markdown | 10 +++++++++-
  xen/arch/arm/smpboot.c              |  9 +++++----
  2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 2184cb9..a1ebeea 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1007,7 +1007,15 @@ Control Xen's use of the APEI Hardware Error Source Table, should one be found.
Say yes at your own risk if you want to enable heterogenous computing
  (such as big.LITTLE). This may result to an unstable and insecure
-platform. When the option is disabled (default), CPUs that are not
+platform, unless you manually specify the cpu affinity of all domains so
+that all vcpus are scheduled on the same class of pcpus (big or LITTLE
+but not both). vcpu migration between big cores and LITTLE cores is not
+supported. Thus, if the first 4 pcpus are big and the last 4 are LITTLE,
+all domains need to have either cpus = "0-3" or cpus = "4-7" in their VM
+config. Moreover, dom0_vcpus_pin needs to be passed on the Xen command
+line.

In your example here you suggest having all the vCPUs of a guest on either big or LITTLE cores. How about giving an example where the guest has 2 LITTLE vCPUs and one big vCPU?
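For what it's worth, such a mixed layout can be expressed with xl's per-vCPU affinity syntax. A sketch only, assuming (as in the patch's example) that pcpus 0-3 are big and 4-7 are LITTLE; values are illustrative and untested:

```
# Variant 1: all vCPUs pinned to the big cluster (pcpus 0-3 in this
# assumed layout), matching the patch's cpus = "0-3" example.
vcpus = 4
cpus = "0-3"

# Variant 2: per-vCPU affinity -- two LITTLE vCPUs and one big vCPU.
# xl accepts a list with one affinity entry per vCPU.
vcpus = 3
cpus = ["4-7", "4-7", "0-3"]
```

Either form keeps every vcpu on a single class of pcpus, which is the property the patch asks for.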

+
+When the hmp-unsafe option is disabled (default), CPUs that are not
  identical to the boot CPU will be parked and not used by Xen.
### hpetbroadcast
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 7ea4e41..20c1b4a 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -265,7 +265,7 @@ void __init smp_init_cpus(void)
if ( opt_hmp_unsafe )
          warning_add("WARNING: HMP COMPUTING HAS BEEN ENABLED.\n"
-                    "It has implications on the security and stability of the system.\n");

I would still like to keep that line in the warning. Maybe with an "unless" after.

+                    "Make sure to pass dom0_vcpus_pin, and specify the cpu affinity of all domains.\n");
  }
int __init
@@ -306,13 +306,14 @@ void start_secondary(unsigned long boot_phys_offset,
      /*
       * Currently Xen assumes the platform has only one kind of CPUs.
       * This assumption does not hold on big.LITTLE platform and may
-     * result to instability and insecure platform. Better to park them
-     * for now.
+     * result to instability and insecure platform (unless cpu affinity
+     * is manually specified for all domains). Better to park them for
+     * now.
       */
      if ( !opt_hmp_unsafe &&
           current_cpu_data.midr.bits != boot_cpu_data.midr.bits )
      {
-        printk(XENLOG_ERR "CPU%u MIDR (0x%x) does not match boot CPU MIDR (0x%x).\n",
+        printk(XENLOG_ERR "CPU%u MIDR (0x%x) does not match boot CPU MIDR (0x%x), disable cpu. See hmp-unsafe.\n",

I am a bit reluctant to give the option in the message. It is a way for users to enable it without looking at the documentation. Indeed, it is quite obvious from the name that hmp-unsafe is a boolean.

                 smp_processor_id(), current_cpu_data.midr.bits,
                 boot_cpu_data.midr.bits);
          stop_cpu();


Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

