
[Xen-devel] [PATCH RFC 1/2] xen: credit2: flexible configuration of runqueues



The idea is to give the user more flexibility in configuring runqueues.
For most workloads and on most systems, using per-core runqueues means having
too many small runqueues. Using per-socket is almost always better, but it may
result in too few big runqueues.

OPTION 1 :
--------
The user can create per-CPU runqueues using a Xen boot parameter like the one below:

 credit2_runqueue=cpu

which would mean the following:
 - pCPU 0 belongs to runqueue 0
 - pCPU 1 belongs to runqueue 1
 - pCPU 2 belongs to runqueue 2
 and so on.
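
In other words, per-cpu is the degenerate arrangement in which no pCPU ever
shares a runqueue with a peer, so the runqueue index simply follows the pCPU
index. A tiny sketch of the intended mapping (the helper name is made up for
illustration, it is not part of the patch):

 /* Illustration only: with credit2_runqueue=cpu every pCPU gets its
  * own runqueue, so the mapping is effectively the identity. */
 static unsigned int percpu_runqueue_id(unsigned int cpu)
 {
     return cpu;   /* pCPU N -> runqueue N */
 }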

OPTION 2 :
--------
Further, the user could be allowed to specify something like the following:

 credit2_runqueue=0,1,4,5;2,3,6,7;8,9,12,13;10,11,14,15

or (with exactly the same meaning, but perhaps a clearer syntax):

 credit2_runqueue=[[0,1,4,5][2,3,6,7][8,9,12,13][10,11,14,15]]

which would mean the following:
 - pCPUs 0, 1, 4 and 5 belong to runqueue 0
 - pCPUs 2, 3, 6 and 7 belong to runqueue 1
 - pCPUs 8, 9, 12 and 13 belong to runqueue 2
 - pCPUs 10, 11, 14 and 15 belong to runqueue 3
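
OPTION 2 is not implemented by this series. Just to illustrate how the
semicolon-separated syntax could be turned into a per-pCPU runqueue map, here
is a rough sketch (standalone C using strtoul; real Xen code would use its own
string helpers, and the names parse_runqueue_list, cpu_to_rqi and MAX_CPUS are
invented for the example):

 /* Sketch only: parse "0,1,4,5;2,3,6,7;..." into cpu_to_rqi[], where
  * cpu_to_rqi[cpu] is the runqueue index assigned to that pCPU. */
 #include <stdlib.h>

 #define MAX_CPUS 256

 static int parse_runqueue_list(const char *s, unsigned int cpu_to_rqi[MAX_CPUS])
 {
     unsigned int rqi = 0;

     while ( *s )
     {
         char *end;
         unsigned long cpu = strtoul(s, &end, 10);

         if ( end == s || cpu >= MAX_CPUS )
             return -1;              /* not a number, or cpu id out of range */

         cpu_to_rqi[cpu] = rqi;
         s = end;

         if ( *s == ',' )            /* next cpu, same runqueue */
             s++;
         else if ( *s == ';' )       /* start of the next runqueue */
         {
             s++;
             rqi++;
         }
         else if ( *s != '\0' )
             return -1;              /* unexpected character */
     }

     return 0;
 }

For instance, parsing "0,1,4,5;2,3,6,7" sets cpu_to_rqi[0,1,4,5] = 0 and
cpu_to_rqi[2,3,6,7] = 1.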

---
PATCH 1/2 enables creating runqueues per-CPU [OPTION 1].
---

Signed-off-by: Praveen Kumar <kpraveen.lkml@xxxxxxxxx>

---
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index af457c1..2bc0013 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -301,6 +301,9 @@ integer_param("credit2_balance_over", opt_overload_balance_tolerance);
  * want that to happen basing on topology. At the moment, it is possible
  * to choose to arrange runqueues to be:
  *
+ * - per-cpu: meaning that there will be one runqueue per logical cpu. This
+ *            will happen if the opt_runqueue parameter is set to 'cpu'.
+ *
  * - per-core: meaning that there will be one runqueue per each physical
  *             core of the host. This will happen if the opt_runqueue
  *             parameter is set to 'core';
@@ -322,11 +325,13 @@ integer_param("credit2_balance_over", opt_overload_balance_tolerance);
  * either the same physical core, the same physical socket, the same NUMA
  * node, or just all of them, will be put together to form runqueues.
  */
-#define OPT_RUNQUEUE_CORE   0
-#define OPT_RUNQUEUE_SOCKET 1
-#define OPT_RUNQUEUE_NODE   2
-#define OPT_RUNQUEUE_ALL    3
+#define OPT_RUNQUEUE_CPU    0
+#define OPT_RUNQUEUE_CORE   1
+#define OPT_RUNQUEUE_SOCKET 2
+#define OPT_RUNQUEUE_NODE   3
+#define OPT_RUNQUEUE_ALL    4
 static const char *const opt_runqueue_str[] = {
+    [OPT_RUNQUEUE_CPU] = "cpu",
     [OPT_RUNQUEUE_CORE] = "core",
     [OPT_RUNQUEUE_SOCKET] = "socket",
     [OPT_RUNQUEUE_NODE] = "node",
@@ -682,6 +687,8 @@ cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
         BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
                cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
 
+        if ( opt_runqueue == OPT_RUNQUEUE_CPU )
+            continue;
         if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
              (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
              (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
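
(For completeness: the hunks above only introduce the new "cpu" string and the
OPT_RUNQUEUE_CPU symbol; matching the boot parameter string to the symbol is
assumed to happen in the existing credit2_runqueue parsing code, which this
diff does not touch. Roughly, that lookup amounts to something like the sketch
below; the function name and exact behaviour are assumptions, the real parser
in sched_credit2.c may differ.)

 /* Sketch only: match the boot parameter string against opt_runqueue_str[];
  * the array index doubles as the OPT_RUNQUEUE_* value, so "cpu" maps to
  * OPT_RUNQUEUE_CPU. */
 static void parse_credit2_runqueue_sketch(const char *s)
 {
     unsigned int i;

     for ( i = 0; i < ARRAY_SIZE(opt_runqueue_str); i++ )
         if ( !strcmp(s, opt_runqueue_str[i]) )
         {
             opt_runqueue = i;
             return;
         }

     printk("WARNING: unknown credit2_runqueue value '%s'\n", s);
 }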

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

