
[Xen-devel] [PATCH] cpuidle: fix the menu governor to enhance IO performance


  • To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Wei, Gang" <gang.wei@xxxxxxxxx>
  • From: "Yu, Ke" <ke.yu@xxxxxxxxx>
  • Date: Thu, 10 Dec 2009 19:26:13 +0800
  • Accept-language: en-US
  • Cc: Xen-Devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 10 Dec 2009 03:27:02 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acp5i5TMn8BmiFjEQbGiGEQVN7vBJA==
  • Thread-topic: [PATCH] cpuidle: fix the menu governor to enhance IO performance

cpuidle: fix the menu governor to enhance IO performance

This is a revised version of Linux upstream commit
69d25870f20c4b2563304f2b79c5300dd60a067e:
"
    cpuidle: fix the menu governor to boost IO performance

    Fix the menu idle governor which balances power savings, energy efficiency
    and performance impact.

    The reason for a reworked governor is that there have been serious
    performance issues reported with the existing code on Nehalem server
    systems.

    To show this I'm sure Andrew wants to see benchmark results:
    (benchmark is "fio", "no cstates" is using "idle=poll")

                 no cstates   current linux   new algorithm
    1 disk       107 MB/s      85 MB/s        105 MB/s
    2 disks      215 MB/s     123 MB/s        209 MB/s
    12 disks     590 MB/s     320 MB/s        585 MB/s

    In various power benchmark measurements, no degradation was found by our
    measurement & diagnostics team.  Obviously a small percentage more power
    was used in the "fio" benchmark, due to the much higher performance.

    Signed-off-by: Arjan van de Ven <arjan@xxxxxxxxxxxxxxx>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@xxxxxxxxx>
    Cc: Len Brown <lenb@xxxxxxxxxx>
    Cc: Ingo Molnar <mingo@xxxxxxx>
    Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
    Cc: Yanmin Zhang <yanmin_zhang@xxxxxxxxxxxxxxx>
    Acked-by: Ingo Molnar <mingo@xxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
"

    In the Xen version, most of the logic is similar, with one exception:
    Linux uses nr_iowait and the load average to track pending I/O
    requests, but neither is visible to Xen, so Xen instead uses the
    do_IRQ frequency to estimate I/O pressure. This is less accurate
    than the Linux approach; a better one would be to convey the guest
    latency requirement to the hypervisor via virtual C states, which
    can be a future enhancement.
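
    For illustration, here is a minimal stand-alone sketch of this IRQ-rate
    based estimation (plain C, compilable outside Xen; the constants mirror
    the patch below, while the sample counts and timestamps in main() are
    made up for the example):

        #include <stdio.h>
        #include <stdint.h>

        #define SAMPLING_PERIOD 5000000ULL /* 5 ms, in ns */
        #define IO_MULTIPLIER   4          /* experimental weight for I/O */
        #define DECAY           4          /* running-average decay rate */

        struct perf_factor {
            unsigned int last_irq_count;  /* IRQ count at last window end */
            uint64_t     time_stamp;      /* start of the current window */
            unsigned int factor;          /* decayed IRQ-rate estimate */
        };

        static int performance_multiplier(struct perf_factor *pf,
                                          unsigned int irq_count,
                                          uint64_t now)
        {
            int mult = 1;
            uint64_t duration = now - pf->time_stamp;
            unsigned int delta =
                IO_MULTIPLIER * (irq_count - pf->last_irq_count);

            if (duration < SAMPLING_PERIOD) {
                /* window still open: blend delta in without committing */
                mult += (pf->factor + delta * (DECAY - 1)) / DECAY;
            } else {
                /* window elapsed: rescale to window length and commit */
                unsigned int factor = delta * SAMPLING_PERIOD / duration;
                pf->factor = (pf->factor + factor * (DECAY - 1)) / DECAY;
                pf->time_stamp = now;
                pf->last_irq_count = irq_count;
                mult += pf->factor;
            }
            return mult;
        }

        int main(void)
        {
            struct perf_factor pf = { 0, 0, 0 };
            unsigned int irqs[] = { 200, 450, 460 };  /* cumulative counts */
            uint64_t t[] = { 5000000ULL, 10000000ULL, 15000000ULL };
            int i;

            for (i = 0; i < 3; i++)
                printf("t=%ums mult=%d\n",
                       (unsigned int)(t[i] / 1000000),
                       performance_multiplier(&pf, irqs[i], t[i]));
            return 0;
        }

    A busy I/O period (a large IRQ delta) drives the multiplier up, which in
    menu_select() below raises the bar (s->latency * multiplier) that a deep
    C state must clear before it can be chosen.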

    The detailed algorithm description is in the code comments. With this
    new algorithm, fio benchmark performance improves by ~5% with 1 disk,
    and no power degradation is found in the idle case.

    Signed-off-by: Yu Ke <ke.yu@xxxxxxxxx>

diff -r 8f304c003af4 xen/arch/x86/acpi/cpuidle_menu.c
--- a/xen/arch/x86/acpi/cpuidle_menu.c
+++ b/xen/arch/x86/acpi/cpuidle_menu.c
@@ -30,26 +30,154 @@
 #include <xen/acpi.h>
 #include <xen/timer.h>
 #include <xen/cpuidle.h>
+#include <asm/irq.h>
 
-#define BREAK_FUZZ      4       /* 4 us */
-#define PRED_HISTORY_PCT   50
-#define USEC_PER_SEC 1000000
+#define BUCKETS 6
+#define RESOLUTION 1024
+#define DECAY 4
+#define MAX_INTERESTING 50000
+
+/*
+ * Concepts and ideas behind the menu governor
+ *
+ * For the menu governor, there are 3 decision factors for picking a C
+ * state:
+ * 1) Energy break even point
+ * 2) Performance impact
+ * 3) Latency tolerance (TBD: from guest virtual C state)
+ * These three factors are treated independently.
+ *
+ * Energy break even point
+ * -----------------------
+ * C state entry and exit have an energy cost, and a certain amount of time in
+ * the C state is required to actually break even on this cost. CPUIDLE
+ * provides us this duration in the "target_residency" field. So all that we
+ * need is a good prediction of how long we'll be idle. Like the traditional
+ * menu governor, we start with the actual known "next timer event" time.
+ *
+ * Since there are other sources of wakeups (interrupts for example) than
+ * the next timer event, this estimation is rather optimistic. To get a
+ * more realistic estimate, a correction factor is applied to the estimate,
+ * that is based on historic behavior. For example, if in the past the actual
+ * duration always was 50% of the next timer tick, the correction factor will
+ * be 0.5.
+ *
+ * menu uses a running average for this correction factor; however, it uses a
+ * set of factors, not just a single factor. This stems from the realization
+ * that the ratio is dependent on the order of magnitude of the expected
+ * duration; if we expect 500 milliseconds of idle time the likelihood of
+ * getting an interrupt very early is much higher than if we expect 50
+ * microseconds of idle time.
+ * For this reason we keep an array of 6 independent factors that gets
+ * indexed based on the magnitude of the expected duration.
+ *
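+ * As a worked example (using RESOLUTION 1024 and DECAY 4, defined above):
+ * if sleeps in some bucket historically last about half of the predicted
+ * time, that bucket's correction factor converges towards
+ * 0.5 * RESOLUTION * DECAY = 2048, so a 100 us timer-based estimate is
+ * scaled to a predicted idle time of 100 * 2048 / (RESOLUTION * DECAY)
+ * = 50 us.
+ *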
+ * Limiting Performance Impact
+ * ---------------------------
+ * C states, especially those with large exit latencies, can have a real
+ * noticeable impact on workloads, which is not acceptable for most sysadmins,
+ * and in addition, less performance has a power price of its own.
+ *
+ * As a general rule of thumb, menu assumes that the following heuristic
+ * holds:
+ *     The busier the system, the less impact of C states is acceptable
+ *
+ * This rule-of-thumb is implemented using a performance-multiplier:
+ * If the exit latency times the performance multiplier is longer than
+ * the predicted duration, the C state is not considered a candidate
+ * for selection due to a too high performance impact. So the higher
+ * this multiplier is, the longer we need to be idle to pick a deep C
+ * state, and thus the less likely a busy CPU will hit such a deep
+ * C state.
+ *
+ * Currently one factor is used in determining this multiplier: the
+ * do_IRQ frequency during the sampling period (5 ms), weighted by a
+ * 4x multiplier.
+ * (these values are experimentally determined)
+ *
+ */
+
+struct perf_factor {
+    unsigned int last_irq_count;
+    unsigned int irq_count_sum;
+    s_time_t    time_stamp;
+    unsigned int factor;
+};
 
 struct menu_device
 {
     int             last_state_idx;
     unsigned int    expected_us;
-    unsigned int    predicted_us;
-    unsigned int    current_predicted_us;
-    unsigned int    last_measured_us;
-    unsigned int    elapsed_us;
+    u64             predicted_us;
+    unsigned int    measured_us;
+    unsigned int    exit_us;
+    unsigned int    bucket;
+    u64             correction_factor[BUCKETS];
+    struct perf_factor pf;
 };
 
 static DEFINE_PER_CPU(struct menu_device, menu_devices);
 
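+/*
+ * Map an expected idle duration in microseconds to one of the six
+ * decade-sized buckets used to index the correction factors above.
+ */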
+static inline int which_bucket(unsigned int duration)
+{
+    int bucket = 0;
+
+    if (duration < 10)
+        return bucket;
+    if (duration < 100)
+        return bucket + 1;
+    if (duration < 1000)
+        return bucket + 2;
+    if (duration < 10000)
+        return bucket + 3;
+    if (duration < 100000)
+        return bucket + 4;
+    return bucket + 5;
+}
+
+/*
+ * Return a multiplier for the exit latency that is intended
+ * to take performance requirements into account.
+ * The more performance critical we estimate the system
+ * to be, the higher this multiplier, and thus the higher
+ * the barrier to go to an expensive C state.
+ */
+
+/* 5 millisecond sampling period */
+#define SAMPLING_PERIOD     5000000
+
+/* 4x experimentally-determined multiplier for I/O-intensive loads */
+#define IO_MULTIPLIER        4
+
+static inline int performance_multiplier(void)
+{
+    int mult = 1;
+    unsigned int factor, irq_count_delta;
+    struct menu_device *data = &__get_cpu_var(menu_devices);
+    s_time_t    duration, now;
+
+    now = NOW();
+    duration = now - data->pf.time_stamp;
+
+    irq_count_delta = IO_MULTIPLIER *
+        (this_cpu(irq_count) - data->pf.last_irq_count);
+
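+    /*
+     * Fold the weighted IRQ delta into a decaying running average:
+     * while the 5 ms sampling window is still open, blend the delta
+     * with the stored factor without committing it; once the window
+     * has elapsed, rescale the delta to the window length and commit
+     * the new average.
+     */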
+    if ( duration < SAMPLING_PERIOD )
+    {
+        mult += (data->pf.factor + irq_count_delta * (DECAY-1)) / DECAY;
+    }
+    else
+    {
+        factor = irq_count_delta * SAMPLING_PERIOD / duration;
+        data->pf.factor = (data->pf.factor + factor * (DECAY-1)) / DECAY;
+        data->pf.time_stamp = now;
+        data->pf.last_irq_count = this_cpu(irq_count);
+        mult += data->pf.factor;
+    }
+
+    return mult;
+}
+
 static unsigned int get_sleep_length_us(void)
 {
-    s_time_t us = (per_cpu(timer_deadline, smp_processor_id()) - NOW()) / 1000;
+    s_time_t us = DIV_ROUND_UP(this_cpu(timer_deadline) - NOW(), 1000);
     /*
      * while us < 0 or us > (u32)-1, return a large u32,
      * choose (unsigned int)-2000 to avoid wrapping while added with exit
@@ -62,57 +190,86 @@ static int menu_select(struct acpi_proce
 {
     struct menu_device *data = &__get_cpu_var(menu_devices);
     int i;
+    int multiplier;
 
-    /* determine the expected residency time */
+    /* TBD: change to 0 if C0 (polling mode) support is added later */
+    data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+    data->exit_us = 0;
+
+    /* determine the expected residency time, round up */
     data->expected_us = get_sleep_length_us();
 
-    /* Recalculate predicted_us based on prediction_history_pct */
-    data->predicted_us *= PRED_HISTORY_PCT;
-    data->predicted_us += (100 - PRED_HISTORY_PCT) *
-        data->current_predicted_us;
-    data->predicted_us /= 100;
+    data->bucket = which_bucket(data->expected_us);
+
+    multiplier = performance_multiplier();
+
+    /*
+     * if the correction factor is 0 (e.g. first-time init or CPU hotplug
+     * etc), we actually want to start out with a unity factor.
+     */
+    if (data->correction_factor[data->bucket] == 0)
+        data->correction_factor[data->bucket] = RESOLUTION * DECAY;
+
+    /* Make sure to round up for half microseconds */
+    data->predicted_us = DIV_ROUND(
+            data->expected_us * data->correction_factor[data->bucket],
+            RESOLUTION * DECAY);
 
     /* find the deepest idle state that satisfies our constraints */
-    for ( i = 2; i < power->count; i++ )
+    for ( i = CPUIDLE_DRIVER_STATE_START + 1; i < power->count; i++ )
     {
         struct acpi_processor_cx *s = &power->states[i];
 
-        if ( s->target_residency > data->expected_us + s->latency )
+        if (s->target_residency > data->predicted_us)
             break;
-        if ( s->target_residency > data->predicted_us )
+        if (s->latency * multiplier > data->predicted_us)
             break;
         /* TBD: we need to check the QoS requirement in future */
+        data->exit_us = s->latency;
+        data->last_state_idx = i;
     }
 
-    data->last_state_idx = i - 1;
-    return i - 1;
+    return data->last_state_idx;
 }
 
 static void menu_reflect(struct acpi_processor_power *power)
 {
     struct menu_device *data = &__get_cpu_var(menu_devices);
-    struct acpi_processor_cx *target = &power->states[data->last_state_idx];
-    unsigned int last_residency; 
+    unsigned int last_idle_us = power->last_residency;
     unsigned int measured_us;
+    u64 new_factor;
 
-    last_residency = power->last_residency;
-    measured_us = last_residency + data->elapsed_us;
+    measured_us = last_idle_us;
 
-    /* if wrapping, set to max uint (-1) */
-    measured_us = data->elapsed_us <= measured_us ? measured_us : -1;
+    /*
+     * We correct for the exit latency; we are assuming here that the
+     * exit latency happens after the event that we're interested in.
+     */
+    if (measured_us > data->exit_us)
+        measured_us -= data->exit_us;
 
-    /* Predict time remaining until next break event */
-    data->current_predicted_us = max(measured_us, data->last_measured_us);
+    /* update our correction ratio */
 
-    /* Distinguish between expected & non-expected events */
-    if ( last_residency + BREAK_FUZZ
-         < data->expected_us + target->latency )
-    {
-        data->last_measured_us = measured_us;
-        data->elapsed_us = 0;
-    }
+    new_factor = data->correction_factor[data->bucket]
+        * (DECAY - 1) / DECAY;
+
+    if (data->expected_us > 0 && measured_us < MAX_INTERESTING)
+        new_factor += RESOLUTION * measured_us / data->expected_us;
     else
-        data->elapsed_us = measured_us;
+        /*
+         * we were idle so long that we count it as a perfect
+         * prediction
+         */
+        new_factor += RESOLUTION;
+
+    /*
+     * We don't want 0 as factor; we always want at least
+     * a tiny bit of estimated time.
+     */
+    if (new_factor == 0)
+        new_factor = 1;
+
+    data->correction_factor[data->bucket] = new_factor;
 }
 
 static int menu_enable_device(struct acpi_processor_power *power)
diff -r 8f304c003af4 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -517,6 +517,8 @@ void irq_set_affinity(int irq, cpumask_t
     cpus_copy(desc->pending_mask, mask);
 }      
 
+DEFINE_PER_CPU(unsigned int, irq_count);
+
 asmlinkage void do_IRQ(struct cpu_user_regs *regs)
 {
     struct irqaction *action;
@@ -527,6 +529,8 @@ asmlinkage void do_IRQ(struct cpu_user_r
     struct cpu_user_regs *old_regs = set_irq_regs(regs);
     
     perfc_incr(irqs);
+
+    this_cpu(irq_count)++;
 
     if (irq < 0) {
         ack_APIC_irq();
diff -r 8f304c003af4 xen/include/asm-x86/irq.h
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -105,6 +105,8 @@ extern atomic_t irq_err_count;
 extern atomic_t irq_err_count;
 extern atomic_t irq_mis_count;
 
+DECLARE_PER_CPU(unsigned int, irq_count);
+
 int pirq_shared(struct domain *d , int irq);
 
 int map_domain_pirq(struct domain *d, int pirq, int irq, int type,
diff -r 8f304c003af4 xen/include/xen/cpuidle.h
--- a/xen/include/xen/cpuidle.h
+++ b/xen/include/xen/cpuidle.h
@@ -86,4 +86,6 @@ extern struct cpuidle_governor *cpuidle_
 extern struct cpuidle_governor *cpuidle_current_governor;
 void cpuidle_disable_deep_cstate(void);
 
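+/* State 0 is reserved for a C0 polling state (TBD); real C states start at 1 */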
+#define CPUIDLE_DRIVER_STATE_START  1
+
 #endif /* _XEN_CPUIDLE_H */
diff -r 8f304c003af4 xen/include/xen/lib.h
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -44,6 +44,7 @@ do {                                    
    do { typeof(_a) _t = (_a); (_a) = (_b); (_b) = _t; } while ( 0 )
 
 #define DIV_ROUND(x, y) (((x) + (y) / 2) / (y))
+#define DIV_ROUND_UP(x, y) (((x) + (y) - 1) / (y))
 
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + __must_be_array(x))

Attachment: cpuidle-io.patch
Description: cpuidle-io.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

