
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.



Hi, Emmanuel

 Thank you for your patch.
I tested it in my environment with three scheduler configurations:

1)Credit w/ Boost
2)Credit(previous)
3)SEDF(previous)

      1)     2)     3)
      44     16     33
     133     66    133
     533    266    266
(Kbps)

With this patch, the credit scheduler becomes I/O-aware.
(At csched_vcpu_wake, the priority changes from UNDER to BOOST;
at csched_vcpu_acct, it changes from BOOST back to UNDER.)
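
For reference, here are the two transitions in isolation, as a minimal
standalone sketch (not Xen code; on_wake/on_acct are hypothetical
stand-ins for csched_vcpu_wake/csched_vcpu_acct, with the macro names
and values mirroring the patch below):

  #include <stdio.h>

  #define CSCHED_PRI_TS_BOOST  0   /* time-share waking up */
  #define CSCHED_PRI_TS_UNDER -1   /* time-share w/ credits */

  static int pri = CSCHED_PRI_TS_UNDER;

  static void on_wake(void)        /* wake path: UNDER -> BOOST */
  {
      if ( pri == CSCHED_PRI_TS_UNDER )
          pri = CSCHED_PRI_TS_BOOST;
  }

  static void on_acct(void)        /* accounting path: BOOST -> UNDER */
  {
      if ( pri == CSCHED_PRI_TS_BOOST )
          pri = CSCHED_PRI_TS_UNDER;
  }

  int main(void)
  {
      on_wake();                           /* I/O wakes the VCPU */
      printf("after wake: %d\n", pri);     /* 0 (BOOST) */
      on_acct();                           /* tick catches it running */
      printf("after acct: %d\n", pri);     /* -1 (UNDER) */
      return 0;
  }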

It seems a reasonable fix!
But I am concerned about the case where many I/O-intensive guest OSes
run at once, since they would then all hold BOOST at the same time.
(I hope this is a needless fear.)

Thanks
Atsushi SAKAI


>Thanks for sending me the full logs!
>
>I took a look and I do indeed see some cycles during which
>dom0 and the I/O-generating domU don't preempt the spinners.
>I believe this is because those domains don't always consume
>enough CPU to appear in the accounting paths.
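>
>To illustrate (a standalone sketch, not Xen code; the 10ms tick
>period and the wake timing are illustrative assumptions about how the
>periodic accounting tick samples whichever VCPU is running):
>
>  #include <stdio.h>
>
>  #define TICK_MS 10   /* illustrative accounting tick period */
>
>  int main(void)
>  {
>      int ran_ms = 0, charged = 0;
>
>      /* A VCPU that wakes 3ms after each tick and runs for 1ms is
>       * never the running VCPU when a tick fires, so it is never
>       * charged by the accounting path. */
>      for ( int t = 0; t < 1000; t++ )
>      {
>          int running = ((t % TICK_MS) == 3);
>          if ( running )
>              ran_ms++;
>          if ( running && ((t % TICK_MS) == 0) )
>              charged++;
>      }
>      printf("ran %dms of 1000ms, charged %d times\n", ran_ms, charged);
>      return 0;
>  }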
>
>I have coded up a fix which should make things better for
>I/O-intensive domains that use few CPU resources. I am
>including the patch here. It applies to the tip of xen-unstable.
>
>Can you try out this patch and let me know how it works?
>
>Thanks,
>Emmanuel.

>diff -r 0c7923eb6b98 xen/common/sched_credit.c
>--- a/xen/common/sched_credit.c        Wed Oct 25 10:27:03 2006 +0100
>+++ b/xen/common/sched_credit.c        Wed Oct 25 11:11:22 2006 +0100
>@@ -46,6 +46,7 @@
> /*
>  * Priorities
>  */
>+#define CSCHED_PRI_TS_BOOST      0      /* time-share waking up */
> #define CSCHED_PRI_TS_UNDER     -1      /* time-share w/ credits */
> #define CSCHED_PRI_TS_OVER      -2      /* time-share w/o credits */
> #define CSCHED_PRI_IDLE         -64     /* idle */
>@@ -410,6 +411,14 @@ csched_vcpu_acct(struct csched_vcpu *svc
>
>         spin_unlock_irqrestore(&csched_priv.lock, flags);
>     }
>+
>+    /*
>+     * If this VCPU's priority was boosted when it last awoke, reset it.
>+     * If the VCPU is found here, then it's consuming a non-negligible
>+     * amount of CPU resources and should no longer be boosted.
>+     */
>+    if ( svc->pri == CSCHED_PRI_TS_BOOST )
>+        svc->pri = CSCHED_PRI_TS_UNDER;
> }
>
> static inline void
>@@ -566,6 +575,25 @@ csched_vcpu_wake(struct vcpu *vc)
>     else
>         CSCHED_STAT_CRANK(vcpu_wake_not_runnable);
>
>+    /*
>+     * We temporarily boost the priority of waking VCPUs!
>+     *
>+     * If this VCPU consumes a non-negligible amount of CPU, it
>+     * will eventually find itself in the credit accounting code
>+     * path where its priority will be reset to normal.
>+     *
>+     * If on the other hand the VCPU consumes little CPU and is
>+     * blocking and awoken a lot (doing I/O for example), its
>+     * priority will remain boosted, optimizing its wake-to-run
>+     * latencies.
>+     *
>+     * This allows wake-to-run latency sensitive VCPUs to preempt
>+     * more CPU resource intensive VCPUs without impacting overall
>+     * system fairness.
>+     */
>+    if ( svc->pri == CSCHED_PRI_TS_UNDER )
>+        svc->pri = CSCHED_PRI_TS_BOOST;
>+
>     /* Put the VCPU on the runq and tickle CPUs */
>     __runq_insert(cpu, svc);
>     __runq_tickle(cpu, svc);
>@@ -659,7 +687,7 @@ csched_runq_sort(unsigned int cpu)
>         next = elem->next;
>         svc_elem = __runq_elem(elem);
>
>-        if ( svc_elem->pri == CSCHED_PRI_TS_UNDER )
>+        if ( svc_elem->pri >= CSCHED_PRI_TS_UNDER )
>         {
>             /* does elem need to move up the runq? */
>             if ( elem->prev != last_under )
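>
>A note on this last hunk: since BOOST (0) > UNDER (-1) > OVER (-2),
>priority-ordered runq insertion already places boosted VCPUs ahead of
>the others; changing the test to >= simply keeps them in the "move
>up" class when the runq is re-sorted. A tiny standalone sketch of the
>ordering (illustrative, not Xen code):
>
>  #include <stdio.h>
>  #include <stdlib.h>
>
>  #define BOOST   0
>  #define UNDER  -1
>  #define OVER   -2
>
>  static int cmp(const void *a, const void *b)
>  {
>      /* higher priority value first, as on the runq */
>      return *(const int *)b - *(const int *)a;
>  }
>
>  int main(void)
>  {
>      int runq[] = { UNDER, OVER, BOOST, UNDER };
>      unsigned i;
>
>      qsort(runq, sizeof(runq) / sizeof(runq[0]), sizeof(runq[0]), cmp);
>      for ( i = 0; i < sizeof(runq) / sizeof(runq[0]); i++ )
>          printf("%d ", runq[i]);          /* prints: 0 -1 -1 -2 */
>      printf("\n");
>      return 0;
>  }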

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel