
Re: [Xen-devel] Credit scheduler vs SEDF scheduler


  • To: gaurav somani <onlineengineer@xxxxxxxxx>
  • From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Tue, 5 May 2009 11:47:18 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 05 May 2009 03:47:46 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

A couple of comments:

* Why did you pin the vcpus to pcpus?  AIUI, pings will always be
handled by vcpu 0.  So if you pin vcpu0 to pcpu0, and pcpu0 is
busy, it can't migrate over to pcpu1 even if pcpu1 is idle.  Try unpinning
the vcpus and see if that changes anything (example commands below this list).
* The Credit scheduler is known to have some issues with
latency-sensitive workloads.  Workloads like pass-through video are
becoming more important, so there's been a lot of discussion about
this subject.  I'm working on a new scheduler, credit2, that will
hopefully address a lot of these issues.
* "Ping" is not an application that people find it important to
virtualize. :-)  Remember that end-to-end application performance and
fairness are the high-level goals, so although "ping" may be a useful
test to isolate certain aspects of a scheduler, it should never be
used to evaluate the "goodness" of one scheduler over another.
* That said, it's not clear to me (given what I know of sched_credit
and ping) why you'd see these results.
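
For what it's worth, unpinning is just an xm one-liner per vcpu; something
like the following (from memory, so double-check with "xm help vcpu-pin"):

    # allow vcpu 0 and vcpu 1 of domain 1 to run on any pcpu (removes the pinning)
    xm vcpu-pin 1 0 all
    xm vcpu-pin 1 1 all

    # verify the new affinity ("any cpu" should show in the CPU Affinity column)
    xm vcpu-list 1

Repeat for each domain you had pinned.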

 -George

On Tue, May 5, 2009 at 10:25 AM, gaurav somani <onlineengineer@xxxxxxxxx> wrote:
> Hi list,
>
> I am evaluating the scheduler behavior in xen.
>
> I am using Xen 3.3.0
> Dom0 and Dom1, 2, 3 and 4 are all openSUSE 11.
> I have one CPU-intensive TEST program, which runs a number of arithmetic
> instructions in an infinite while() loop.
> I am pinging domain1 from an external machine and noting the RTT values.
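
(For anyone trying to reproduce this: the TEST program itself was not posted,
but a minimal CPU-bound loop of the kind described would look roughly like
this:)

    int main(void)
    {
        /* volatile keeps the compiler from optimising the loop away */
        volatile double x = 1.0;

        /* arbitrary arithmetic in an infinite while() loop to keep one CPU busy */
        while (1)
            x = x * 1.000001 + 0.5;

        return 0;  /* never reached */
    }

Run it inside the guest and measure from an external host with a plain
"ping <domU address>", recording the reported RTTs.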
>
> I ran the following experiment:
> time (s)      domain state
> 0             dom0,1,2,3,4 all idle
> 50            dom2 TEST started
> 100           dom3 TEST started
> 150           dom4 TEST started
> 200           dom0 TEST started
> 250           dom2 TEST stopped
> 300           dom3 TEST stopped
> 350           dom4 TEST stopped
> 400           dom0 TEST stopped
>
> Over these 400 seconds, I performed the experiment with both the Credit and
> SEDF schedulers.
> The configurations are:
>
>
> Credit configuration - weight 256, cap 0
> Domain        VCPUs
> 0             2
> 1             2
> 2             2
> 3             2
> 4             2
> All vcpu0s are pinned to pcpu0 and all vcpu1s to pcpu1.
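
(As an aside, those Credit parameters can be checked or set per domain with
xm, e.g. for domain 1:)

    # show the current weight/cap for domain 1
    xm sched-credit -d 1

    # set weight 256 and cap 0 (0 means "no cap") explicitly
    xm sched-credit -d 1 -w 256 -c 0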
>
>
> SEDF configuration - period 10 ms, slice 1.9 ms
> Domain        VCPUs
> 0             2
> 1             2
> 2             2
> 3             2
> 4             2
> All vcpu0s are pinned to pcpu0 and all vcpu1s to pcpu1.
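
(Likewise for SEDF; the option names below are from memory, so check
"xm help sched-sedf":)

    # period 10ms, slice 1.9ms for domain 1
    xm sched-sedf 1 -p 10 -s 1.9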
>
> The resulting RTT values are attached. The performance of Credit is much
> worse than that of SEDF in this scenario.
> Please share your thoughts on this.
>
>
> Thanks and Regards
>
> Gaurav somani
> M.Tech (ICT)
> Dhirubhai Ambani Institute of ICT,
> INDIA
>
> http://dcomp.daiict.ac.in/~gaurav
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>
>
