This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] sedf testing: volunteers please

To: Stephan Diestelhorst <sd386@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] sedf testing: volunteers please
From: Xuehai Zhang <hai@xxxxxxxxxxxxxxx>
Date: Sun, 19 Jun 2005 15:28:15 -0500 (CDT)
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 19 Jun 2005 20:27:21 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <428B3500.3040707@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <428B3500.3040707@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

I enabled the sedf scheduler by applying the patch to the Xen testing tree,
not the unstable tree.

Then I did the following test. I started two user domains (named "vm1" and
"vm2" respectively). I made the following sedf configurations:

xm sedf vm1 0 0 0 0 2
xm sedf vm2 0 0 0 0 8

My intention is to have vm1 reserve 20% of the available CPU and vm2
reserve the remaining 80% (please correct me if my understanding of
sedf here is wrong).

Then I started the "slurp" job in both domains, which prints out the CPU
share continuously. To my surprise, vm1 takes around 4% of the CPU and vm2
occupies around 17%. I was expecting them to share the CPU at something like
20% and 80%, though the ratio of 4% to 17% is similar to that of 20% to
80%. BTW, dom0 didn't run any extra job when I ran the test.
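The weight arithmetic above can be checked with a small sketch (the domain
names and percentages are taken from this mail; the proportional-share
interpretation of the sedf weight parameter is my assumption):

```python
# Configured sedf weights from the xm commands above (assumed to set
# proportional CPU shares; last argument of "xm sedf ... <weight>").
weights = {"vm1": 2, "vm2": 8}
total_weight = sum(weights.values())
expected = {dom: w / total_weight for dom, w in weights.items()}
# expected shares: vm1 -> 0.2 (20%), vm2 -> 0.8 (80%)

# Shares actually reported by slurp in each domain:
observed = {"vm1": 4.0, "vm2": 17.0}
obs_total = sum(observed.values())  # only 21% of the CPU in total
ratios = {dom: s / obs_total for dom, s in observed.items()}
# relative split: vm1 -> ~0.19, vm2 -> ~0.81,
# i.e. close to the configured 20/80 ratio even though the
# absolute utilisation is far below 100%.
```

So the relative split matches the configured weights; only the absolute total is puzzling.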

Could you please let me know why vm1 and vm2 together are given only 21%
(4% + 17%) of the CPU, rather than 100% minus the share taken by dom0?



On Wed, 18 May 2005, Stephan Diestelhorst wrote:

> The new sedf scheduler has been in the xen-unstable repository for a
> couple of days now. As it may become the default scheduler soon, any
> testing now is much appreciated!
> Quick summary can be found in docs/misc/sedf_scheduler_mini-HOWTO.txt
> Future directions:
> -effective scheduling of SMP-guests
>   -clever SMP locking in domains (on the way)
>   -timeslice donating (under construction)
>   -identifying gangs and schedule them together
>   -balancing of domains/ VCPUs
> Any comments/wishes/ideas/... on that are welcome!
> Best,
>   Stephan Diestelhorst
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
