[Xen-users] sEDF Scheduling question

To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] sEDF Scheduling question
From: "Koen van Besien" <koen.vanbesien@xxxxxxxxx>
Date: Sun, 18 May 2008 14:55:00 +0200
Delivery-date: Sun, 18 May 2008 05:55:31 -0700
Hi,

I was wondering whether there is some more information about the sEDF
scheduler and its parameters, or whether there are people here who have
some experience with it.

The information I have found is from three papers concerning Xen.

* Comparison of the Three CPU Schedulers in Xen
www.hpl.hp.com/personal/Lucy_Cherkasova/papers/per-3sched-xen.pdf

* Xen and Co.
csl.cse.psu.edu/publications/vee07.pdf

* Scheduling I/O in Virtual Machine Monitors
www.cs.rice.edu/CS/Architecture/docs/ongaro-vee08.pdf


What I'm trying to do is figure out what the best options are for
NFS performance, and later on for webserver performance.
So I want to check what the influence of the period and slice
allocation is, and also the impact of being work conserving or not
(the extra parameter of sEDF).
Later on I want to do the same for the credit scheduler, which should
be better on my dual-core machine.
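
For reference, this is the general form I use (as far as I understand
the tool; that the bare invocation lists all domains is my assumption,
by analogy with xm sched-credit):

    # list the current scheduler parameters for all domains
    xm sched-sedf

    # set parameters for one domain:
    #   -p period (ms), -s slice (ms), -l latency (ms),
    #   -e extra (1 = work conserving)
    xm sched-sedf <domain> -p <period> -s <slice> -l <latency> -e <0|1>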

This is my machine setup:

Debian Etch with 3 Debian domUs, all on LVM.
Each domU has 256 MB RAM and dom0 has 1 GB of RAM.

o       AMD Athlon 64 X2 Dual Core 5000+
o       4 x 512 MB DDR RAM: 2 GB total
o       Seagate 160 GB S-ATA II, 8 MB cache
o       Debian Etch 64-bit / LVM / ext3


I have 3 domUs running. One domU is running an NFS client, another
one the NFS server. dom0 is also running an NFS server.
The third one is either idle or has a 100% CPU load via the stress
tool: http://weather.ou.edu/~apw/projects/stress/.
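
(For the load I use an invocation along these lines, with the standard
stress flags; one CPU-bound worker is enough to saturate one core:)

    # keep one CPU-bound worker spinning for 10 minutes
    stress --cpu 1 --timeout 600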

I write and read a file via the dd command on the NFS share, which is
mounted over TCP
(mount -t nfs -o tcp 10.10.3.166:/home /mnt/helios166/):

write = dd if=/dev/zero of=./big.file bs=1M count=500
read  = dd if=./big.file of=/dev/null bs=1M
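
(For completeness, one way to time a single run; the drop_caches step
is there so the read cannot be served from the client's page cache and
really goes over NFS:)

    # write test, flushed to the server
    time dd if=/dev/zero of=./big.file bs=1M count=500
    sync

    # drop the client-side page cache before reading back
    echo 3 > /proc/sys/vm/drop_caches

    # read test
    time dd if=./big.file of=/dev/null bs=1M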


I'll briefly summarize my results:

First test (period 10 ms, work conserving)
========

* period of 10 ms, work conserving:
      xm sched-sedf 0 -p 10 -s 5 -l 0 -e 1
  and the same for all other domains (see the loop sketch below)
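
(Concretely, "the same for all other domains" means a loop along these
lines; an untested sketch that parses the domain names out of xm list:)

    # apply identical sEDF parameters to every domain, dom0 included
    for dom in $(xm list | awk 'NR>1 {print $1}'); do
        xm sched-sedf "$dom" -p 10 -s 5 -l 0 -e 1
    done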

* with the third VM idle I get these results:
     domU write = 35 sec
     domU read = 44 sec

     dom0 write = 16 sec
     dom0 read = 22 sec

* with the third VM running a 100% CPU load:
     domU write = 31 sec
     domU read = 87 sec

     dom0 write = 19 sec
     dom0 read = 170 sec


Second test (period 10 ms, non work conserving)
========

* period of 10 ms, NON work conserving:
      xm sched-sedf 0 -p 10 -s 5 -l 0 -e 0
  and the same for all other domains

* with the third VM idle I get these results:
     domU write = 104 sec
     domU read = 94 sec

     dom0 write = 59 sec
     dom0 read = 115 sec

* with the third VM running a 100% CPU load:
     domU write = 110 sec
     domU read = 97 sec

     dom0 write = 95 sec
     dom0 read = 120 sec


Third test (period of 100 ms, work conserving)
========

* period of 100 ms, work conserving:
      xm sched-sedf 0 -p 100 -s 50 -l 0 -e 1
  and the same for all other domains

* with the third VM idle I get these results:
     domU write = 27 sec
     domU read = 86 sec

     dom0 write = 20 sec
     dom0 read = 14 sec

* with the third VM running a 100% CPU load:
     domU write = 57 sec
     domU read = 74 sec

     dom0 write = 34 sec
     dom0 read = 30 sec

Fourth test (period of 100 ms, NON work conserving)
========

* period of 100 ms, NON work conserving:
      xm sched-sedf 1 -p 100 -s 50 -l 0 -e 0
  and the same for all other domains
      REMARK: setting dom0 to NON work conserving gives weird
problems; it then constantly uses 80% CPU.
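
(Per-domain CPU usage is easy to watch while the tests run, for
example with xentop from the Xen tools:)

    # refresh per-domain CPU and network stats every second
    xentop -d 1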

* with the third VM idle I get these results:
     domU write = 17 sec
     domU read = 12.8 sec

     dom0 write = 15.68 sec
     dom0 read = 17.5 sec

* with the third VM running a 100% CPU load:
     domU write = 15.8 sec
     domU read = 16 sec

     dom0 write = 16.59 sec
     dom0 read = 13 sec


So, concluding from these results: do you get the best results when
you use NWC mode with a longer period?

This is only quick testing; I just want to figure out what the exact
parameters are and what influence they have.
Are there some more benchmarks about this?

When I do a tcpdump on the NFS client I see a lot of duplicate ACKs
and TCP retransmissions.
How is this possible when everything is virtual?
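
(This is roughly how I capture, from inside the client domU; 2049 is
the standard NFS port:)

    # capture the NFS traffic into a file for closer inspection
    tcpdump -ni eth0 port 2049 -w nfs-trace.pcap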

Are there some recommendations about which parameters to give to dom0?
I would guess that, because it's not using a lot of CPU, I should not
give it a lot of CPU share? Or is it really important for I/O
interrupt handling that dom0 is scheduled regularly?
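
(For example, would something along these lines make sense? The
numbers are purely illustrative: a short period with a modest slice,
kept work conserving so dom0 can always pick up backend I/O work.)

    # illustrative only: a frequent, small guaranteed share for dom0
    xm sched-sedf 0 -p 20 -s 2 -l 0 -e 1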

I also don't know what the latency parameter (-l) is about.

So if people have some experience with this or can give me some
pointers, that would be great.


greetings

Koen

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
