[Xen-users] Double amount of READ DISK traffic on dom0 than from domU?

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Double amount of READ DISK traffic on dom0 than from domU?
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Sat, 17 Jan 2009 09:42:43 +0100
Delivery-date: Sat, 17 Jan 2009 00:43:31 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: it-management http://it-management.at
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.10.3 (Linux/2.6.27.10-ZMI; KDE/4.1.3; x86_64; ; )

Dear list,

did any of you see a similar behaviour? I have a test server with a
single XEN VM running. In it, there's a PostgreSQL DB running a CLUSTER
command, which basically copies the table in a new order. The domU VM
reports about 35MB/s read and 35MB/s write, but the dom0 reports roughly
TWICE the amount on *read* only (79MB/s read, 35MB/s write). The command
with which I watched was

iostat -kx 5 555

on both domU and dom0. Most of the time the values were almost exactly
domU-reads * 2 = dom0-reads. Writes showed the same numbers in both domU
and dom0. Of course the values vary a bit because the iostat command
doesn't run at exactly the same time in domU and dom0, but I watched for
a long time and dom0 consistently shows about twice the reads. I would
understand it the other way round, which could be explained by caching
in dom0, but these results are strange. Anybody got an explanation?
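
In case anyone wants to reproduce the comparison without eyeballing the
numbers, here is a small Python sketch (my own helper, nothing official;
log file paths and device names like sdb/xvdb/xvdd are just examples for
my setup) that sums the rkB/s column per iostat sample and prints the
dom0/domU read ratio:

#!/usr/bin/env python3
"""Rough helper: compare read throughput between dom0 and domU from saved
`iostat -kx 5 555` logs. Paths and device names are only examples."""
import sys


def iostat_samples(path, devices):
    """Yield (read_kBps, write_kBps) sums over `devices` for each iostat sample."""
    totals = None
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            if fields[0].startswith("Device"):      # header line starts a new sample
                if totals is not None:
                    yield totals
                totals = (0.0, 0.0)
            elif totals is not None and fields[0] in devices:
                # columns: Device rrqm/s wrqm/s r/s w/s rkB/s wkB/s ...
                rkb = float(fields[5].replace(",", "."))  # tolerate locale decimal commas
                wkb = float(fields[6].replace(",", "."))
                totals = (totals[0] + rkb, totals[1] + wkb)
    if totals is not None:
        yield totals


if __name__ == "__main__":
    # usage: compare_iostat.py dom0.log sdb domU.log xvdb,xvdd
    dom0_log, dom0_devs, domu_log, domu_devs = sys.argv[1:5]
    dom0 = iostat_samples(dom0_log, set(dom0_devs.split(",")))
    domu = iostat_samples(domu_log, set(domu_devs.split(",")))
    for (r0, _), (r1, _) in zip(dom0, domu):
        ratio = r0 / r1 if r1 else float("nan")
        print(f"dom0 read {r0:9.1f} kB/s   domU read {r1:9.1f} kB/s   ratio {ratio:5.2f}")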

Real Server dom0:
Device:         rrqm/s   wrqm/s      r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00     0,00     0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
sdb               0,00     1,80 10254,40  231,80 79640,00 40258,40    22,87     1,16    0,11   0,04  46,96

XEN VM domU:
Device:         rrqm/s   wrqm/s      r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
xvda              0,00     0,00     0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
xvdb              0,00     0,00   839,92    0,00 35794,01     0,00    85,23     4,90    5,83   1,18  99,00
xvdc              0,00     0,00     0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
xvdd              0,00     0,00     0,00  821,36     0,00 35823,55    87,23    51,17   61,97   1,21  99,48

Later I tested again:

Real Server dom0:
Device:         rrqm/s   wrqm/s      r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00     0,20     0,00    0,40     0,00     2,40    12,00     0,00    0,00   0,00   0,00
sdb               0,20     0,00  3246,31   68,66 24988,42  7826,75    19,80     0,38    0,11   0,07  24,43

XEN VM domU:
Device:         rrqm/s   wrqm/s      r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
xvda              0,00     0,00     0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
xvdb              0,00     0,00     0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
xvdc              0,00     0,00     0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
xvdd              0,00     0,40   277,84  365,27 11681,44 15674,65    85,07    26,68   41,49   0,83  53,49
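
As a quick sanity check on the samples above: in the first test dom0 reads
79640 kB/s while the domU (xvdb) reads 35794 kB/s, a ratio of about 2.2; in
the second test it's 24988 kB/s versus 11681 kB/s (xvdd), about 2.1. So dom0
consistently reads slightly more than twice what the domU reports.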


-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
