[Xen-devel] paravirtualized vs HVM disk interference (85% vs 15%)

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] paravirtualized vs HVM disk interference (85% vs 15%)
From: "Protti, Duilio J" <duilio.j.protti@xxxxxxxxx>
Date: Mon, 26 Jan 2009 22:38:46 -0700
Accept-language: en-US
Delivery-date: Mon, 26 Jan 2009 21:39:18 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmAQYW6JmoDnts9S3SlKixorZ19LQ==
Thread-topic: paravirtualized vs HVM disk interference (85% vs 15%)

Hi,

We have found that there is a huge degradation in performance when doing I/O to 
disk images contained in single files from a paravirtualized domain and from an 
HVM domain at the same time.

The problem was found on a Xen box with Fedora 8 x86_64 binaries installed (Xen 
3.1.0 + dom0 Linux 2.6.21). The test hardware was a rack-mounted server with 
two 2.66 GHz Xeon X5355 processors (4 cores each, 128 KB L1 cache and 8 MB L2 
cache), 4 GB of RAM and one 250 GB disk. Both the paravirt and HVM domains have 
512 MB of RAM and 8 vCPUs, and both also run a Fedora 8 x86_64 distro.
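
If it helps, the images were attached as ordinary file-backed disks; the guest 
configurations looked roughly like the following (paths and file names here are 
illustrative, not the exact ones we used):

  # PV guest: file-backed image, exposed through a loop device in dom0
  disk = [ 'file:/var/lib/xen/images/pv-fedora8.img,xvda,w' ]

  # HVM guest: same kind of file-backed image, served by qemu-dm
  builder = 'hvm'
  disk = [ 'file:/var/lib/xen/images/hvm-fedora8.img,hda,w' ]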

Stressing the paravirtualized and HVM guests at the same time with disk 
stressing tools such as 'stress', bonnie++ or hand-written dd's, the disk 
activity is not fairly shared between them. The paravirtualized domain gets on 
average 85% of the I/O transfer rate in all the tests, against a poor 15% for 
the HVM domain (the iotop tool was used in dom0 to obtain the data about I/O 
transfers for the qemu-dm and loopX processes).
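
To give an idea of the load, the hand-written dd's were simple sequential 
writes along these lines (file name and size are illustrative), with iotop run 
in parallel in dom0:

  # inside each guest: repeated large sequential writes to its own file
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=512
  # in dom0: show only the processes currently doing I/O (qemu-dm, loopX)
  iotop -o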

Our two questions are:
- Does the same problem exist with the dom0 2.6.18 kernel? We cannot use the 
iotop tool to measure this on 2.6.18, since per-process I/O accounting is not 
supported on it. However, we suppose it should be possible to put the two disk 
images on different HDDs and use the iostat tool from the sysstat suite (which 
allows seeing disk activity on a per-device basis) to observe the behavior; a 
sketch of the commands is below, after the questions.
- Is this behavior intended? We consider this interference undesirable for 
certain deployments (anyone with paravirtualized and HVM guests on the same 
box, for example Linux + Windows 2003, who wants a fair share of the disk 
transfer rate). What we are asking is whether this is a consequence of the Xen 
design and a known behavior, and whether there is a workaround to ameliorate 
the interference (the problem persists, and to the same degree, even if both 
guests perform writes below dom0's dirty_ratio).
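
A sketch of the per-device measurement we have in mind for 2.6.18, assuming 
the two images end up on two separate disks (device names are illustrative):

  # dom0: per-device throughput for the two disks every 5 seconds, in kB/s
  iostat -d -k sdb sdc 5
  # dom0's dirty_ratio, which the write working sets were kept below
  cat /proc/sys/vm/dirty_ratio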

Regards,

Duilio J. Protti,
Alejandro E. Paredes

Intel Argentina Software Development Center


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
