[Xen-users] Xen domU physical partition disk I/O write throughput %50 slower

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Xen domU physical partition disk I/O write throughput %50 slower
From: "Hills, Steve" <Steve.Hills@xxxxxxxxxxxx>
Date: Tue, 28 Sep 2010 17:53:15 -0400
Delivery-date: Tue, 28 Sep 2010 14:54:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: ActfV42gwWk7v269SVi//JnkTIdDJg==
Thread-topic: Xen domU physical partition disk I/O write throughput %50 slower

dom0 is SLES 11 SP1; the domUs are paravirtualized SLES 10/11. A local physical disk partition is attached to the domU via "phy:" (3 SAS disks in RAID1+).
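For reference, the attachment is the usual "phy:" disk line in the domU config, along these lines (the device and vdev names here are only examples, not the real ones):

    disk = [ 'phy:/dev/sdb1,xvdb,w' ]    # local physical partition exported to the domU as xvdb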

Write throughput from the domU is 50% lower than write throughput to the same partition from dom0. Read throughput is roughly equivalent. Tested with bonnie (e.g. with a data size of 2x physical memory) and "dd conv=fdatasync", with the partition empty (no files). domU CPU usage is low during the tests.
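Roughly the sort of runs used, in dom0 and then inside the domU against the same empty partition (the paths and sizes below are illustrative only; bonnie was actually run with a data size of about 2x physical memory):

    mount /dev/xvdb /mnt/test                                   # /dev/sdb1 when testing from dom0
    dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096 conv=fdatasync
    bonnie -d /mnt/test -s 8192                                  # size ~2x physical memory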

I've tried the "Xen best practices", various memory/CPU sizes for dom0/domU, and elevator=noop, but there is always a 50% difference.
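By elevator=noop I mean roughly the following, either on the kernel command line or per device at runtime (sdb is just an example device here):

    # at boot: add elevator=noop to the kernel line in /boot/grub/menu.lst
    echo noop > /sys/block/sdb/queue/scheduler
    cat /sys/block/sdb/queue/scheduler     # shows [noop] ... when active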

The question is: how do I find and eliminate the bottleneck? I don't see any way to tune the Xen split drivers or the hypervisor with regard to block I/O.

Thanks,
Steve Hills
Teradata Corporation
steve.hills@xxxxxxxxxxxx

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users