WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-users] Xen 3.4.3 - bad IO performance on drbd devices

To: Xen Users <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Xen 3.4.3 - bad IO performance on drbd devices
From: Felix Botner <botner@xxxxxxxxxxxxx>
Date: Tue, 20 Jul 2010 10:46:34 +0200
Hi everyone,

I have two servers running a Debian/Lenny-based OS (64-bit) with a Debian/Sid-based kernel 2.6.32-xen-amd64 and Xen 3.4.3-4. Each server has two DRBD devices (protocol C, formatted with ext4) and is primary for one of them. Each DRBD pair has a dedicated network interface (a bonding mode 0 interface with two 1000 Mbps cards).
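For reference, each resource is defined roughly like this (a minimal sketch; the backing disk, hostnames, and addresses are placeholders, not my exact config):

```
resource drbd0 {
    protocol C;                  # synchronous replication
    device    /dev/drbd0;
    disk      /dev/sdb1;         # placeholder backing device
    on server1 {
        address 10.0.0.1:7788;   # reached via bond0 (eth2+eth3, mode 0)
    }
    on server2 {
        address 10.0.0.2:7788;
    }
}
```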

+------------------------------------------+
| server 1                                 |
| +----------------+  +------------------+ |
| | primary drbd0  |  |  secondary drbd1 | |
| | (protocol c)   |  | (protocol c)     | |
| +----------------+  +------------------+ |
|         |                     |          |           
|         |                     |          |
| +----------------+  +------------------+ |
| | +----+ +----+  |  | +----+ +----+    | |
| | |eth2| |eth3|  |  | |eth4| |eth5|    | |
| | +----+ +----+  |  | +----+ +----+    | |
| |                |  |                  | |
| | bond0 (mode 0) |  |  bond1 (mode 0)  | |
| +-------|--------+  +--------|---------+ |
+---------|--------------------|-----------+
          |                    |
          |                    |
+---------|--------------------|-----------+
| +-------|--------+  +--------|---------+ |
| | bond0 (mode 0) |  |  bond1 (mode 0)  | |
| |                |  |                  | |
| | +----+ +----+  |  | +----+ +----+    | |
| | |eth2| |eth3|  |  | |eth4| |eth5|    | |
| | +----+ +----+  |  | +----+ +----+    | |
| +----------------+  +------------------+ |
|         |                     |          |
|         |                     |          |
| +----------------+  +------------------+ |
| | (protocol c)   |  | (protocol c)     | |
| | secondary drbd0|  | primary drbd1    | |
| +----------------+  +------------------+ |
| server 2                                 |
+------------------------------------------+

The I/O performance on the connected DRBD devices is significantly worse when I boot the kernel under the Xen hypervisor (with "kernel /boot/xen-3.4.3.gz"). Without the hypervisor (but with the same kernel), throughput is nearly twice as high; put the other way around, the Xen-booted system is roughly 45% slower.
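For completeness, the two boot entries look roughly like this (a sketch; kernel/initrd paths and the root device are placeholders, not my exact menu.lst):

```
# /boot/grub/menu.lst (sketch)

# Entry 1: dom0 under the Xen hypervisor
title  Xen 3.4.3 / Linux 2.6.32-xen-amd64
kernel /boot/xen-3.4.3.gz dom0_mem=2048M dom0_max_vcpus=2 dom0_vcpus_pin
module /boot/vmlinuz-2.6.32-xen-amd64 root=/dev/sda1 ro
module /boot/initrd.img-2.6.32-xen-amd64

# Entry 2: the same kernel booted natively (no hypervisor)
title  Linux 2.6.32-xen-amd64 (native)
kernel /boot/vmlinuz-2.6.32-xen-amd64 root=/dev/sda1 ro
initrd /boot/initrd.img-2.6.32-xen-amd64
```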

bonnie++ (bonnie++ -f -d /mnt/drbd0 -s 60000 -u root, run against /mnt/drbd0 or /mnt/drbd1 respectively) on the system started with the Xen hypervisor gives me 115825 K/sec (Sequential Output, Block) on average.

server1,60000M,,,117217,23,73652,20,,,189724,24,475.0,0,16,29704,64,+++++,+++,
+++++,+++,+++++,+++,+++++,+++,+++++,+++
server2,60000M,,,120874,26,56761,16,,,359506,42,485.5,0,16,28287,64,+++++,+++,
+++++,+++,+++++,+++,+++++,+++,+++++,+++

server1,60000M,,,112371,22,73340,20,,,192337,24,478.2,0,16,31139,75,+++++,+++,
+++++,+++,+++++,+++,+++++,+++,+++++,+++
server2,60000M,,,117687,25,58340,17,,,355595,42,565.1,0,16,+++++,+++,+++++,
+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

bonnie++ without the hypervisor (but with the same kernel and modules) gives me 212654 K/sec (Sequential Output, Block) on average.

server1,60000M,,,236265,33,76183,9,,,278127,15,722.6,0,16,+++++,+++,+++++,+++,
+++++,+++,+++++,+++,+++++,+++,+++++,+++
server2,60000M,,,192337,31,77391,9,,,297503,16,677.5,0,16,+++++,+++,+++++,+++,
+++++,+++,+++++,+++,+++++,+++,+++++,+++

server1,60000M,,,236486,35,82206,9,,,256470,13,744.6,1,16,+++++,+++,+++++,+++,
+++++,+++,+++++,+++,+++++,+++,+++++,+++
server2,60000M,,,185530,29,79903,9,,,318024,18,748.9,1,16,+++++,+++,+++++,+++,
+++++,+++,+++++,+++,+++++,+++,+++++,+++
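In case anyone wants to check my arithmetic: the averages come from the Sequential Output (Block) column of the bonnie++ CSV output, which is the fifth comma-separated field. A quick sketch (here using the four no-hypervisor runs above, with the wrapped lines rejoined and the "+++++" latency fields trimmed for readability):

```python
# Average the "Sequential Output, Block" throughput (K/sec) from
# bonnie++ CSV result lines. Field index 4 (0-based) holds that value.
csv_lines = [
    "server1,60000M,,,236265,33,76183,9,,,278127,15,722.6,0,16",
    "server2,60000M,,,192337,31,77391,9,,,297503,16,677.5,0,16",
    "server1,60000M,,,236486,35,82206,9,,,256470,13,744.6,1,16",
    "server2,60000M,,,185530,29,79903,9,,,318024,18,748.9,1,16",
]

def avg_seq_output_block(lines):
    """Return the mean of the Sequential Output (Block) field."""
    values = [float(line.split(",")[4]) for line in lines]
    return sum(values) / len(values)

print(avg_seq_output_block(csv_lines))  # ~212654 K/sec for these runs
```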


Why is there such a difference?
Can I optimize my xend (I already added dom0_mem=2048M dom0_max_vcpus=2 
dom0_vcpus_pin as boot parameters, with no effect)?
Are there any known issues using Xen with bonding/DRBD?

Feel free to ask for more information about the system or the setup.

Many thanks

-- 
Felix Botner

Open Source Software Engineer

Univention GmbH
Linux for your business
Mary-Somerville-Str.1
28359 Bremen
Tel. : +49 421 22232-0
Fax : +49 421 22232-99

botner@xxxxxxxxxxxxx
http://www.univention.de

Geschäftsführer: Peter H. Ganten
HRB 20755 Amtsgericht Bremen
Steuer-Nr.: 71-597-02876 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
