[Xen-users] All VMs are "blocked", terrible performance

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] All VMs are "blocked", terrible performance
From: Madison Kelly <linux@xxxxxxxxxxx>
Date: Sat, 07 Nov 2009 00:26:28 -0500
Delivery-date: Fri, 06 Nov 2009 21:27:06 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.23 (X11/20090817)
Hi all,

I've built up a handful of VMs on my 2-node cluster, and all of them show as being in a blocked state. Performance is terrible, too. All VMs are currently on one node (another, possibly related, problem is keeping me from migrating any; more on that below).

My Setup:
(each node)
2x Quad Core AMD Opteron 2347 HE
32GB RAM (16GB/CPU)

The nodes share a DRBD partition running cluster-aware LVM for all domU VMs. Each VM has its own logical volume. DRBD has a dedicated gigabit link and is using 'Protocol C', as required, and LVM is set to 'locking_type=3'.
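
For completeness, the setup is roughly the following (the resource name, hostnames, IPs, and backing-disk paths below are placeholders, not my exact config):

/etc/drbd.conf (excerpt):

resource r0 {
    protocol C;                     # synchronous replication
    net {
        allow-two-primaries;        # Primary/Primary, needed for live migration
    }
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.1:7789;    # dedicated gigabit link
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}

/etc/lvm/lvm.conf:

    locking_type = 3                # cluster locking via clvmd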

Here's what I see:

# xm list
Name        ID  Mem(MiB)  VCPUs  State    Time(s)
Domain-0     0     32544      8  r-----   22400.8
auth01      10      1023      1  -b----    3659.7
dev01       22      8191      1  -b----     830.2
fw01        11      1023      1  -b----    1046.9
res01       23      2047      1  -b----     812.1
sql01       24     16383      1  -b----     817.0
web01       20      2047      1  -b----    1156.3
web02       21      1023      1  -b----     931.1

When I ran that, all VMs were running yum update (all but two were fresh installs).

Any idea what's causing this and/or why my performance is so bad? Each VM is taking minutes to install each updated RPM.
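
Things I'm planning to run to narrow it down, in case anyone wants numbers (all standard tools, nothing exotic):

# xentop -b -i 2 -d 5     <- in dom0: per-domain CPU% in batch mode, two samples 5s apart
# vmstat 5                <- inside a domU while yum runs: a high 'wa' column means I/O wait
# iostat -x 5             <- in dom0: watch %util and await on the drbd/backing devices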

In case it's related, when I tried to do a live migration of a VM from one node to the other, I got an error saying:

VM Disk dom0_vg0/web02.disk.xm not found on the destination node vsh03

However, '/proc/drbd' shows both nodes are synced and in Primary/Primary mode. Also, both nodes show identical 'lvdisplay' output (all LVs are 'active'), and all LVs were created during provisioning on the first node.
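
Next I'm going to double-check the device node on the destination side, along these lines (the VG/LV path is taken straight from the error message above):

vsh03# lvs dom0_vg0
vsh03# ls -l /dev/dom0_vg0/web02.disk.xm
vsh03# vgchange -ay dom0_vg0    <- only if the LV shows up but the device node is missing; needs clvmd running with locking_type=3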

I'm kinda stuck here, so any input would be greatly appreciated! Let me know if I can post anything else useful.

Madi

PS: dom0 and all domU are running CentOS 5.4 x86_64

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
