xen-users

[Xen-users] memory squeeze in netback driver

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] memory squeeze in netback driver
From: Greg Woods <woods@xxxxxxxx>
Date: Sat, 23 Apr 2011 10:30:18 -0600
Delivery-date: Sat, 23 Apr 2011 09:31:31 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
I am running CentOS 5.6 with the native version of Xen for that distro
(it claims to be 3.0.3, but I seem to remember it is actually newer than
that). I am also running a high-availability two-node cluster with
heartbeat and pacemaker. In normal operation, the DomUs are split
between the two servers. When one server is taken down for maintenance,
all the DomUs run on the remaining server. This really shouldn't be a
problem; in this mode, "xm top" shows that Dom0 still has 46% of the
memory, so it does not appear to be short of RAM.
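For what it's worth, here is roughly how I check that (a quick sketch;
it assumes the stock xm toolstack shipped with CentOS 5, where Dom0 is
listed as Domain-0):

  xm list Domain-0                                # "Mem" column is Dom0's current allocation
  xm info | grep -E 'total_memory|free_memory'    # hypervisor-wide totals and free pool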

At first all looks good, and as long as I don't mess with it, the DomUs
all function fine in this mode. I do have a couple of VMs set up that
normally do not run; they exist only to be cloned. But I still have to
keep them updated, which means they have to be started manually now and
then to apply updates. This is where the trouble starts.

When the DomUs are split between both servers, there is no issue. But
when the DomUs are all on one node, then as soon as I start one of
these extra DomUs, I start seeing these messages:

Apr 23 09:31:57 vmx2 kernel: xen_net: Memory squeeze in netback driver.
Apr 23 09:31:59 vmx2 kernel: xenbr0: port 20(vif49.0) entering disabled state

Once this happens, all the VMs become unreachable, both from outside
the cluster and from the Dom0. Stopping the extra DomU restores service
immediately. As noted above, the host system does not appear to be
short of RAM.
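
To see the bridge port state that the second log message refers to,
something like this should work (a sketch; it assumes the standard
Linux bridge tools, and the bridge and vif names are taken from the
log above):

  brctl show xenbr0        # list the ports currently attached to the bridge
  ifconfig vif49.0         # the vif named in the "entering disabled state" message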

Does anyone know what causes this, or what I could do to fix it? I am
hoping there is just some config parameter I can set to give more of
the available RAM to the netback driver (whatever that is) to
alleviate this.
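
For reference, the standard Dom0 memory knobs look like this (a sketch
with placeholder values; the paths assume the usual CentOS 5 Xen
layout, and whether these actually help with the netback squeeze is
exactly what I am asking):

  # /boot/grub/grub.conf: append dom0_mem=1024M to the existing "kernel /xen.gz-..." line
  # /etc/xen/xend-config.sxp: raise the Dom0 memory floor (value in MB)
  (dom0-min-mem 1024)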

Thanks in advance,
--Greg


