xen-users

RE: [Xen-users] Memory squeeze in netback driver

To: "'Scott Moe'" <smoe868@xxxxxxxxx>
Subject: RE: [Xen-users] Memory squeeze in netback driver
From: "Roger Lucas" <roger@xxxxxxxxxxxxx>
Date: Thu, 11 Jan 2007 15:48:15 -0000
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 11 Jan 2007 07:48:29 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <716de6bb0701110723l1ead0cc0q44a0037eef7825ff@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <716de6bb0701110723l1ead0cc0q44a0037eef7825ff@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acc1lJx57CVNNli9Q/qdqOFVbX7XxwAAeiJQ

Hi Scott,

 

This has come up on the mailing list a few times before, and I have seen it on my system too.

 

It happened to me because I had no limit on the Dom0 memory and this squeezed the memory available for the Xen hypervisor to use.  The solution (or at least, one of the solutions) is to modify the GRUB / LILO invocation to include a memory allowance for Dom0.  Mine is:

 

title Xen 3.0 / XenLinux 2.6

kernel /boot/xen-3.gz dom0_mem=256M

module /boot/vmlinuz-2.6-xen root=/dev/hda1 ro

module /boot/initrd.img-2.6.16-xen
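
As a quick sanity check after the reboot (assuming the standard xm tools are installed), something like the following should show whether Xen picked up the option and how much memory the hypervisor still has free:

# The Xen boot output should include its command line with dom0_mem=256M
xm dmesg | grep -i "command line"

# total_memory / free_memory as the hypervisor sees them
xm info | grep -i memory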

 

Limiting Dom0 like this stops it from using all of the unused memory and keeps the rest free for the Xen hypervisor:

 

xentop - 15:47:11   Xen 3.0.2-2

5 domains: 1 running, 3 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown

Mem: 2088508k total, 1359456k used, 729052k free    CPUs: 1 @ 2693MHz

      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) SSID

 bigserver --b---       5486    0.0     262012   12.5     270336      12.9     1    1   968744  3329318    0

  Domain-0 -----r      45781    0.0     283704   13.6   no limit       n/a     1    8  2298855  1518669    0

  harpseal --b---     118258    0.0     261976   12.5     270336      12.9     1    1   408774  2551772    0

   octopus ------       6784    0.0     261600   12.5     262144      12.6     1    3  4466856  4630781    0

 tarantula --b---      13707    0.0     261924   12.5     262144      12.6     1    1   535467  1643629    0

 

This stopped the problems immediately, but it does require a full reboot…
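
As an aside (I have not tried it on this box), dom0 can in principle also be ballooned down at runtime rather than at boot, along the lines of:

# Shrink Domain-0 to 256MB without a reboot (ballooning)
xm mem-set Domain-0 256

but the boot-time dom0_mem limit is what actually fixed it here.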

 

Hope this helps,

 

Roger

 


From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Scott Moe
Sent: 11 January 2007 15:23
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Memory squeeze in netback driver

 

I have had servers run Xen 2 without rebooting Dom0 for most of a year.

I am now working on a very similar machine with Xen 3.0.3 and Debian etch.

My network setup is slightly customized. I have three IPs and use network-route in xend-config.sxp. I specify vif-route as the vif script in the DomU config for two of my VMs.

I also create a bridge in /etc/network/interfaces. I have two other VMs that only interface to this bridge, and the VMs with their own IPs have a second vif on the bridge. I specify the vif-bridge script in the vif config for these interfaces.
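
For reference, the relevant pieces look roughly like this (the IP address and bridge name below are placeholders, not my real values):

# /etc/xen/xend-config.sxp
(network-script network-route)

# DomU config for a VM with its own IP plus a second vif on the internal bridge
vif = [ 'ip=192.0.2.10, script=vif-route',
        'bridge=xenbr1, script=vif-bridge' ]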

Everything boots and runs great, for several hours. Yesterday, for example, the machine was rebooted in the morning. It ran for 11 hours without errors, then the kern.log started to fill up with this message:
xen-net: memory squeeze in netback driver
This went on for 13 hours at a rate of something like 5 to 10 messages per second. Then the log shows each vif on the bridge entering the disabled state. The vifs with dedicated IPs were not mentioned in the log, but this morning only the dom0 IP was responsive. When I ssh into dom0 and try to ping the IPs routed to the VMs, I get "no route to host". I looked at the routing table and everything was listed correctly.
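
If it helps, the next time it happens I can quantify it with something like this (assuming Debian's default kernel log location):

# How many squeeze messages have been logged so far
grep -c 'memory squeeze in netback' /var/log/kern.log

# How much memory the hypervisor reports as free
xm info | grep free_memory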

I hope someone has seen a similar problem and can give me some insight. My experience is limited, but to me this smells like a memory leak. Perhaps there are updates that fix this problem but have not made it into the Debian etch packages.

Thanks,
Scott Moe

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users