RE: [Xen-devel] Maximum number of domains and NR_IRQS

To: "Keir Fraser" <keir@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Maximum number of domains and NR_IRQS
From: "Carb, Brian A" <Brian.Carb@xxxxxxxxxx>
Date: Mon, 11 Dec 2006 16:06:51 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 11 Dec 2006 13:07:01 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C19DE8D1.5AE8%keir@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AccaFYwFy8r1fHrHTUe4UDnJ0i4NHgAAeZuMANMhU4A=
Thread-topic: [Xen-devel] Maximum number of domains and NR_IRQS
Keir,
 
We pulled and rebuilt a new xen-unstable (changeset 12895), which includes your patch 12790 for the out-of-IRQs condition. Now we are able to start 121 domUs. When we start the 122nd, dom0 does not crash; instead, the 'xm create' fails with the error:
 
  Error: Device 0 (vif) could not be connected. Hotplug scripts not working.

and in the serial console we see the following error:
 
Unable to handle kernel paging request at 0000000380435c50 RIP:
<ffffffff8023f634>{unbind_from_irq+35}
PGD 15c3c067 PUD 0
Oops: 0000 [2] SMP
CPU 7
Modules linked in: xt_tcpudp xt_physdev iptable_filter ip_tables x_tables bridge dm_round_robin dm_emc ipv6 nfs lockd nfs_acl sunrpc dm_multipath button battery ac dm_mod e1000 ext3 jbd reiserfs fan thermal processor sg lpfc scsi_transport_fc mptsas mptscsih mptbase scsi_transport_sas piix sd_mod scsi_mod
Pid: 36, comm: xenwatch Tainted: GF     2.6.16.33-xen #1
RIP: e030:[<ffffffff8023f634>] <ffffffff8023f634>{unbind_from_irq+35}
RSP: e02b:ffff880000b09d48  EFLAGS: 00010246
RAX: 00000000ffffffe4 RBX: 000000001f4958c8 RCX: ffffffff80310d06
RDX: 0000000000000000 RSI: ffffffff80248b4b RDI: ffffffff8036a0c0
RBP: 00000000ffffffe4 R08: ffff880016e6a368 R09: ffff88001cf2ebc0
R10: 0000000000000007 R11: 0000000000000020 R12: ffffffffffffffe4
R13: ffff880016e6a368 R14: ffffffff80310d06 R15: 0000000000000000
FS:  00002b149f6efc90(0000) GS:ffffffff803ae380(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000
Process xenwatch (pid: 36, threadinfo ffff880000b08000, task ffff880000b40820)
Stack: 0000000000000008 ffff880016e6a368 0000000000000000 00000000fffffff4
       ffffffff80310d06 00000000ffffffe4 00000000000001fc 00000000000001fc
       00000000ffffffe4 00000000ffffffea
Call Trace: <ffffffff8023fa80>{bind_evtchn_to_irqhandler+144}
       <ffffffff80248b4b>{blkif_be_int+0} <ffffffff801431ee>{keventd_create_kthread+0}
       <ffffffff8024a3ac>{blkif_map+425} <ffffffff80249b4d>{frontend_changed+207}
       <ffffffff80245a66>{xenwatch_thread+0} <ffffffff8024507a>{xenwatch_handle_callback+21}
       <ffffffff80245ba7>{xenwatch_thread+321} <ffffffff801431ee>{keventd_create_kthread+0}
       <ffffffff801435f5>{autoremove_wake_function+0} <ffffffff801431ee>{keventd_create_kthread+0}
       <ffffffff80245a66>{xenwatch_thread+0} <ffffffff801434bb>{kthread+212}
       <ffffffff8010bdee>{child_rip+8} <ffffffff801431ee>{keventd_create_kthread+0}
       <ffffffff801433e7>{kthread+0} <ffffffff8010bde6>{child_rip+0}
 
Code: 8b 14 85 c0 5c 43 80 ff ca 85 d2 89 14 85 c0 5c 43 80 0f 85
RIP <ffffffff8023f634>{unbind_from_irq+35} RSP <ffff880000b09d48>
CR2: 0000000380435c50
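
One possible reading of this trace (an assumption on my part, not something stated above): RAX and R12 both hold 0xffffffe4, i.e. -28, which could be -ENOSPC from bind_evtchn_to_irq() once the dynamic IRQ range is exhausted, with the cleanup path then treating that negative return value as a valid IRQ number when it reaches unbind_from_irq(). A minimal sketch of that failure pattern and the guard that would avoid it (simplified, hypothetical code, not a verbatim excerpt of the drivers/xen event-channel sources):

    /* Sketch only (dom0 kernel context, simplified and assumed). */
    #include <linux/interrupt.h>   /* request_irq(), irqreturn_t */

    extern int  bind_evtchn_to_irq(unsigned int evtchn);  /* may return -ENOSPC   */
    extern void unbind_from_irq(unsigned int irq);        /* indexes per-IRQ data */

    static int bind_evtchn_to_irqhandler_sketch(unsigned int evtchn,
                irqreturn_t (*handler)(int, void *, struct pt_regs *),
                unsigned long irqflags, const char *devname, void *dev_id)
    {
            int irq, retval;

            irq = bind_evtchn_to_irq(evtchn);
            if (irq < 0)
                    return irq;    /* without a guard like this, a -ENOSPC return
                                      flows into request_irq() and then into
                                      unbind_from_irq(), where using the negative
                                      value as a table index faults -- consistent
                                      with the oops in unbind_from_irq above      */

            retval = request_irq(irq, handler, irqflags, devname, dev_id);
            if (retval != 0) {
                    unbind_from_irq(irq);  /* only reached with a valid irq */
                    return retval;
            }
            return irq;
    }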

brian carb
unisys corporation - malvern, pa
brian.carb@xxxxxxxxxx

 


From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx]
Sent: Thursday, December 07, 2006 10:51 AM
To: Carb, Brian A; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Maximum number of domains and NR_IRQS

On 7/12/06 15:37, "Carb, Brian A" <Brian.Carb@xxxxxxxxxx> wrote:

We successfully start 118 domUs, but when we try to start the 119th, the system panics with the following messages:
  Kernel panic - not syncing: No available IRQ to bind to: increase NR_IRQS!
  (XEN) Domain 0 crashed: rebooting machine in 5 seconds.

The documentation in include/asm-x86_64/irq.h suggests that the value of NR_IRQS under x86_64 is limited to 256. In fact, when we rebuilt xen-unstable with NR_IRQS set to 768, the kernel panics on boot (see below).
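
(As a rough back-of-the-envelope check of why the limit bites at around this many guests; this is an estimate, not a figure from the thread. Each running domU typically has its vif and vbd backends in dom0 bound to an event channel, and each such binding consumes one dynamic IRQ, so:

    118 domUs x ~2 backend event channels (vif + vbd)   ~ 236
    + dom0's own bindings (per-vcpu timers/IPIs,
      xenstore, console, physical devices)              ~ a few dozen
    -----------------------------------------------------------------
    total                                               ~ 256+, around the ceiling quoted above)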


It’s not Xen’s NR_IRQS you should increase; only Linux’s.
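
For reference, a sketch of where the Linux-side limit tends to live in a Xen-patched kernel tree of that era. The macro names below follow the usual PIRQ/DYNIRQ split (e.g. under include/asm-x86_64/mach-xen/), but the exact file and default values are assumptions and should be checked in the tree being built:

    /* Illustrative only -- names follow the PIRQ/DYNIRQ convention of the
     * Xen-patched Linux headers; the values shown are assumed defaults.  */
    #define PIRQ_BASE       0
    #define NR_PIRQS        256   /* physical/GSI interrupts (assumed)            */

    #define DYNIRQ_BASE     (PIRQ_BASE + NR_PIRQS)
    #define NR_DYNIRQS      256   /* event-channel ("dynamic") IRQs (assumed);
                                     each backend vif/vbd binding in dom0 uses
                                     one, so this is the count to raise to run
                                     more domUs                                   */

    #define NR_IRQS         (NR_PIRQS + NR_DYNIRQS)  /* sizes dom0's per-IRQ tables */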

This out-of-IRQs condition shouldn’t crash the dom0 of course. I’ll look into that.

 -- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel