xen-devel

Re: [Xen-devel] Hypervisor crash(!) on xl cpupool-numa-split

To: Andre Przywara <andre.przywara@xxxxxxx>
Subject: Re: [Xen-devel] Hypervisor crash(!) on xl cpupool-numa-split
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 11 Feb 2011 07:17:28 +0100
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Diestelhorst, Stephan" <Stephan.Diestelhorst@xxxxxxx>
Delivery-date: Thu, 10 Feb 2011 22:18:26 -0800
In-reply-to: <4D53F3BC.4070807@xxxxxxx>
Organization: Fujitsu Technology Solutions
On 02/10/11 15:18, Andre Przywara wrote:
> Andre Przywara wrote:
>> On 02/10/2011 07:42 AM, Juergen Gross wrote:
>>> On 02/09/11 15:21, Juergen Gross wrote:
>>>> Andre, George,
>>>>
>>>> What seems interesting: I think the problem always occurred when a
>>>> new cpupool was created and the first cpu was moved to it.
>>>>
>>>> I think my previous assumption regarding the master_ticker was not
>>>> too bad. Somehow the master_ticker of the new cpupool becomes
>>>> active before the scheduler is properly initialized. This can
>>>> happen if enough time passes between alloc_pdata for the cpu to be
>>>> moved and the critical section in schedule_cpu_switch().
>>>>
>>>> The solution should be to activate the timers only once the
>>>> scheduler is ready for them.
>>>>
>>>> George, do you think the master_ticker should be stopped in
>>>> suspend_ticker as well? I still see potential problems for
>>>> entering deep C-states. I think I'll prepare a patch which keeps
>>>> the master_ticker active for the C-state case and migrates it for
>>>> the schedule_cpu_switch() case.
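[For illustration, a minimal self-contained sketch of the guard being
proposed above. The names here (struct pool, scheduler_ready,
start_master_ticker) are made up, not the real Xen symbols; the only
point is that the accounting timer must not be armed before
schedule_cpu_switch() has left its critical section.]

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative stand-in for a cpupool; the real state lives in
     * xen/common/cpupool.c and the scheduler's private data. */
    struct pool {
        bool scheduler_ready;   /* hypothetical flag, set at the end of
                                 * schedule_cpu_switch()'s critical
                                 * section */
    };

    static void start_master_ticker(struct pool *p)
    {
        (void)p;
        /* stands in for set_timer() on the pool's accounting timer */
        printf("master ticker armed\n");
    }

    /* Without this check, the ticker can fire between alloc_pdata()
     * for the incoming cpu and the critical section in
     * schedule_cpu_switch(), and then operates on half-initialized
     * scheduler state. */
    static void maybe_start_master_ticker(struct pool *p)
    {
        if ( !p->scheduler_ready )
            return;             /* too early: arming now is the race */
        start_master_ticker(p);
    }

    int main(void)
    {
        struct pool p = { .scheduler_ready = false };
        maybe_start_master_ticker(&p);  /* no-op: not ready yet */
        p.scheduler_ready = true;       /* cpu switch has finished */
        maybe_start_master_ticker(&p);  /* now the timer may be armed */
        return 0;
    }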
>>> Okay, here is a patch for this. It ran on my 4-core machine without
>>> any problems. Andre, could you give it a try?
>> I did, but unfortunately it crashed as always. I tried twice and
>> made sure I booted the right kernel. Sorry.
>> The idea of a race between the timer and the state change sounded
>> very appealing; actually, that was suspicious to me from the
>> beginning.
>>
>> I will add some code to dump the state of all cpupools at the BUG_ON
>> to see which situation we are in when the bug triggers.
> OK, here is a first try of this. The patch iterates over all CPU
> pools and outputs some data if the BUG_ON condition
> ((sdom->weight * sdom->active_vcpu_count) > weight_left) triggers:
>
> (XEN) CPU pool #0: 1 domains (SMP Credit Scheduler), mask: fffffffc003f
> (XEN) CPU pool #1: 0 domains (SMP Credit Scheduler), mask: fc0
> (XEN) CPU pool #2: 0 domains (SMP Credit Scheduler), mask: 1000
> (XEN) Xen BUG at sched_credit.c:1010
> ....
>
> The masks look proper (6 cores per node); the bug triggers when the
> first CPU is about to be(?) inserted.
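[A dump like the one quoted could come from a loop of roughly this
shape. The types and field names below are simplified stand-ins, not
the real struct cpupool or the credit scheduler's private data.]

    #include <inttypes.h>
    #include <stdio.h>

    /* Simplified stand-in for struct cpupool. */
    struct pool {
        int      id;      /* pool number */
        int      n_dom;   /* domains assigned to the pool */
        uint64_t cpus;    /* valid-cpu mask, truncated to 64 bits */
    };

    /* Print one line per pool, mirroring the quoted output, before
     * hitting the BUG_ON in the credit scheduler's accounting. */
    static void dump_cpupools(const struct pool *pools, int n)
    {
        for ( int i = 0; i < n; i++ )
            printf("(XEN) CPU pool #%d: %d domains (SMP Credit "
                   "Scheduler), mask: %" PRIx64 "\n",
                   pools[i].id, pools[i].n_dom, pools[i].cpus);
    }

    int main(void)
    {
        struct pool pools[] = {
            { 0, 1, 0xfffffffc003fULL },
            { 1, 0, 0xfc0ULL },
            { 2, 0, 0x1000ULL },
        };
        dump_cpupools(pools, 3);
        return 0;
    }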

Sure? I'm missing the cpu with mask 2000.
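[Decoding the quoted masks with a quick throwaway helper, not part of
any patch: pool 0 covers CPUs 0-5 and 18-47, pool 1 covers CPUs 6-11,
pool 2 only CPU 12, and bit 13 (mask 2000) is set in none of them.]

    #include <stdint.h>
    #include <stdio.h>

    /* List the cpu indices set in a (64-bit-truncated) pool mask. */
    static void print_mask(const char *name, uint64_t mask)
    {
        printf("%s:", name);
        for ( int cpu = 0; cpu < 64; cpu++ )
            if ( mask & (1ULL << cpu) )
                printf(" %d", cpu);
        printf("\n");
    }

    int main(void)
    {
        print_mask("pool 0", 0xfffffffc003fULL); /* cpus 0-5, 18-47 */
        print_mask("pool 1", 0xfc0ULL);          /* cpus 6-11 */
        print_mask("pool 2", 0x1000ULL);         /* cpu 12 only */
        /* cpus 13-17 are in no pool; 13 (mask 2000) is the one
         * flagged above, presumably the cpu caught mid-move. */
        return 0;
    }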
I'll try to reproduce the problem on a larger machine here (24 cores,
4 NUMA nodes).
Andre, can you give me your xen boot parameters? Which xen changeset
are you running, and do you have any additional patches in use?


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

