
Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing the CPU


  • To: Julien Grall <julien@xxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Fri, 24 Jun 2022 10:01:09 +0000
  • Accept-language: en-GB, en-US
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Julien Grall <jgrall@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Fri, 24 Jun 2022 10:01:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing the CPU

Hi,

> On 24 Jun 2022, at 10:31, Bertrand Marquis <Bertrand.Marquis@xxxxxxx> wrote:
> 
> Hi Julien,
> 
> [OFFLIST]
> 
>> On 14 Jun 2022, at 10:41, Julien Grall <julien@xxxxxxx> wrote:
>> 
>> From: Julien Grall <jgrall@xxxxxxxxxx>
>> 
>> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
>> xmalloc()" extended the checks in _xmalloc() to catch any use of the
>> helpers from context with interrupts disabled.
>> 
>> Unfortunately, the rule is not followed when initializing the per-CPU
>> IRQs:
>> 
>> (XEN) Xen call trace:
>> (XEN) [<002389f4>] _xmalloc+0xfc/0x314 (PC)
>> (XEN) [<00000000>] 00000000 (LR)
>> (XEN) [<0021a7c4>] init_one_irq_desc+0x48/0xd0
>> (XEN) [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
>> (XEN) [<00280834>] init_secondary_IRQ+0x10/0x2c
>> (XEN) [<00288fa4>] start_secondary+0x194/0x274
>> (XEN) [<40010170>] 40010170
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 2:
>> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
>> (XEN) ****************************************
>> 
>> This is happening because zalloc_cpumask_var() may allocate memory
>> if NR_CPUS is > 2 * sizeof(unsigned long).
>> 
>> Avoid the problem by allocating the per-CPU IRQs while preparing the
>> CPU.
>> 
>> This also has the benefit of removing a BUG_ON() from the secondary CPU
>> code.
>> 
>> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
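
For context on the failure above, here is a minimal standalone sketch (not
the actual Xen sources; the names and the exact threshold are assumptions
drawn from the commit message and the assertion quoted in the panic) of why
zalloc_cpumask_var() can end up in the allocator:

    /* Standalone illustration; compiles with a plain C compiler. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_CPUS        128                      /* example build-time value */
    #define BITS_PER_LONG  (8 * (int)sizeof(long))  /* 32 on arm32, 64 on arm64 */

    int main(void)
    {
        /*
         * Small configurations: the CPU mask fits in a couple of unsigned
         * longs embedded in the enclosing object, so zalloc_cpumask_var()
         * only clears it and never touches the heap.
         *
         * Large configurations: the mask no longer fits inline and must be
         * backed by heap memory, so zalloc_cpumask_var() reaches
         * _xmalloc(), whose '!in_irq() && (local_irq_is_enabled() || ...)'
         * assertion fires on a secondary CPU that still has interrupts
         * masked -- the panic shown above.
         */
        bool needs_heap = NR_CPUS > 2 * BITS_PER_LONG;

        printf("NR_CPUS=%d -> zalloc_cpumask_var() %s\n", NR_CPUS,
               needs_heap ? "allocates (unsafe with IRQs disabled)"
                          : "uses inline storage (no allocation)");
        return 0;
    }

Moving the per-CPU IRQ initialisation to the point where the boot CPU
prepares the new CPU, as the patch title describes, lets the allocation run
in a context where interrupts are enabled, so the assertion no longer
triggers.
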
> 
> I still have issues after applying this patch on qemu-arm32:
> 
> (XEN) CPU0: Guest atomics will try 1 times before pausing the domain
> (XEN) Bringing up CPU1
> (XEN) CPU1: Guest atomics will try 1 times before pausing the domain
> (XEN) Assertion 'test_bit(_IRQ_DISABLED, &desc->status)' failed at arch/arm/gic.c:124
> (XEN) ----[ Xen-4.17-unstable arm32 debug=y Not tainted ]----
> (XEN) CPU: 1
> (XEN) PC: 0026f134 gic_route_irq_to_xen+0xa4/0xb0
> (XEN) CPSR: 400001da MODE:Hypervisor
> (XEN) R0: 00000120 R1: 000000a0 R2: 40002538 R3: 00000000
> (XEN) R4: 40004f00 R5: 000000a0 R6: 40002538 R7: 8000015a
> (XEN) R8: 00000000 R9: 40004f14 R10:3fe10000 R11:43fefeec R12:40002ff8
> (XEN) HYP: SP: 43fefed4 LR: 0026f0b8
> (XEN)
> (XEN) VTCR_EL2: 00000000
> (XEN) VTTBR_EL2: 0000000000000000
> (XEN)
> (XEN) SCTLR_EL2: 30cd187f
> (XEN) HCR_EL2: 00000038
> (XEN) TTBR0_EL2: 00000000bfffa000
> (XEN)
> (XEN) ESR_EL2: 00000000
> (XEN) HPFAR_EL2: 00000000
> (XEN) HDFAR: 00000000
> (XEN) HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=43fefed4:
> (XEN) 00000000 40004f00 00000000 40002538 8000015a 43feff0c 00272a4c 40002538
> (XEN) 002a47c4 00000019 00000000 0026ee28 40010000 43feff2c 00272b30 00309298
> (XEN) 00000001 0033b248 00000001 00000000 40010000 43feff3c 0026f7ac 00000000
> (XEN) 00201828 43feff54 0027ac3c bfffa000 00000000 00000000 00000001 00000000
> (XEN) 400100c0 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000
> (XEN) Xen call trace:
> (XEN) [<0026f134>] gic_route_irq_to_xen+0xa4/0xb0 (PC)
> (XEN) [<0026f0b8>] gic_route_irq_to_xen+0x28/0xb0 (LR)
> (XEN) [<00272a4c>] setup_irq+0x104/0x178
> (XEN) [<00272b30>] request_irq+0x70/0xb4
> (XEN) [<0026f7ac>] init_maintenance_interrupt+0x40/0x5c
> (XEN) [<0027ac3c>] start_secondary+0x1e8/0x270
> (XEN) [<400100c0>] 400100c0
> 
> Just wanted to flag this before you push the patch out.
> 
> I will investigate further and come back to you.

On my first run, pwclient did not apply the whole patch, only the smpboot part.
Re-running it applied the patch correctly and my tests now pass, so:

Reviewed-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
Tested-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>

Cheers
Bertrand


> 
> Cheers
> Bertrand




 

