
Re: [PATCH] x86: avoid wrong use of all-but-self IPI shorthand


  • To: Andrew Cooper <amc96@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 9 Dec 2021 08:42:16 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 09 Dec 2021 07:42:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 08.12.2021 15:16, Andrew Cooper wrote:
> On 08/12/2021 11:47, Jan Beulich wrote:
>> With "nosmp" I did observe a flood of "APIC error on CPU0: 04(04), Send
>> accept error" log messages on an AMD system. And rightly so - nothing
>> excludes the use of the shorthand in send_IPI_mask() in this case. Set
>> "unaccounted_cpus" to "true" also when command line restrictions are the
>> cause.
>>
>> Note that PV-shim mode is unaffected by this change, first and foremost
>> because "nosmp" and "maxcpus=" are ignored in this case.
>>
>> Fixes: 5500d265a2a8 ("x86/smp: use APIC ALLBUT destination shorthand when possible")
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Thanks.
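
To spell out the failure mode: the ALLBUT shorthand targets every
processor in the system, online or not, so it is only safe when no CPU
was left unbooted. Below is a minimal, self-contained sketch of that
check (not Xen code, and not the committed patch; all names in it are
illustrative):

/*
 * Illustrative only: with "nosmp" or "maxcpus=" some CPUs are never
 * brought online, yet an all-but-self shorthand IPI still reaches
 * them, producing "Send accept error".  Hence the shorthand must be
 * avoided whenever CPUs are unaccounted for.
 */
#include <stdbool.h>
#include <stdio.h>

struct sys {
    unsigned int present_cpus;  /* CPUs physically present */
    unsigned int booted_cpus;   /* CPUs actually brought online */
};

static bool can_use_allbut_shorthand(const struct sys *s)
{
    bool unaccounted_cpus = s->booted_cpus < s->present_cpus;

    return !unaccounted_cpus;
}

int main(void)
{
    struct sys normal = { .present_cpus = 8, .booted_cpus = 8 };
    struct sys nosmp  = { .present_cpus = 8, .booted_cpus = 1 };

    printf("normal: ALLBUT %s\n",
           can_use_allbut_shorthand(&normal) ? "safe" : "unsafe");
    printf("nosmp:  ALLBUT %s\n",
           can_use_allbut_shorthand(&nosmp) ? "safe" : "unsafe");
    return 0;
}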

>> ---
>> While in "nosmp" mode it's probably benign that we switch to the bigsmp
>> APIC driver simply because there are more than 8 physical CPUs, I
>> suppose that's inefficient when "maxcpus=" with a value between 2 and 8
>> (inclusive) is in use. The question is whether that's worth finding a
>> solution for.
> 
> Honestly, the concept of "nosmp" needs deleting.  We inherited it from
> Linux and it wasn't terribly appropriate even back then.
> 
> Nowadays, even if we happen to boot with 1 CPU, there are normal things
> we talk to (the IOMMUs most obviously) which are SMP-like.
> 
> 
> None of these command line restricted settings can be used in
> production, because neither Intel nor AMD support them, and both
> require us to boot all logical processors.  Everything playing in
> this area is a maintenance burden only.

But you realize that "nosmp" (nowadays at least) is merely a shorthand for
"maxcpus=1"? I don't think you mean to suggest deleting that option too?
What we did remove long ago, matching what you say, was CONFIG_SMP.
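
A compilable sketch of that equivalence (illustrative only; the real
option handling in Xen's setup.c is different plumbing):

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

static bool opt_nosmp;                       /* "nosmp" given? */
static unsigned int opt_maxcpus = UINT_MAX;  /* "maxcpus=" value, if any */

/* "nosmp" simply behaves like "maxcpus=1". */
static unsigned int effective_cpu_limit(void)
{
    return opt_nosmp ? 1 : opt_maxcpus;
}

int main(void)
{
    opt_nosmp = true;
    printf("nosmp -> limit %u\n", effective_cpu_limit()); /* prints 1 */
    return 0;
}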

One aspect of my consideration, which I realize only now: we'd then have
a way to test "flat" mode even on larger systems. This may be relevant
with fewer and fewer systems having no more than 8 CPUs (threads), and
hence that mode probably not seeing much testing anymore.
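
For reference, the 8-CPU boundary stems from xAPIC flat logical mode,
whose logical destination register is an 8-bit bitmap. A sketch of the
resulting driver choice (illustrative, not the actual genapic code):

#include <stdio.h>

#define FLAT_MODE_MAX_CPUS 8  /* 8-bit LDR bitmap in xAPIC flat mode */

static const char *pick_apic_driver(unsigned int nr_cpus)
{
    return nr_cpus <= FLAT_MODE_MAX_CPUS ? "flat" : "bigsmp";
}

int main(void)
{
    printf("4 CPUs  -> %s\n", pick_apic_driver(4));   /* flat */
    printf("32 CPUs -> %s\n", pick_apic_driver(32));  /* bigsmp */
    return 0;
}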

Jan
