Re: [PATCH v2] x86/flushtlb: remove flush_area check on system state


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 25 May 2022 09:21:06 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 25 May 2022 07:21:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, May 25, 2022 at 08:02:17AM +0200, Jan Beulich wrote:
> On 24.05.2022 18:46, Roger Pau Monné wrote:
> > On Tue, May 24, 2022 at 05:27:35PM +0200, Jan Beulich wrote:
> >> On 24.05.2022 12:50, Roger Pau Monne wrote:
> >>> Booting with Shadow Stacks leads to the following assert on a debug
> >>> hypervisor:
> >>>
> >>> Assertion 'local_irq_is_enabled()' failed at arch/x86/smp.c:265
> >>> ----[ Xen-4.17.0-10.24-d  x86_64  debug=y  Not tainted ]----
> >>> CPU:    0
> >>> RIP:    e008:[<ffff82d040345300>] flush_area_mask+0x40/0x13e
> >>> [...]
> >>> Xen call trace:
> >>>    [<ffff82d040345300>] R flush_area_mask+0x40/0x13e
> >>>    [<ffff82d040338a40>] F modify_xen_mappings+0xc5/0x958
> >>>    [<ffff82d0404474f9>] F arch/x86/alternative.c#_alternative_instructions+0xb7/0xb9
> >>>    [<ffff82d0404476cc>] F alternative_branches+0xf/0x12
> >>>    [<ffff82d04044e37d>] F __start_xen+0x1ef4/0x2776
> >>>    [<ffff82d040203344>] F __high_start+0x94/0xa0
> >>>
> >>>
> >>> This is due to SYS_STATE_smp_boot being set before calling
> >>> alternative_branches(), and the flush in modify_xen_mappings() then
> >>> using flush_area_all() with interrupts disabled.  Note that
> >>> alternative_branches() is called before APs are started, so the flush
> >>> must be a local one (and indeed the cpumask passed to
> >>> flush_area_mask() just contains one CPU).
> >>>
> >>> Take the opportunity to simplify the logic a bit and introduce
> >>> flush_area_all() as an alias for flush_area_mask(&cpu_online_map...),
> >>
> >> This is now stale - you don't introduce flush_area_all() here.
> >> Sadly nothing is said to justify the addition of a cast there,
> >> which - as said before - I think is a little risky (as many
> >> casts are), and hence would imo better be avoided.
> > 
> > So prior to this change there are no direct callers to
> > flush_area_all(), and hence all callers use flush_area() which has the
> > cast.  Now that I remove flush_area() and modify callers to use
> > flush_area_all() directly it seems natural to also move the cast
> > there.  While I agree that having casts is not desirable, I wouldn't
> > consider this change as adding any; it merely moves them, so the
> > callers end up with the cast just like they did before.
> 
> I'd agree with all of this if the change was local to mm.c. As I'd
> like to see the macro in flushtlb.h left unchanged, did you consider
> retaining flush_area() as a wrapper in mm.c, reduced to merely
> invoking flush_area_all() with the cast added? That would also
> reduce the code churn of the patch.

Hm, yes, I didn't consider this, but could do.  I didn't want to keep
flush_area() globally, but adding it to mm.c only could be OK in order
to limit the scope of the cast.
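
Something like the below, I assume (an untested sketch; it relies on
flush_area_all() keeping its current declaration in flushtlb.h):

    /* Local wrapper in mm.c, limiting the (const void *) cast to this file. */
    #define flush_area(va, flags) flush_area_all((const void *)(va), flags)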

> >>> --- a/xen/arch/x86/smp.c
> >>> +++ b/xen/arch/x86/smp.c
> >>> @@ -262,7 +262,10 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
> >>>  {
> >>>      unsigned int cpu = smp_processor_id();
> >>>  
> >>> -    ASSERT(local_irq_is_enabled());
> >>> +    /* Local flushes can be performed with interrupts disabled. */
> >>> +    ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));
> >>> +    /* Exclude use of FLUSH_VCPU_STATE for the local CPU. */
> >>> +    ASSERT(!cpumask_test_cpu(cpu, mask) || !(flags & FLUSH_VCPU_STATE));
> >>
> >> What about FLUSH_FORCE_IPI? This won't work with IRQs off either,
> >> I'm afraid. Or wait - despite its name, that flag doesn't really
> >> force the use of an IPI; it's still constrained to remote
> >> requests. I think this wants mentioning in one of the comments,
> >> not least so that grep then also matches there (right now grep
> >> output gives the impression that the flag isn't consumed
> >> anywhere).
> > 
> > Would you be fine with adding:
> > 
> > Note that FLUSH_FORCE_IPI doesn't need to be handled explicitly, as
> > its main purpose is to prevent the usage of the hypervisor assisted
> > flush if available, not to force the sending of an IPI even for cases
> > where it won't be sent.
> 
> Hmm, yes, that's even more verbose than I would have expected it to
> be. Just one point: I'm not sure about "main" there. Is there really
> another purpose?

Right, I should remove "main".
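
I.e. the comment would then read:

    Note that FLUSH_FORCE_IPI doesn't need to be handled explicitly, as
    its purpose is to prevent the usage of the hypervisor assisted
    flush if available, not to force the sending of an IPI even for
    cases where it won't be sent.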

> Of course an alternative would be to rename the flag to properly
> express what it's for (e.g. FLUSH_NO_HV_ASSIST). This would then
> eliminate the need for a comment, afaic at least.

I think it's likely that we will also require this flag if we make use
of hardware assisted flushes in the future, and hence it's better to
keep the current name and avoid a rename later.

Whether the avoidance of sending the IPI is due to hardware or
hypervisor assistance is of no interest to the caller; it only cares
about forcing a real IPI to be sent to remote processors.
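
To illustrate, the annotation next to the flag definition in flushtlb.h
could be along the lines of the below (a rough sketch only; the flag's
actual value is left out here):

    /*
     * Force an IPI to be sent to remote CPUs, i.e. prevent the use of
     * an assisted (hypervisor or, in the future, hardware) flush even
     * when one is available.  Purely local flushes are unaffected, as
     * they never require an IPI.
     */
    #define FLUSH_FORCE_IPI  ...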

Thanks, Roger.
