
Re: [PATCH] x86/flushtlb: remove flush_area check on system state


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 23 May 2022 18:24:48 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 24 May 2022 07:42:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, May 23, 2022 at 05:13:43PM +0200, Jan Beulich wrote:
> On 23.05.2022 16:37, Roger Pau Monné wrote:
> > On Wed, May 18, 2022 at 10:49:22AM +0200, Jan Beulich wrote:
> >> On 16.05.2022 16:31, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/include/asm/flushtlb.h
> >>> +++ b/xen/arch/x86/include/asm/flushtlb.h
> >>> @@ -146,7 +146,8 @@ void flush_area_mask(const cpumask_t *, const void *va, unsigned int flags);
> >>>  #define flush_mask(mask, flags) flush_area_mask(mask, NULL, flags)
> >>>  
> >>>  /* Flush all CPUs' TLBs/caches */
> >>> -#define flush_area_all(va, flags) flush_area_mask(&cpu_online_map, va, flags)
> >>> +#define flush_area(va, flags) \
> >>> +    flush_area_mask(&cpu_online_map, (const void *)(va), flags)
> >>
> >> I have to admit that I would prefer if we kept the "_all" name suffix,
> >> to continue to clearly express the scope of the flush. I'm also not
> >> really happy to see the cast being added globally now.
> > 
> > But there were no direct callers of flush_area_all(), so the name was
> > just relevant for its use in flush_area().  With that now gone I
> > don't see a need for a flush_area_all(), as flush_area_mask() is more
> > appropriate.
> 
> And flush_area_all() is shorthand for flush_area_mask(&cpu_online_map, ...).
> That's more clearly distinguished from flush_area_local() than simply
> flush_area(); the latter was okay-ish with its mm.c-only exposure, but imo
> isn't anymore when put in a header.

OK, so you would prefer that I switch the callers to flush_area_all()
and drop flush_area() altogether.  I can do that.
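
Just so we're talking about the same thing, a rough sketch of what the
header would then contain (whether the cast of va should stay is a
separate question):

/* Flush all CPUs' TLBs/caches */
#define flush_area_all(va, flags) \
    flush_area_mask(&cpu_online_map, (const void *)(va), flags)

with the former flush_area() callers switched to flush_area_all().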

> >>> --- a/xen/arch/x86/smp.c
> >>> +++ b/xen/arch/x86/smp.c
> >>> @@ -262,7 +262,8 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
> >>>  {
> >>>      unsigned int cpu = smp_processor_id();
> >>>  
> >>> -    ASSERT(local_irq_is_enabled());
> >>> +    /* Local flushes can be performed with interrupts disabled. */
> >>> +    ASSERT(local_irq_is_enabled() || cpumask_equal(mask, cpumask_of(cpu)));
> >>
> >> Further down we use cpumask_subset(mask, cpumask_of(cpu)),
> >> apparently to also cover the case where mask is empty. I think
> >> you want to do so here as well.
> > 
> > Hm, yes.  I guess that's cheaper than adding an extra:
> > 
> > if ( cpumask_empty(mask) )
> >     return;
> > 
> > check at the start of the function.
> > 
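
So the assertion would end up something like this (sketch only), which
also covers the empty-mask case:

    /* Local flushes can be performed with interrupts disabled. */
    ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));
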
> >>>      if ( (flags & ~(FLUSH_VCPU_STATE | FLUSH_ORDER_MASK)) &&
> >>>           cpumask_test_cpu(cpu, mask) )
> >>
> >> I suppose we want a further precaution here: Despite the
> >> !cpumask_subset(mask, cpumask_of(cpu)) below I think we want to
> >> extend what c64bf2d2a625 ("x86: make CPU state flush requests
> >> explicit") and later changes (isolating uses of FLUSH_VCPU_STATE
> >> from other FLUSH_*) did and exclude the use of FLUSH_VCPU_STATE
> >> for the local CPU altogether.
> > 
> > If we really want to exclude the use of FLUSH_VCPU_STATE for the local
> > CPU, we might wish to add this as a separate ASSERT, so that such
> > checking doesn't depend on !local_irq_is_enabled():
> > 
> > ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));
> > ASSERT(!cpumask_subset(mask, cpumask_of(cpu)) || !(flags & FLUSH_VCPU_STATE));
> > 
> > 
> >> That's because if such somehow made
> >> it into the conditional below here, it would still involve an IPI.
> > 
> > Sorry, I'm confused by this: if the mask is empty there should be no
> > IPI involved at all?  And we shouldn't even get into the second
> > conditional in the function.
> 
> Should perhaps have made more explicit that "somehow" means a hypothetical
> way, perhaps even as a result of some further breakage somewhere.

Oh, OK, then I wasn't so confused after all :).  Given the absence of
further comments I assume you are fine with the addition of a separate
ASSERT to cover the usage of FLUSH_VCPU_STATE.
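
IOW, the end result would be along these lines (sketch only, exact
comment wording to be settled):

    /* Local flushes can be performed with interrupts disabled. */
    ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));
    /* FLUSH_VCPU_STATE must not be requested for (just) the local CPU. */
    ASSERT(!cpumask_subset(mask, cpumask_of(cpu)) || !(flags & FLUSH_VCPU_STATE));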

Thanks, Roger.
