
Re: [PATCH] x86/flushtlb: remove flush_area check on system state


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 24 May 2022 12:48:51 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 24 May 2022 10:49:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, May 23, 2022 at 06:24:48PM +0200, Roger Pau Monné wrote:
> On Mon, May 23, 2022 at 05:13:43PM +0200, Jan Beulich wrote:
> > On 23.05.2022 16:37, Roger Pau Monné wrote:
> > > On Wed, May 18, 2022 at 10:49:22AM +0200, Jan Beulich wrote:
> > >> On 16.05.2022 16:31, Roger Pau Monne wrote:
> > >>> --- a/xen/arch/x86/smp.c
> > >>> +++ b/xen/arch/x86/smp.c
> > >>> @@ -262,7 +262,8 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
> > >>>  {
> > >>>      unsigned int cpu = smp_processor_id();
> > >>>  
> > >>> -    ASSERT(local_irq_is_enabled());
> > >>> +    /* Local flushes can be performed with interrupts disabled. */
> > >>> +    ASSERT(local_irq_is_enabled() || cpumask_equal(mask, cpumask_of(cpu)));
> > >>
> > >> Further down we use cpumask_subset(mask, cpumask_of(cpu)),
> > >> apparently to also cover the case where mask is empty. I think
> > >> you want to do so here as well.
> > > 
> > > Hm, yes.  I guess that's cheaper than adding an extra:
> > > 
> > > if ( cpumask_empty(mask) )
> > >     return;
> > > 
> > > check at the start of the function.
> > > 
> > >>>      if ( (flags & ~(FLUSH_VCPU_STATE | FLUSH_ORDER_MASK)) &&
> > >>>           cpumask_test_cpu(cpu, mask) )
> > >>
> > >> I suppose we want a further precaution here: Despite the
> > >> !cpumask_subset(mask, cpumask_of(cpu)) below I think we want to
> > >> extend what c64bf2d2a625 ("x86: make CPU state flush requests
> > >> explicit") and later changes (isolating uses of FLUSH_VCPU_STATE
> > >> from other FLUSH_*) did and exclude the use of FLUSH_VCPU_STATE
> > >> for the local CPU altogether.
> > > 
> > > If we really want to exclude the use of FLUSH_VCPU_STATE for the local
> > > CPU, we might wish to add this as a separate ASSERT, so that such
> > > checking doesn't depend on !local_irq_is_enabled():
> > > 
> > > ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));
> > > ASSERT(!cpumask_subset(mask, cpumask_of(cpu)) || !(flags & FLUSH_VCPU_STATE));

Actually, it would seem even more accurate to use:

ASSERT(!cpumask_test_cpu(cpu, mask) || !(flags & FLUSH_VCPU_STATE));
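
For reference, a rough sketch of how the top of flush_area_mask() could
then look with both checks in place (illustrative only; note that
cpumask_subset(mask, cpumask_of(cpu)) also holds for an empty mask, so
no separate cpumask_empty(mask) check is needed):

void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
{
    unsigned int cpu = smp_processor_id();

    /* Local (or empty) flushes can be performed with interrupts disabled. */
    ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));

    /* FLUSH_VCPU_STATE must not be requested for the local CPU. */
    ASSERT(!cpumask_test_cpu(cpu, mask) || !(flags & FLUSH_VCPU_STATE));

    if ( (flags & ~(FLUSH_VCPU_STATE | FLUSH_ORDER_MASK)) &&
         cpumask_test_cpu(cpu, mask) )
    ...

with the rest of the function unchanged.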

Thanks, Roger.



 

