
[Xen-devel] [PATCH 00/12] Convert cpu_up/down to device_online/offline



Using cpu_up/down directly to bring cpus online/offline loses synchronization
with sysfs and could suffer from a race similar to what is described in
commit a6717c01ddc2 ("powerpc/rtas: use device model APIs and serialization
during LPM").

cpu_up/down seem to be more of an internal implementation detail for the cpu
subsystem to use to boot cpus, perform suspend/resume and do low level hotplug
operations. Users outside of the cpu subsystem would be better off using the
device core API to bring a cpu online/offline, which is the interface already
used to hotplug memory and other system devices.
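For illustration, a minimal sketch of the conversion pattern, assuming a
hypothetical caller that previously called cpu_up()/cpu_down() directly (the
individual patches adapt each user in place rather than adding a helper like
this):

	#include <linux/cpu.h>
	#include <linux/device.h>
	#include <linux/errno.h>

	static int example_online_cpu(unsigned int cpu)
	{
		struct device *dev = get_cpu_device(cpu);
		int ret;

		if (!dev)
			return -ENODEV;

		/* Serialize against sysfs and keep the online attribute in sync */
		lock_device_hotplug();
		ret = device_online(dev);	/* was: cpu_up(cpu) */
		unlock_device_hotplug();

		return ret;
	}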

Several users have already migrated to the device core API; this series
converts the remaining users and, at the end, hides cpu_up/down so they are
internal to the cpu subsystem.

I still need to update the documentation to remove references to cpu_up/down
and advocate for device_online/offline instead if this series makes its way
through.

I noticed this problem while working on a hack to disable offlining
a particular CPU: setting the offline_disabled attribute in the device struct
isn't enough because users can easily bypass the device core. While my hack
isn't a valid use case, it did highlight the inconsistency in the way cpus are
onlined/offlined, and this series hopefully improves on that.

The first 6 patches fix arch users.

The next 5 patches fix generic code users, in particular creating a new
special exported API for the device core to use instead of cpu_up/down; a
rough sketch of the idea follows. Maybe we can do something more restrictive
than that.
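The sketch below shows the kind of wrapper I have in mind for the device core;
the name cpu_device_up and its exact shape are only illustrative, not
necessarily what the patches end up with:

	#include <linux/cpu.h>
	#include <linux/device.h>

	/*
	 * Meant to be called from the device core's online callback only;
	 * everyone else is expected to go through device_online/offline().
	 */
	int cpu_device_up(struct device *dev)
	{
		return cpu_up(dev->id);
	}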

The last patch removes cpu_up/down from cpu.h and unexports the functions.

In some cases where the use of cpu_up/down seemed legitimate, I encapsulated
the logic in a higher-level, special-purpose function and converted the code
to use that instead; a sketch of one such case follows.
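For example, a sketch of the arm64 hibernate case from the first patch, where
the cpu we hibernated on must be brought up before resuming; the function name
and messages below are illustrative, not a quote of the patch:

	#include <linux/cpu.h>
	#include <linux/printk.h>

	/* Bring the cpu we hibernated on back up if it has gone offline. */
	int bringup_hibernate_cpu(unsigned int sleep_cpu)
	{
		int ret = 0;

		if (!cpu_online(sleep_cpu)) {
			pr_info("Hibernated on an offline CPU, bringing it up\n");
			ret = cpu_up(sleep_cpu);
			if (ret)
				pr_err("Failed to bring hibernate CPU up\n");
		}

		return ret;
	}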

I ran the rcu torture, lock torture and psci checker tests and noticed no
problems. I performed build tests on all affected arches except parisc.

Hopefully I got the CC list right for all the patches. Apologies in advance if
some people who should have been CCed were omitted from some patches.

CC: Armijn Hemel <armijn@xxxxxxxxxx>
CC: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
CC: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
CC: Borislav Petkov <bp@xxxxxxxxx>
CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
CC: Catalin Marinas <catalin.marinas@xxxxxxx>
CC: Christophe Leroy <christophe.leroy@xxxxxx>
CC: Daniel Lezcano <daniel.lezcano@xxxxxxxxxx>
CC: Davidlohr Bueso <dave@xxxxxxxxxxxx>
CC: "David S. Miller" <davem@xxxxxxxxxxxxx>
CC: Eiichi Tsukata <devel@xxxxxxxxxxxx>
CC: Enrico Weigelt <info@xxxxxxxxx>
CC: Fenghua Yu <fenghua.yu@xxxxxxxxx>
CC: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
CC: Helge Deller <deller@xxxxxx>
CC: "H. Peter Anvin" <hpa@xxxxxxxxx>
CC: Ingo Molnar <mingo@xxxxxxxxxx>
CC: "James E.J. Bottomley" <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx>
CC: James Morse <james.morse@xxxxxxx>
CC: Jiri Kosina <jkosina@xxxxxxx>
CC: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
CC: Josh Triplett <josh@xxxxxxxxxxxxxxxx>
CC: Juergen Gross <jgross@xxxxxxxx>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx>
CC: Mark Rutland <mark.rutland@xxxxxxx>
CC: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
CC: Nadav Amit <namit@xxxxxxxxxx>
CC: Nicholas Piggin <npiggin@xxxxxxxxx>
CC: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
CC: Paul Mackerras <paulus@xxxxxxxxx>
CC: Pavankumar Kondeti <pkondeti@xxxxxxxxxxxxxx>
CC: "Peter Zijlstra (Intel)" <peterz@xxxxxxxxxxxxx>
CC: "Rafael J. Wysocki" <rafael@xxxxxxxxxx>
CC: Ram Pai <linuxram@xxxxxxxxxx>
CC: Richard Fontana <rfontana@xxxxxxxxxx>
CC: Sakari Ailus <sakari.ailus@xxxxxxxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Steve Capper <steve.capper@xxxxxxx>
CC: Thiago Jung Bauermann <bauerman@xxxxxxxxxxxxx>
CC: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CC: Tony Luck <tony.luck@xxxxxxxxx>
CC: Will Deacon <will@xxxxxxxxxx>
CC: Zhenzhong Duan <zhenzhong.duan@xxxxxxxxxx>
CC: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
CC: linux-ia64@xxxxxxxxxxxxxxx
CC: linux-kernel@xxxxxxxxxxxxxxx
CC: linux-parisc@xxxxxxxxxxxxxxx
CC: linuxppc-dev@xxxxxxxxxxxxxxxx
CC: sparclinux@xxxxxxxxxxxxxxx
CC: x86@xxxxxxxxxx
CC: xen-devel@xxxxxxxxxxxxxxxxxxxx


Qais Yousef (12):
  arm64: hibernate.c: create a new function to handle cpu_up(sleep_cpu)
  x86: Replace cpu_up/down with device_online/offline
  powerpc: Replace cpu_up/down with device_online/offline
  ia64: Replace cpu_down with freeze_secondary_cpus
  sparc: Replace cpu_up/down with device_online/offline
  parisc: Replace cpu_up/down with device_online/offline
  driver: base: cpu: export device_online/offline
  driver: xen: Replace cpu_up/down with device_online/offline
  firmware: psci: Replace cpu_up/down with device_online/offline
  torture: Replace cpu_up/down with device_online/offline
  smp: Create a new function to bringup nonboot cpus online
  cpu: Hide cpu_up/down

 arch/arm64/kernel/hibernate.c          | 13 +++----
 arch/ia64/kernel/process.c             |  8 +---
 arch/parisc/kernel/processor.c         |  4 +-
 arch/powerpc/kernel/machine_kexec_64.c |  4 +-
 arch/sparc/kernel/ds.c                 |  8 +++-
 arch/x86/kernel/topology.c             |  4 +-
 arch/x86/mm/mmio-mod.c                 |  8 +++-
 arch/x86/xen/smp.c                     |  4 +-
 drivers/base/core.c                    |  4 ++
 drivers/base/cpu.c                     |  4 +-
 drivers/firmware/psci/psci_checker.c   |  6 ++-
 drivers/xen/cpu_hotplug.c              |  2 +-
 include/linux/cpu.h                    |  6 ++-
 kernel/cpu.c                           | 53 ++++++++++++++++++++++++--
 kernel/smp.c                           |  9 +----
 kernel/torture.c                       | 15 ++++++--
 16 files changed, 106 insertions(+), 46 deletions(-)

-- 
2.17.1

