
[PATCH 00/12] xen: support per-cpupool scheduling granularity



Support scheduling granularity per cpupool. Setting the granularity is
done via hypfs, which needed to gain dynamic entries for that purpose.
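
To illustrate the direction (this is only a sketch with invented names,
not the interface the series actually adds): each hypfs node carries its
own function pointers, so a dynamic leaf can compute its contents when
it is read instead of pointing at statically allocated data.

    /* Minimal sketch, compilable on its own. All names here
     * (hypfs_funcs, hypfs_entry, sched_gran_read) are hypothetical
     * stand-ins, not the identifiers used in the patches. */
    #include <stdio.h>

    struct hypfs_entry;

    struct hypfs_funcs {
        /* Produce the node's current contents on demand. */
        int (*read)(const struct hypfs_entry *e, char *buf, size_t len);
    };

    struct hypfs_entry {
        const char *name;
        const struct hypfs_funcs *funcs; /* per-node ops */
        const void *ctx;                 /* e.g. the owning cpupool */
    };

    /* A leaf whose value is generated at read time. */
    static int sched_gran_read(const struct hypfs_entry *e, char *buf,
                               size_t len)
    {
        const char *gran = e->ctx;       /* stand-in for pool state */

        snprintf(buf, len, "%s", gran);
        return 0;
    }

    static const struct hypfs_funcs sched_gran_funcs = {
        .read = sched_gran_read,
    };

    int main(void)
    {
        struct hypfs_entry leaf = {
            .name = "sched-gran",
            .funcs = &sched_gran_funcs,
            .ctx = "core",
        };
        char buf[16];

        leaf.funcs->read(&leaf, buf, sizeof(buf));
        printf("%s = %s\n", leaf.name, buf); /* sched-gran = core */
        return 0;
    }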

Apart from the additional hypfs functionality, the main change for
cpupools is the support for moving a domain to a cpupool with a
different granularity, as this requires modifying the scheduling
unit/vcpu relationship.
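
To see why that relationship has to change, consider a toy model (the
vcpu_id / granularity grouping below is an assumption made for
illustration, not code from the patch): with granularity 1 each vcpu is
its own scheduling unit, while with granularity 2 consecutive vcpus
share a unit.

    /* Toy model: how the vcpu -> scheduling-unit mapping shifts when a
     * domain moves from a gran=1 cpupool to a gran=2 cpupool. Assumes
     * units group consecutive vcpu ids, which is a simplification. */
    #include <stdio.h>

    static unsigned int unit_of(unsigned int vcpu_id, unsigned int gran)
    {
        return vcpu_id / gran;
    }

    int main(void)
    {
        unsigned int v;

        for ( v = 0; v < 4; v++ )
            printf("vcpu %u: unit %u (gran=1) -> unit %u (gran=2)\n",
                   v, unit_of(v, 1), unit_of(v, 2));

        /* Four single-vcpu units collapse into two two-vcpu units, so
         * every unit/vcpu link must be torn down and rebuilt. */
        return 0;
    }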

I have tried to keep the hypfs modifications rather generic in order to
be able to use the same infrastructure in other cases, too (e.g. for
per-domain entries).
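
One such generic building block could be an id-based dynamic directory:
children are generated from the set of currently existing ids (cpupool
ids here, potentially domain ids later). Again a sketch under invented
names (hypfs_dyndir, id_exists), not the series' actual API:

    /* Sketch: a directory template instantiated per existing id when
     * listed, so /cpupool/0, /cpupool/1, ... need no static entries.
     * All names are invented for illustration. */
    #include <stdbool.h>
    #include <stdio.h>

    struct hypfs_dyndir {
        const char *name;                   /* e.g. "cpupool" */
        bool (*id_exists)(unsigned int id); /* which children exist now */
        unsigned int max_id;
    };

    static bool cpupool_exists(unsigned int id)
    {
        return id == 0 || id == 1;          /* pretend two pools exist */
    }

    static void list_dir(const struct hypfs_dyndir *dir)
    {
        unsigned int id;

        for ( id = 0; id <= dir->max_id; id++ )
            if ( dir->id_exists(id) )
                printf("/%s/%u\n", dir->name, id);
    }

    int main(void)
    {
        struct hypfs_dyndir cpupool_dir = {
            .name = "cpupool",
            .id_exists = cpupool_exists,
            .max_id = 63,
        };

        list_dir(&cpupool_dir);             /* /cpupool/0 and /cpupool/1 */
        return 0;
    }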

The complete series has been tested by creating cpupools with different
granularities and moving busy and idle domains between them.

Juergen Gross (12):
  xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
  xen/cpupool: add missing bits for per-cpupool scheduling granularity
  xen/sched: support moving a domain between cpupools with different
    granularity
  xen/sched: sort included headers in cpupool.c
  docs: fix hypfs path documentation
  xen/hypfs: move per-node function pointers into a dedicated struct
  xen/hypfs: pass real failure reason up from hypfs_get_entry()
  xen/hypfs: support dynamic hypfs nodes
  xen/hypfs: add support for id-based dynamic directories
  xen/hypfs: add cpupool directories
  xen/hypfs: add scheduling granularity entry to cpupool entries
  xen/cpupool: make per-cpupool sched-gran hypfs node writable

 docs/misc/hypfs-paths.pandoc |  18 ++-
 xen/common/hypfs.c           | 233 +++++++++++++++++++++++++++--------
 xen/common/sched/core.c      | 122 +++++++++++++-----
 xen/common/sched/cpupool.c   | 213 +++++++++++++++++++++++++++++---
 xen/common/sched/private.h   |   1 +
 xen/include/xen/hypfs.h      | 106 +++++++++++-----
 xen/include/xen/param.h      |  15 +--
 7 files changed, 567 insertions(+), 141 deletions(-)

-- 
2.26.2