
RE: [Xen-devel] XEN Proposal



I remember seeing a post a while ago about a domain group scheduler.
I'm not sure how far it has progressed, but maybe you can check that
thread to see if there is anything useful?

Thanks,
Kevin

>From: Juergen Gross
>Sent: Wednesday, December 10, 2008 9:10 PM
>
>Hi,
>
>Currently the XEN credit scheduler has its pitfalls in supporting weights of
>domains together with cpu pinning (see the threads
>http://lists.xensource.com/archives/html/xen-devel/2007-02/msg00006.html
>http://lists.xensource.com/archives/html/xen-devel/2006-10/msg00365.html
>http://lists.xensource.com/archives/html/xen-devel/2007-07/msg00303.html
>which include a rejected patch).
>
>We are facing this problem, too. We tried the above patch, but it didn't
>solve our problem completely, so we decided to work on a new solution.
>
>Our basic requirement is to limit a set of domains to a set of physical cpus
>while specifying the scheduling weight for each domain. The general (and in
>my opinion best) solution would be the introduction of a "pool" concept in
>XEN.
>
>Each physical cpu is dedicated to exactly one pool. At start of XEN this is
>pool0. A domain is a member of a single pool (dom0 will always be a member
>of pool0), and there may be several domains in one pool. Scheduling does not
>cross pool boundaries, so the weight of a domain is only related to the
>weights of the other domains in the same pool. This makes it possible to
>have a separate scheduler for each pool.
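>
>To make this more concrete, a pool descriptor in the hypervisor could look
>roughly like the sketch below. This is only an illustration using plain C
>types instead of Xen's cpumask_t and struct scheduler; all names here are
>placeholders, nothing of this exists yet:
>
>  /* Sketch only: a possible pool descriptor (all names are placeholders). */
>  #include <stdint.h>
>
>  struct pool {
>      int         pool_id;     /* pool0 is created at boot                */
>      uint64_t    cpu_mask;    /* physical cpus owned by this pool        */
>      const char *sched_name;  /* scheduler instance local to this pool   */
>  };
>
>  struct domain_sched_info {
>      uint32_t domid;
>      int      pool_id;        /* every domain is in exactly one pool     */
>      int      weight;         /* compared only within the same pool      */
>  };
>
>  /* A pool's scheduler may only use cpus from its own mask. */
>  static inline int pool_owns_cpu(const struct pool *p, unsigned int cpu)
>  {
>      return (int)((p->cpu_mask >> cpu) & 1);
>  }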
>
>What changes would be needed?
>- The hypervisor must be pool-aware. It needs information about the pool
>  configuration (cpu mask, scheduler) and the pool membership of a domain.
>  The scheduler must restrict itself to its own pool only.
>- There must be an interface to set and query the pool configuration (a
>  rough sketch of how this could look is given below).
>- At domain creation the domain must be added to a pool.
>- libxc must be expanded to support the new interfaces.
>- xend and the xm command must support pools, defaulting to pool0 if no pool
>  is specified.
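>
>As an illustration only, the libxc side of such an interface could look
>roughly like the prototypes below. None of these functions exist today; the
>names, the plain int handle and the uint64_t cpu mask are just placeholders
>for discussion:
>
>  #include <stddef.h>
>  #include <stdint.h>
>
>  /* create a pool over the given physical cpus, return its id or -1 */
>  int xc_pool_create(int xc_handle, uint64_t cpu_mask,
>                     const char *sched_name);
>
>  /* query cpu mask and scheduler of an existing pool */
>  int xc_pool_getinfo(int xc_handle, int pool_id,
>                      uint64_t *cpu_mask, char *sched_name, size_t len);
>
>  /* add a physical cpu to / remove it from a pool */
>  int xc_pool_add_cpu(int xc_handle, int pool_id, unsigned int cpu);
>  int xc_pool_remove_cpu(int xc_handle, int pool_id, unsigned int cpu);
>
>  /* put a domain into a pool (used at domain creation) */
>  int xc_pool_move_domain(int xc_handle, int pool_id, uint32_t domid);
>
>  /* destroy an empty pool; its cpus fall back to pool0 */
>  int xc_pool_destroy(int xc_handle, int pool_id);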
>
>The xm commands could look like this:
>xm pool-create pool1 ncpu=4              # create a pool with 4 cpus
>xm pool-create pool2 cpu=1,3,5           # create a pool with 3 dedicated cpus
>xm pool-list                             # show pools:
>  pool      cpus          sched      domains
>  pool0     0,2,4         credit     0
>  pool1     6-9           credit     1,7
>  pool2     1,3,5         credit     2,3
>xm pool-modify pool1 ncpu=3              # set new number of cpus
>xm pool-modify pool1 cpu=6,7,9           # modify cpu-pinning
>xm pool-destroy pool1                    # destroy pool
>xm create vm5 pool=pool1                 # start domain in pool1
>
>There is much more potential in this approach:
>- add memory to a pool? Could be interesting for NUMA
>- recent discussions on xen-devel related to scheduling (credit scheduler for
>  client virtualization) show some demand for further work regarding priority
>  and/or grouping of domains
>- this might be an interesting approach for migration of multiple related
>  domains (pool migration)
>- move (or migrate?) a domain to another pool
>- ...
>
>Any comments, suggestions, work already done, ...?
>Otherwise we will start our effort soon.
>
>Juergen
>
>-- 
>Juergen Gross                             Principal Developer
>IP SW OS6                      Telephone: +49 (0) 89 636 47950
>Fujitsu Siemens Computers         e-mail: juergen.gross@xxxxxxxxxxxxxxxxxxx
>Otto-Hahn-Ring 6                Internet: www.fujitsu-siemens.com
>D-81739 Muenchen         Company details: www.fujitsu-siemens.com/imprint.html
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

