
Re: [Xen-devel] [PATCH] [BUG#222] fix enforce_dom0_cpus to use vcpu_hotplug


  • To: Ryan Harper <ryanh@xxxxxxxxxx>
  • From: Christian Limpach <christian.limpach@xxxxxxxxx>
  • Date: Fri, 16 Sep 2005 20:12:02 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 16 Sep 2005 19:09:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Thanks!

On 9/15/05, Ryan Harper <ryanh@xxxxxxxxxx> wrote:
> This patch has enforce_dom0_cpus() use vcpu_hotplug rather than directly
> modifying the sysfs entries, which would leave the xenstore state of
> a cpu's availability incorrect.  I've also slightly modified the
> dom0-cpus description in the xend-config.  Rather than specifying which
> dom0 vcpus are to be used, it is now a target for how many vcpus to use,
> since pinvcpu ops are the preferred method for setting which physical
> cpu a vcpu uses.  In fixing this bug, I also uncovered another bug [1]
> related to hotplug in dom0.
> 
> 1. http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=228
> 
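For context, the reason to go through vcpu_hotplug is that each vcpu's
availability gets recorded in the store, so the guest's cpu hotplug
handler and xend stay in agreement about which vcpus are up.  Roughly
like the sketch below (illustrative only: the helper name and the store
layout are assumptions, not the actual xend code):

    def set_vcpu_availability(write_store_entry, domid, vcpu, state):
        # Record the desired state under the domain's per-vcpu store node;
        # the guest kernel's cpu hotplug watch then brings the vcpu up
        # or down to match.
        if state:
            availability = "online"
        else:
            availability = "offline"
        # assumed layout: /local/domain/<domid>/cpu/<vcpu>/availability
        write_store_entry("/local/domain/%d/cpu/%d/availability"
                          % (domid, vcpu), availability)
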
> --
> Ryan Harper
> Software Engineer; Linux Technology Center
> IBM Corp., Austin, Tx
> (512) 838-9253   T/L: 678-9253
> ryanh@xxxxxxxxxx
> 
> 
> diffstat output:
>  examples/xend-config.sxp            |    2 +-
>  python/xen/xend/XendDomain.py       |    1 +
>  python/xen/xend/XendDomainInfo.py   |   20 ++++++++++++++++++++
>  python/xen/xend/server/SrvDaemon.py |   27 ---------------------------
>  4 files changed, 22 insertions(+), 28 deletions(-)
> 
> Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
> ---
> diff -r c21f47a03225 tools/examples/xend-config.sxp
> --- a/tools/examples/xend-config.sxp    Thu Sep 15 17:09:50 2005
> +++ b/tools/examples/xend-config.sxp    Thu Sep 15 17:08:03 2005
> @@ -49,6 +49,6 @@
>  # If dom0-min-mem=0, dom0 will never balloon out.
>  (dom0-min-mem 0)
> 
> -# In SMP system, dom0 will use only CPUs in range [1,dom0-cpus]
> +# In SMP system, dom0 will use dom0-cpus # of CPUS
>  # If dom0-cpus = 0, dom0 will take all cpus available
>  (dom0-cpus 0)
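To spell out the new meaning with an example (value picked purely for
illustration): "(dom0-cpus 2)" would now cap dom0 at two online vcpus,
whichever physical cpus those end up pinned to via the pinvcpu ops,
while "(dom0-cpus 0)" still means use every cpu available.
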
> diff -r c21f47a03225 tools/python/xen/xend/XendDomain.py
> --- a/tools/python/xen/xend/XendDomain.py       Thu Sep 15 17:09:50 2005
> +++ b/tools/python/xen/xend/XendDomain.py       Thu Sep 15 17:08:03 2005
> @@ -155,6 +155,7 @@
>         if not dom0:
>             dom0 = self.domain_unknown(0)
>         dom0.dom0_init_store()
> +        dom0.enforce_dom0_cpus()
> 
>     def close(self):
>         pass
> diff -r c21f47a03225 tools/python/xen/xend/XendDomainInfo.py
> --- a/tools/python/xen/xend/XendDomainInfo.py   Thu Sep 15 17:09:50 2005
> +++ b/tools/python/xen/xend/XendDomainInfo.py   Thu Sep 15 17:08:03 2005
> @@ -1111,6 +1111,26 @@
>             # get run-time value of vcpus and update store
>             self.configure_vcpus(dom_get(self.domid)['vcpus'])
> 
> +    def enforce_dom0_cpus(self):
> +        dom = 0
> +        # get max number of cpus to use for dom0 from config
> +        from xen.xend import XendRoot
> +        xroot = XendRoot.instance()
> +        target = int(xroot.get_dom0_cpus())
> +        log.debug("number of cpus to use is %d"%(target))
> +
> +        # target = 0 means use all processors
> +        if target > 0:
> +            # count the number of online vcpus (cpu values in v2c map >= 0)
> +            vcpu_to_cpu  = dom_get(dom)['vcpu_to_cpu']
> +            vcpus_online = len(filter(lambda x: x>=0, vcpu_to_cpu))
> +            log.debug("found %d vcpus online"%(vcpus_online))
> +
> +            # disable any extra vcpus that are online over the requested target
> +            for vcpu in range(target,vcpus_online):
> +                log.info("enforcement is disabling DOM%d VCPU%d"%(dom,vcpu))
> +                self.vcpu_hotplug(vcpu, 0)
> +
> 
>  def vm_field_ignore(_, _1, _2, _3):
>     """Dummy config field handler used for fields with built-in handling.
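A small worked example of the counting above (data invented):
dom_get(dom) reports vcpu_to_cpu with one entry per vcpu, and -1 marks a
vcpu that is not online, so the non-negative entries are the online ones.

    # Illustration only -- the vcpu_to_cpu values are invented.
    vcpu_to_cpu  = [0, 2, -1, -1]
    vcpus_online = len(filter(lambda x: x >= 0, vcpu_to_cpu))
    print vcpus_online        # -> 2
    # With target = 1, range(1, 2) hot-unplugs VCPU1 and leaves VCPU0 online.
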
> diff -r c21f47a03225 tools/python/xen/xend/server/SrvDaemon.py
> --- a/tools/python/xen/xend/server/SrvDaemon.py Thu Sep 15 17:09:50 2005
> +++ b/tools/python/xen/xend/server/SrvDaemon.py Thu Sep 15 17:08:03 2005
> @@ -298,7 +298,6 @@
>         return self.cleanup(kill=True)
> 
>     def run(self, status):
> -        _enforce_dom0_cpus()
>         try:
>             log.info("Xend Daemon started")
>             event.listenEvent(self)
> @@ -323,32 +322,6 @@
>         #sys.exit(rc)
>         os._exit(rc)
> 
> -def _enforce_dom0_cpus():
> -    dn = xroot.get_dom0_cpus()
> -
> -    for d in glob.glob("/sys/devices/system/cpu/cpu*"):
> -        cpu = int(os.path.basename(d)[3:])
> -        if (dn == 0) or (cpu < dn):
> -            v = "1"
> -        else:
> -            v = "0"
> -        try:
> -            f = open("%s/online" %d, "r+")
> -            c = f.read(1)
> -            if (c != v):
> -                if v == "0":
> -                    log.info("dom0 is trying to give back cpu %d", cpu)
> -                else:
> -                    log.info("dom0 is trying to take cpu %d", cpu)
> -                f.seek(0)
> -                f.write(v)
> -                f.close()
> -                log.info("dom0 successfully enforced cpu %d", cpu)
> -            else:
> -                f.close()
> -        except:
> -            pass
> -
>  def instance():
>     global inst
>     try:
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel