xen-devel

Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
From: Sam Gill <samg@xxxxxxxxxxxxx>
Date: Thu, 14 Apr 2005 10:34:15 -0700
Delivery-date: Thu, 14 Apr 2005 17:30:42 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <E1DM7eW-0002zR-9S@host-192-168-0-1-bcn-london>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <E1DM7eW-0002zR-9S@host-192-168-0-1-bcn-london>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 0.7.3 (Windows/20040803)
Message: 6
Date: Thu, 14 Apr 2005 11:24:07 -0500
From: Ryan Harper <ryanh@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Cc: Ryan Harper <ryanh@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Message-ID: <20050414162407.GG27571@xxxxxxxxxx>
Content-Type: text/plain; charset=us-ascii

* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-04-14 10:50]:
> > The following patch updates the dom0 pincpu operation to read
> > the VCPU value from the xend interface rather than hard-coding
> > the exec_domain to 0.  The hard-coding prevented pinning VCPUs
> > other than 0 to a particular cpu.  I added the number of VCPUs
> > to the main xm list output and also included a new sub-option
> > to xm list to display the VCPU-to-CPU mapping.  While working
> > on the pincpu code, I fixed an out-of-bounds indexing for the
> > pincpu operation that wasn't previously exposed since the
> > vcpu/exec_domain value was hard-coded to 0.
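
A minimal, self-contained sketch of the fix being described, with the
domain modelled as a plain dict; the names here (pincpu, nr_vcpus,
vcpu_to_cpu) are illustrative stand-ins, not the actual xend/libxc
interfaces:

def pincpu(dom, vcpu, cpu):
    """Pin the given VCPU of a domain to a physical CPU."""
    # Before the patch the vcpu argument was effectively hard-coded
    # to 0, so VCPUs other than 0 could never be pinned.  Taking the
    # index from the caller fixes that, and the bounds check below
    # covers the out-of-bounds indexing the hard-coded 0 had hidden.
    if not 0 <= vcpu < dom["nr_vcpus"]:
        raise ValueError("vcpu %d out of range (domain has %d vcpus)"
                         % (vcpu, dom["nr_vcpus"]))
    dom["vcpu_to_cpu"][vcpu] = cpu

# Example: a 4-VCPU domain; pin VCPU 2 to physical CPU 1.
dom = {"nr_vcpus": 4, "vcpu_to_cpu": {}}
pincpu(dom, 2, 1)
print(dom["vcpu_to_cpu"])    # {2: 1}
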
> 
> Ryan, good progress, but I'd like to propose a couple of extensions:
> 
> It would be useful if you could update it so that pincpu enabled you to
> specify a set of physical CPUs for each VCPU e.g.
> 
> "xm pincpu mydom 1 2,4-6" which would allow VCPU 1 of mydom to run on
> CPUs 2,4 and 5 but no others. -1 would still mean "run anywhere". Having
> this functionality is really important before we can implement any kind
> of CPU load ballancer.
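
A sketch of how the proposed cpu-set syntax might be parsed, assuming
taskset-style inclusive ranges and -1 for "run anywhere"; parse_cpuset
is a hypothetical helper, not part of the patch:

def parse_cpuset(spec, nr_cpus):
    """Turn a string like "2,4-6" into a set of CPU numbers.
    "-1" means the VCPU may run on any CPU."""
    if spec.strip() == "-1":
        return set(range(nr_cpus))
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))   # inclusive range
        else:
            cpus.add(int(part))
    return cpus

print(sorted(parse_cpuset("2,4-6", 8)))   # [2, 4, 5, 6]
print(sorted(parse_cpuset("-1", 4)))      # [0, 1, 2, 3]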

Interesting idea.  I don't see anything in the schedulers that would
take advantage of that sort of definition.  AFAIK, exec_domains are
never migrated unless told to do so via pincpu.  Does the new scheduler
do this?  Or is this more of setting up the rules that the load
balancer would query to find out where it can migrate vcpus?

> Secondly, I think it would be really good if we could have some
> hierarchy in CPU names. Imagine a 4 socket system with dual core hyper
> threaded CPUs. It would be nice to be able to specify the 3rd socket,
> 1st core, 2nd hyperthread as CPU "2.0.1".
> 
> Where we're on a system without one of the levels of hierarchy, we just
> miss it off. E.g. a current SMP Xeon box would be "x.y". This would be
> much less confusing than the current scalar representation.
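
A sketch of resolving a full three-level name like "2.0.1" to a flat
CPU number, assuming a row-major socket/core/thread layout (the same
ordering as the table below); cpu_number and its parameters are
illustrative, not anything in Xen:

def cpu_number(name, cores_per_socket, threads_per_core):
    """Map "socket.core.thread" onto the flat CPU numbering."""
    socket, core, thread = (int(p) for p in name.split("."))
    return (socket * cores_per_socket + core) * threads_per_core + thread

# 4-socket, dual-core, hyperthreaded box: "2.0.1" = 3rd socket,
# 1st core, 2nd hyperthread.
print(cpu_number("2.0.1", cores_per_socket=2, threads_per_core=2))  # 9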

I like the idea of being able to specify "where" the vcpu runs more
explicitly than 'cpu 0', which does not give any indication of physical
cpu characteristics. We would probably still need to provide a simple
mapping, but allow the pincpu interface to accept a more specific
target as well as the more generic one.

2-way hyperthreaded box:
CPU     SOCKET.CORE.THREAD
0       0.0.0
1       0.0.1
2       1.0.0
3       1.0.1
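
For completeness, the same assumed row-major ordering can generate that
table for any topology; cpu_table is again just an illustration:

def cpu_table(sockets, cores_per_socket, threads_per_core):
    """Enumerate (flat cpu number, "socket.core.thread") pairs."""
    rows = []
    cpu = 0
    for s in range(sockets):
        for c in range(cores_per_socket):
            for t in range(threads_per_core):
                rows.append((cpu, "%d.%d.%d" % (s, c, t)))
                cpu += 1
    return rows

# 2-way hyperthreaded box: 2 sockets, 1 core each, 2 threads.
for cpu, name in cpu_table(2, 1, 2):
    print("%d\t%s" % (cpu, name))
# Prints the table above: 0 0.0.0 / 1 0.0.1 / 2 1.0.0 / 3 1.0.1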

Just my opinion, but for end users, and for the people who are going to
have to configure this whole system, it would have a far greater impact
to first develop a simple tool that shows you how many cpus you have to
work with (it would also serve as a debugging tool, to see whether your
cpus are registering).

such as "xm pincpu-show" and "xm pincpu-show-details" for a more verbose listing

Once you have developed the function that returns those values, you could
use it to map different domains to different cpus, or different cpus to
different domains.

Then the next step would be creating some helper commands: "xm pincpu-add"
to add a cpu to a domain, "xm pincpu-move" to move a cpu from one domain to
another, and "xm pincpu-lock"/"xm pincpu-unlock" to allow only one single
domain to access that cpu (see the sketch below).

I am just thinking that if you detail (if you have not already done so)
what you want the end result to be, then it might be easier to figure out
how to implement the lower-level functions more efficiently.

Thanks,
 Sam Gill


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel