WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support vcpus, add vcpu to cpu map
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Thu, 14 Apr 2005 11:24:07 -0500
Cc: Ryan Harper <ryanh@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 14 Apr 2005 16:24:03 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D1E3B7C@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E3B7C@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-04-14 10:50]:
> > The following patch updates the dom0 pincpu operation to read 
> > the VCPU value from the xend interface rather than 
> > hard-coding the exec_domain to 0.  This prevented pinning 
> > VCPUS other than 0 to a particular cpu.  I added the number 
> > of VCPUS to the main xm list output and also included a new 
> > sub-option to xm list to display the VCPU to CPU mapping.  
> > While working on the pincpu code, I fixed an out-of-bounds 
> > indexing for the pincpu operation that wasn't previously 
> > exposed since the vcpu/exec_domain value was hard-coded to 0.
> 
> Ryan, good progress, but I'd like to propose a couple of extensions:
> 
> It would be useful if you could update it so that pincpu enabled you to
> specify a set of physical CPUs for each VCPU e.g.
> 
> "xm pincpu mydom 1 2,4-6" which would allow VCPU 1 of mydom to run on
> CPUs 2,4 and 5 but no others. -1 would still mean "run anywhere". Having
> this functionality is really important before we can implement any kind
> of CPU load balancer.
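A minimal sketch of parsing the proposed CPU-set syntax, assuming ranges like "4-6" are inclusive and "-1" keeps today's "run anywhere" meaning (the function name and the None-for-unrestricted convention are my own, not part of the patch):

```python
def parse_cpu_spec(spec):
    """Parse a CPU list spec such as "2,4-6" into a sorted list of
    physical CPU numbers; "-1" means no affinity restriction."""
    if spec.strip() == "-1":
        return None  # run anywhere
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            # inclusive range, e.g. "4-6" -> 4, 5, 6
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return sorted(cpus)
```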

Interesting idea.  I don't see anything in the schedulers that would
take advantage of that sort of definition.  AFAIK, exec_domains are
never migrated unless told to do so via pincpu.  Does the new scheduler
do this?  Or is this more of setting up the rules that the load balancer
would query to find out where it can migrate vcpus?

> Secondly, I think it would be really good if we could have some
> hierarchy in CPU names. Imagine a 4 socket system with dual core hyper
> threaded CPUs. It would be nice to be able to specify the 3rd socket,
> 1st core, 2nd hyperthread as CPU "2.0.1".
> 
> Where we're on a system without one of the levels of hierarchy, we just
> miss it off. E.g. a current SMP Xeon box would be "x.y". This would be
> much less confusing than the current scalar representation.

I like the idea of being able to specify "where" the vcpu runs more
explicitly than 'cpu 0', which gives no indication of physical
cpu characteristics.  We would probably still need to provide the simple
scalar mapping, but allow the pincpu interface to accept the more
specific form as well as the generic one.

2-way hyperthreaded box:
CPU     SOCKET.CORE.THREAD
0       0.0.0
1       0.0.1
2       1.0.0
3       1.0.1
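Converting between the two forms is a simple mixed-radix calculation; a sketch, assuming a uniform topology described by cores-per-socket and threads-per-core counts (parameter names are mine, chosen to match the 2 sockets x 1 core x 2 threads table above):

```python
def cpu_to_hierarchy(cpu, cores_per_socket, threads_per_core):
    """Map a scalar CPU number to "socket.core.thread" form."""
    socket, rest = divmod(cpu, cores_per_socket * threads_per_core)
    core, thread = divmod(rest, threads_per_core)
    return "%d.%d.%d" % (socket, core, thread)

def hierarchy_to_cpu(name, cores_per_socket, threads_per_core):
    """Map a "socket.core.thread" name back to a scalar CPU number."""
    socket, core, thread = (int(x) for x in name.split("."))
    return (socket * cores_per_socket + core) * threads_per_core + thread
```

For the 2-way hyperthreaded box above, cpu_to_hierarchy(3, 1, 2) gives "1.0.1", matching the table.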

That look sane?

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel