xen-devel

Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Thu, 14 Apr 2005 18:55:29 +0100
Cc: Ryan Harper <ryanh@xxxxxxxxxx>, Sam Gill <samg@xxxxxxxxxxxxx>
Delivery-date: Thu, 14 Apr 2005 17:59:16 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20050414175141.GJ27571@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <E1DM7eW-0002zR-9S@host-192-168-0-1-bcn-london> <425EA997.8060409@xxxxxxxxxxxxx> <20050414175141.GJ27571@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.8
> Yeah, I think we should add something that better shows the available
> resources.  Currently the total number of Physical CPUs a system has
> isn't really available in an obvious location.

xm info lists this as "packages", I think.
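Just as a rough sketch (not from the patch), a tool could already dig that number out, assuming "xm info" prints one "name : value" pair per line and includes a "packages" field as mentioned above:

    # Sketch only: read the "packages" count from `xm info`, assuming a
    # simple "name : value" line format.
    import os

    def xm_info():
        info = {}
        for line in os.popen("xm info").readlines():
            if ":" in line:
                name, value = line.split(":", 1)
                info[name.strip()] = value.strip()
        return info

    print("physical packages: %s" % xm_info().get("packages"))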

If the enumeration is done in a standardised way, then it's possible to work 
out in userspace which CPU id is where, but it's not at all obvious to the user 
right now.  It would definitely be good for the management tools to give more 
information to the user on this stuff.
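For illustration, here's a rough sketch of the sort of thing userspace could do, assuming CPUs are enumerated package-by-package in a fixed order and that the total CPU and package counts are available; neither assumption is guaranteed by the current code:

    # Sketch only: map a physical CPU id to its package, assuming CPUs are
    # enumerated package-by-package in a standardised order.
    def cpu_to_package(cpu_id, nr_cpus, nr_packages):
        cpus_per_package = nr_cpus // nr_packages
        return cpu_id // cpus_per_package

    # e.g. 8 CPUs across 2 packages: CPUs 0-3 -> package 0, CPUs 4-7 -> package 1
    for cpu in range(8):
        print("cpu %d -> package %d" % (cpu, cpu_to_package(cpu, 8, 2)))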

Cheers,
Mark

> > such as "xm pincpu-show" and "xm pincpu-show-details" for a more verbose
> > listing
>
> What would these look like?
>
> > Then the next step would be creating some helper functions "xm
> > pincpu-add" so you could add a cpu to a domain, or "xm pincpu-move" to
> > move a cpu from one domain to another.  In addition you could have
> > "xm pincpu-lock"/"xm pincpu-unlock" which would only allow one single
> > domain to access that cpu.
>
> I think the mapping that Ian mentioned was needed for load-balancing
> would achieve that, but we certainly could create an interface wrapper,
> like lock/unlock that was translated into the correct mapping command.
>
> > I am just thinking that maybe if you detail (if you have not already
> > done so) what you want the end result to be, then it might be easier to
> > figure out how to implement the lower level functions more efficiently.
>
> No, these are good things to be talking about.  The goal of this patch was
> to allow us to pin VCPUs mainly so we can test space-sharing versus
> time-sharing of VCPUs.  That is, if we have a 4-way SMP box, with two
> domUs, each with four VCPUs, what is the perf difference between domUs each
> getting 2 physical cpus to run their 4 VCPUs versus domUs having access
> to all 4 physical cpus on which to run their 4 VCPUs.
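To make those two configurations concrete, here is a rough sketch of how they might be set up; the "xm pincpu <domain> <vcpu> <cpu>" syntax is assumed for illustration and may not match the patch exactly:

    # Sketch only: pin two 4-VCPU domUs on a 4-way box in the two
    # configurations described above.  The pincpu syntax is an assumption.
    import os

    def pin(domain, vcpu, cpu):
        os.system("xm pincpu %s %d %d" % (domain, vcpu, cpu))

    # Space-sharing: dom1's 4 VCPUs on physical CPUs 0-1, dom2's on 2-3.
    for vcpu in range(4):
        pin("dom1", vcpu, vcpu % 2)
        pin("dom2", vcpu, 2 + vcpu % 2)

    # Time-sharing: each domain's 4 VCPUs spread across all 4 physical CPUs,
    # so every physical CPU is shared between the two domains' VCPUs.
    for vcpu in range(4):
        pin("dom1", vcpu, vcpu)
        pin("dom2", vcpu, vcpu)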

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel