To: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [Patch] use full-size cpumask for vcpu-pin
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Fri, 13 Aug 2010 13:58:36 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <19557.15157.439902.435164@xxxxxxxxxxxxxxxxxxxxxxxx>
On 13/08/2010 13:31, "Ian Jackson" <Ian.Jackson@xxxxxxxxxxxxx> wrote:

> Juergen Gross writes ("[Xen-devel] [Patch] use full-size cpumask for
> vcpu-pin"):
>> attached patch solves a problem with vcpu-pinning and hot-plug of cpus:
> 
> Thanks.  This is a mixed tools/hypervisor patch.  We've discussed it
> and it seems good to me.  Keir, would you care to apply it, or would
> you like it to go through the tools tree?
> 
> Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>

Actually, now that I look at it, I have to NACK the patch. Sorry I didn't
look closely enough earlier. I think the bug can be addressed without any
hypervisor changes: when vcpu-pinning, the tools can quite correctly pass a
cpumask to Xen that is just big enough to express the CPUs in the new
affinity set. If the supplied mask is too narrow to address all CPUs in the
system, Xen will pad it out with zeroes; if it is too wide, Xen will simply
truncate it. All of this is done silently at the time of the
setvcpuaffinity hypercall. Hence Juergen's hypervisor changes are
unnecessary, and a neater fix could probably be made on the tools side
alone.
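
To illustrate (a minimal sketch only, not the actual Xen code; the
function name, the NR_CPUS value, and the byte-granularity copy below are
illustrative assumptions, since the real hypercall takes a guest bitmap
with an explicit bit count):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NR_CPUS    64                   /* assumed system-wide CPU count */
#define MASK_BYTES ((NR_CPUS + 7) / 8)  /* bytes in the full-size mask */

/* Copy a caller-supplied mask of 'guest_bytes' bytes into the
 * hypervisor's full-size mask: zero-pad if it is narrower than the
 * full mask, silently drop anything beyond it. */
static void copy_guest_cpumask(uint8_t *full, const uint8_t *guest,
                               size_t guest_bytes)
{
    size_t n = guest_bytes < MASK_BYTES ? guest_bytes : MASK_BYTES;

    memset(full, 0, MASK_BYTES);  /* pad with zeroes */
    memcpy(full, guest, n);       /* excess guest bytes are truncated */
}

int main(void)
{
    uint8_t full[MASK_BYTES];
    uint8_t narrow = 0x05;  /* 1-byte mask from the tools: CPUs 0 and 2 */

    copy_guest_cpumask(full, &narrow, sizeof(narrow));
    printf("first byte of full mask: 0x%02x\n", full[0]);  /* -> 0x05 */
    return 0;
}

So the tools only ever need to allocate a mask wide enough to name the
highest CPU they actually care about; the hypervisor normalises it.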

 -- Keir

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel