WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-devel] Span VCPU across multiple PCPUS

To: Kishore kumar Samudrala <bf.openxen@xxxxxxxxx>
Subject: Re: [Xen-devel] Span VCPU across multiple PCPUS
From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Date: Mon, 7 Feb 2011 09:03:34 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 07 Feb 2011 01:05:27 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTi=bhiBYeeSc6TPL_YyojDx+4U=Skd9WrkaHeHYS@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <AANLkTi=bhiBYeeSc6TPL_YyojDx+4U=Skd9WrkaHeHYS@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Sun, 2011-02-06 at 07:23 +0000, Kishore kumar Samudrala wrote:
> Hello,
> Is it possible to allow a single VCPU to run on multiple PCPUs, say
> 2 PCPUs? If so, how can it be done?

Unless you explicitly pin a VCPU to a particular subset of PCPUs, the
default is for a VCPU to be schedulable on any PCPU.
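(For reference, pinning is controlled from dom0 with the toolstack's
vcpu-pin command. A sketch using the xm toolstack; the domain name
"guest1" and the CPU numbers are illustrative, not from this thread:

```shell
# Show the current VCPU-to-PCPU affinity of all running domains
xm vcpu-list

# Pin VCPU 0 of domain "guest1" to PCPUs 0 and 1 -- the scheduler may
# place it on either of those PCPUs, but only one at any given moment
xm vcpu-pin guest1 0 0,1

# Restore the default: allow VCPU 0 to be scheduled on any PCPU
xm vcpu-pin guest1 0 all
```
)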

>  Do I need to modify any structures or have special code to
> parallelize VCPUs?

Obviously a single VCPU can only run on a single PCPU at a time.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
