[Xen-users] BUG? XEN don't take all CPUS when "(dom0-cpus 0)" is set

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] BUG? XEN don't take all CPUS when "(dom0-cpus 0)" is set
From: Fabian Holler <fho@xxxxxxxxxxx>
Date: Thu, 24 Aug 2006 11:14:34 +0200
User-agent: Thunderbird 1.5.0.4 (X11/20060714)
Howdy,

I have Debian running with xen-hypervisor-3.0-i386/testing (up to date,
3.0.2+hg9697-1) on a 2-CPU machine.

If I set "(dom0-cpus 0)" in xend-config.sxp, dom0 will only use one CPU
(only one CPU is listed in /proc/cpuinfo).
If I set "(dom0-cpus 2)", dom0 will use both CPUs.

But in the example xend-config.sxp, the parameter is explained as follows:
"# In SMP system, dom0 will use dom0-cpus # of CPUS
# If dom0-cpus = 0, dom0 will take all cpus available"

Is this a bug? Or what is the reason why dom0 won't take both CPUs
when "(dom0-cpus 0)" is set?
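
For what it is worth, these are the checks I would use to look at dom0's
vCPUs (taken from the xm manpage, so please correct me if these are not the
right commands for 3.0.2):

    # list the vCPUs dom0 currently has and where they run
    xm vcpu-list 0

    # possible workaround: give dom0 two vCPUs by hand, then check again
    xm vcpu-set 0 2
    grep ^processor /proc/cpuinfo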


Greetings,

Fabian

Attachment: signature.asc
Description: OpenPGP digital signature

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users