
RE: [Xen-devel] million cycle interrupt


  • To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Puthiyaparambil, Aravindh" <aravindh.puthiyaparambil@xxxxxxxxxx>
  • Date: Mon, 13 Apr 2009 17:14:48 -0500
  • Accept-language: en-US
  • Cc:
  • Delivery-date: Mon, 13 Apr 2009 15:15:18 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acm8dctR69FFyt9CRUGLBuuNOre8ZwADznSQ
  • Thread-topic: [Xen-devel] million cycle interrupt

I have only tried increasing max_phys_cpus from 32 to 64, 96, etc. I would 
think that decreasing it to 4 means that only the first 4 LCPUs were booted; 
the rest should simply have been ignored.
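(For reference, a rough C sketch of how a compile-time cap like max_phys_cpus
is typically consumed; the macro name, default value, and array sizing here are
illustrative assumptions, not lifted from the actual Xen tree.)

    /* Illustrative sketch only -- not the actual Xen source.
     * Assumes the build passes -DMAX_PHYS_CPUS=<n> when the
     * max_phys_cpus make variable is given at compile time. */
    #include <stdio.h>

    #ifndef MAX_PHYS_CPUS
    #define MAX_PHYS_CPUS 32        /* assumed default when no cap is set */
    #endif

    #define NR_CPUS MAX_PHYS_CPUS   /* upper bound on logical CPUs brought up */

    int main(void)
    {
        /* Any LCPU numbered >= NR_CPUS would simply never be booted,
         * matching the "rest should have been ignored" expectation above. */
        printf("at most %d logical CPUs would be booted\n", NR_CPUS);
        return 0;
    }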

-----Original Message-----
From: Dan Magenheimer [mailto:dan.magenheimer@xxxxxxxxxx]
Sent: Monday, April 13, 2009 4:24 PM
To: Puthiyaparambil, Aravindh; Xen-Devel (E-mail)
Subject: RE: [Xen-devel] million cycle interrupt

Thanks Aravindh!

Hmmm... I'm suspicious of this.  When compiled with
max_phys_cpus=4, xm info shows nr_cpus=4, nr_nodes=1,
cores_per_socket=1 and threads_per_core=1.  Without
this compile option, cores_per_socket=4 and
threads_per_core=2... so I would expect only one
of these to change.  Hopefully the compile option
is simply changing the data reported by xm info;
it appears the guest is still using 4 physical processors.
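(For what it's worth, a tiny sketch of the arithmetic behind that expectation;
it assumes nr_cpus should equal sockets x cores_per_socket x threads_per_core
and assumes a single socket, neither of which is confirmed from the xm/xend
source.)

    /* Illustrative only -- the topology fields are taken from the two
     * xm info runs described above; the single-socket figure is assumed. */
    #include <stdio.h>

    struct topo { int sockets, cores_per_socket, threads_per_core; };

    static int product_cpus(struct topo t)
    {
        /* Assumption: nr_cpus is the product of the three topology fields. */
        return t.sockets * t.cores_per_socket * t.threads_per_core;
    }

    int main(void)
    {
        struct topo uncapped = { 1, 4, 2 };  /* cores_per_socket=4, threads_per_core=2 */
        struct topo capped   = { 1, 1, 1 };  /* reported with max_phys_cpus=4 */

        /* Capped: 1*1*1 = 1, yet xm info still reports nr_cpus=4, which is
         * why the output looks like a change in what gets reported rather
         * than in how many CPUs are actually in use. */
        printf("uncapped product=%d, capped product=%d\n",
               product_cpus(uncapped), product_cpus(capped));
        return 0;
    }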

> -----Original Message-----
> From: Puthiyaparambil, Aravindh
> [mailto:aravindh.puthiyaparambil@xxxxxxxxxx]
> Sent: Monday, April 13, 2009 9:31 AM
> To: Dan Magenheimer; Xen-Devel (E-mail)
> Subject: RE: [Xen-devel] million cycle interrupt
>
>
> >>Is there a way to cap the number of physical cpus seen by Xen
> >>(other than nosmp to cap at one)?
>
> There is a compile-time option (max_phys_cpus) where you can
> specify the number of LCPUs.
>
> Aravindh Puthiyaparambil
> Virtualization Engineer
> Virtual Systems Development
> Unisys, Tredyffrin PA
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

