
RE: [Xen-devel] [PATCH] pcpu tuples [was Re: [Xen-devel] Xen 3.4.1 NUMA support]



Hi dulloor/keir, regarding the changes to XEN_SYSCTL_physinfo, I wonder
whether part of my work might be helpful.

I sent out a patch that presents pcpu information in dom0's
/sys/devices/system/xen_pcpu/pcpuX directory. Currently it simply exposes the
apic_id/acpi_id there for cpu hotplug, but how about using it to present the
whole topology as well? For example, we could add initial_apicid, core_id,
etc. to the xen_pcpu/pcpuX directory.
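
To make that concrete, here is a rough (untested) python sketch of how a dom0
tool could dump whatever attributes end up under each pcpuX directory; the
path is the one from my patch, but the exact set of attribute files is still
open:

    import os

    XEN_PCPU_ROOT = "/sys/devices/system/xen_pcpu"

    def read_pcpu_attrs(pcpu_dir):
        """Return {attribute: value} for every readable file in one pcpuX dir."""
        attrs = {}
        for name in os.listdir(pcpu_dir):
            path = os.path.join(pcpu_dir, name)
            if not os.path.isfile(path):
                continue
            try:
                f = open(path)
                try:
                    attrs[name] = f.read().strip()
                finally:
                    f.close()
            except IOError:
                pass  # write-only or transient attribute
        return attrs

    if __name__ == "__main__":
        for entry in sorted(os.listdir(XEN_PCPU_ROOT)):
            if entry.startswith("pcpu"):
                print("%s: %s" % (entry,
                      read_pcpu_attrs(os.path.join(XEN_PCPU_ROOT, entry))))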

Furthermore, if we create a directory like
/sys/devices/system/xen_pcpu/pcpuX/topology and make the layout below it the
same as native Linux's cpu directory in sysfs, I think it would be even more
useful. For example, existing Linux tools could show the topology information
with only a base-path change (from /sys/devices/system/cpu/ to
/sys/devices/system/xen_pcpu/).
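
For instance, a topology parser would only need the base path changed; a
rough sketch, assuming we mirror the native attribute names
(physical_package_id, core_id, core_siblings, thread_siblings):

    import os

    def read_topology(base, cpu):
        # Read the standard topology attributes from <base>/<cpu>/topology/,
        # mirroring the native Linux cpuX/topology file names.
        topo = {}
        topo_dir = os.path.join(base, cpu, "topology")
        for attr in ("physical_package_id", "core_id",
                     "core_siblings", "thread_siblings"):
            path = os.path.join(topo_dir, attr)
            if os.path.exists(path):
                f = open(path)
                topo[attr] = f.read().strip()
                f.close()
        return topo

    # Native view of dom0's virtual cpus:
    #   read_topology("/sys/devices/system/cpu", "cpu0")
    # Proposed physical-cpu view, same code, different base path:
    #   read_topology("/sys/devices/system/xen_pcpu", "pcpu0")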

Of course, I understand that most virtualization management tools will
probably differ from the native tools, but a similar arrangement would still
be helpful.

Any thoughts?

--jyh

xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote:
> 
> I think this is good. However, the socket and node ids can be fairly
> arbitrary small numbers -- we need a way for the admin to find out the
> topology and 'addresses' of physical cpus via xm. Perhaps a new 'xm
> cpu-list' command to basically dump the pcpu_tuple information
> in ascending
> order of node, then socket, then core, then thread, with one
> row per cpu:
> node socket core thread xen-cpu-id
> 
> More info could be added beyond these five pieces of information, as
> we later see fit. 
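
A rough sketch of the sort and print for such a cpu-list, assuming the
(node, socket, core, thread, xen-cpu-id) tuples are already available from
somewhere (pyxc physinfo, the pcpu-tuples patch, or the sysfs layout above);
the sample numbers are made up:

    # One row per physical cpu, sorted by node, then socket, then core,
    # then thread (plain tuple ordering gives exactly that).
    def print_cpu_list(tuples):
        """tuples: iterable of (node, socket, core, thread, xen_cpu_id)."""
        print("%-4s %-6s %-4s %-6s %s"
              % ("node", "socket", "core", "thread", "xen-cpu-id"))
        for node, socket, core, thread, cpuid in sorted(tuples):
            print("%-4d %-6d %-4d %-6d %d"
                  % (node, socket, core, thread, cpuid))

    # Example with made-up numbers for a 1-node, 2-socket, dual-core box:
    print_cpu_list([(0, 0, 0, 0, 0), (0, 0, 1, 0, 1),
                    (0, 1, 0, 0, 2), (0, 1, 1, 0, 3)])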
> 
> An alternative would be to rename the socket/node identifiers in
> pyxc_physinfo, or even in Xen itself, to achieve contiguity.
> However I think
> a cpu-list command would still be useful, and it's easy to implement.
> 
> -- Keir
> 
> On 17/11/2009 06:56, "Dulloor" <dulloor@xxxxxxxxx> wrote:
> 
>> Attached is a patch to construct pcpu tuples of the form
>> (node.socket.core.thread), (currently) used by the xm vcpu-pin
>> utility.
>> 
>> -dulloor
>> 
>> On Fri, Nov 13, 2009 at 11:02 AM, Keir Fraser
>> <keir.fraser@xxxxxxxxxxxxx> wrote:
>>> On 13/11/2009 15:40, "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx> wrote:
>>> 
>>>>> Even better would be to have pCPUs addressable and listable
>>>>> explicitly as dotted tuples. That can be implemented entirely
>>>>> within the toolstack, and could even allow wildcarding of tuple
>>>>> components to efficiently express cpumasks.
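
As a sketch of the wildcarding idea: the '*' syntax below is only
illustrative, not an existing xm interface, and the tuples are assumed to
come from whichever interface ends up providing them.

    # Expand a wildcarded dotted tuple such as "0.1.*.*" into the set of
    # xen cpu ids it covers, given (node, socket, core, thread, cpuid) tuples.
    def expand_tuple_spec(spec, tuples):
        """spec: 'node.socket.core.thread' with '*' wildcards allowed."""
        parts = spec.split(".")
        cpus = set()
        for node, socket, core, thread, cpuid in tuples:
            if all(p == "*" or int(p) == v
                   for p, v in zip(parts, (node, socket, core, thread))):
                cpus.add(cpuid)
        return cpus

    # e.g. every thread on node 0, socket 1:
    #   expand_tuple_spec("0.1.*.*", pcpu_tuples)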
>>>> 
>>>> Yes, I'd certainly like to see the toolstack support dotted tuple
>>>> notation. 
>>>> 
>>>> However, I just don't trust the toolstack to get this right unless
>>>> xen has already set it up nicely for it with a sensible
>>>> enumeration and defined sockets-per-node, cores-per-socket and
>>>> threads-per-core parameters. Xen should provide a clean interface
>>>> to the toolstack in this respect. 
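
For illustration, if Xen did guarantee a packed enumeration with fixed
sockets-per-node, cores-per-socket and threads-per-core counts, the
tuple-to-cpuid conversion in the toolstack would reduce to simple arithmetic;
today's enumeration makes no such promise, which is exactly the concern
above.

    # Purely illustrative: only valid under a packed, gap-free enumeration.
    def tuple_to_cpuid(node, socket, core, thread,
                       sockets_per_node, cores_per_socket, threads_per_core):
        return ((node * sockets_per_node + socket) * cores_per_socket
                + core) * threads_per_core + thread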
>>> 
>>> Xen provides a topology-interrogation hypercall which should
>>> suffice for tools to build up a {node,socket,core,thread}<->cpuid
>>> mapping table. 
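
Either way, once some interface (the topology hypercall via the python
bindings, or a sysfs layout like the one above) yields one
(node, socket, core, thread) tuple per xen cpu id, building the mapping
table in the tools is trivial; the get_pcpu_tuples() helper below is only a
placeholder, not an existing binding.

    def build_maps(get_pcpu_tuples):
        # Build both directions of the {tuple <-> cpuid} mapping table.
        tuple_to_cpu = {}
        cpu_to_tuple = {}
        for node, socket, core, thread, cpuid in get_pcpu_tuples():
            tuple_to_cpu[(node, socket, core, thread)] = cpuid
            cpu_to_tuple[cpuid] = (node, socket, core, thread)
        return tuple_to_cpu, cpu_to_tuple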
>>> 
>>>  -- Keir
>>> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

