
Re: [Xen-devel] Core parking feature enable



On 02/03/2012 09:42, "Haitao Shan" <maillists.shan@xxxxxxxxx> wrote:

> I would really doubt the need to create a new interface for receiving
> ACPI events and forwarding them to user land (other than the existing
> native kernel path) specifically for Xen. What's the benefit, and why
> should kernel people buy into that?
> Core parking is a platform feature, not a virtualization feature.
> Naturally, following the native approach is the most efficient. Why do
> you want to create yet another interface for Xen to do that?

While I sympathise with your position rather more than Jan does, the fact is
that it's *you* who are suggesting yet another Xen interface. Doing it in
userspace, by contrast, requires only existing hypercalls, I believe.

 -- Keir

> Shan Haitao
> 
> 2012/3/1 Jan Beulich <JBeulich@xxxxxxxx>:
>>>>> On 01.03.12 at 15:31, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
>>> Jan Beulich wrote:
>>>>>>> On 01.03.12 at 12:14, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
>>>>> Unfortunately, yes, though cumbersomeness is not the basic reason the
>>>>> user-space approach is not preferred. Core parking is power management
>>>>> stuff, based on dynamic physical details like CPU topologies and maps
>>>>> owned by the hypervisor. It's natural to implement
>>>> 
>>>> CPU topology is available to user space, and as far as I recall your
>>>> hypervisor patch didn't really manipulate any maps - all it did was
>>>> pick what CPU to bring up/down, and then carry out that decision.
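The decision Jan describes (picking which CPUs to bring up or down) can indeed be expressed outside the hypervisor. Below is an illustrative Python sketch, not Xen code; the policy of parking secondary hyperthread siblings before whole cores is an assumption about how such a selector might behave, not a description of the actual patch:

```python
def pick_cpus_to_park(online, sibling_mask, n_to_park):
    """Choose which CPUs to offline to satisfy a core-parking request.

    online:       set of currently online CPU ids
    sibling_mask: dict mapping cpu -> set of its hyperthread siblings
    n_to_park:    number of CPUs the platform asked us to park

    Policy (an assumption for illustration): first offline secondary
    hyperthread siblings, keeping one thread per core online as long
    as possible; only then start offlining whole cores.
    """
    parked = []
    # First pass: park every thread that is not the lowest-numbered
    # online thread of its core.
    for cpu in sorted(online):
        if len(parked) == n_to_park:
            break
        if cpu != min(sibling_mask[cpu] & online):
            parked.append(cpu)
    # Second pass: if still short, park whole cores, highest ids first.
    for cpu in sorted(online, reverse=True):
        if len(parked) == n_to_park:
            break
        if cpu not in parked:
            parked.append(cpu)
    return parked
```

The actual bring-up/bring-down would then be carried out through the existing CPU online/offline hypercalls; only the selection logic lives in this sketch.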
>>> 
>>> No. The threads_per_core and cores_per_socket exposed to userspace are
>>> pointless to us (and they arguably need fixing up).
>> 
>> Sure, this would be insufficient. But what do you think
>> XEN_SYSCTL_topologyinfo was added for?
>> 
>>> Core parking depends on the following physical info (no matter where it
>>> is implemented):
>>> 1. cpu_online_map;
>>> 2. cpu_present_map;
>>> 3. cpu_core_mask;
>>> 4. cpu_sibling_mask;
>>> all of them are *dynamic*; in particular, 3 and 4 vary per CPU and change
>>> with every online/offline operation.
>> 
>> Afaict all of these can be reconstructed using (mostly sysctl)
>> hypercalls.
>> 
>> Jan
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-devel


