
Re: [Xen-devel] Core parking feature enable



Liu, Jinsong wrote:
> Liu, Jinsong wrote:
>> Jan Beulich wrote:
>>>>>> On 17.02.12 at 18:48, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx>
>>>>>> wrote:
>>>> Jan Beulich wrote:
>>>>>>>> On 17.02.12 at 09:54, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx>
>>>>>>>> wrote:
>>>>>> Core parking is a power control feature that works together with
>>>>>> NPTM to control the system power budget by onlining/offlining some
>>>>>> CPUs in the system. These patches implement the core parking
>>>>>> feature for Xen. They consist of 2 parts: dom0 patches and Xen
>>>>>> hypervisor patches.
>>>>>> 
>>>>>> On the dom0 side, the patches include:
>>>>>> [Patch 1/3] intercept the native pad (Processor Aggregator Device)
>>>>>> logic, providing a native interface for the native platform and a
>>>>>> paravirt template for the paravirt platform, so that the OS can
>>>>>> implicitly hook to the proper ops accordingly; [Patch 2/3] redirect
>>>>>> the paravirt template to Xen pv ops; [Patch 3/3] implement the Xen
>>>>>> pad logic; on a pad device notification, it hypercalls to the Xen
>>>>>> hypervisor for core parking. Due to the characteristics of Xen's
>>>>>> continue_hypercall_on_cpu, dom0 sends the core parking request and
>>>>>> fetches the result as separate operations;
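
As a point of reference, the dom0 side boils down to something like the
sketch below (a minimal sketch only; the hypercall number, struct layout
and wrapper names are assumptions for illustration, not lifted from the
attached patches):

/*
 * Sketch: dom0 receives the ACPI PAD notification, reads the requested
 * number of idle CPUs (from _PUR), and forwards it to Xen.
 * XENPF_core_parking, the core_parking struct and HYPERVISOR_platform_op
 * are assumed names here.
 */
#include <xen/interface/platform.h>
#include <asm/xen/hypercall.h>

static int xen_pad_request_idle_cpus(unsigned int num_cpus)
{
    struct xen_platform_op op = {
        .cmd = XENPF_core_parking,
        .interface_version = XENPF_INTERFACE_VERSION,
        .u.core_parking = {
            .type      = XEN_CORE_PARKING_SET, /* "park this many" */
            .idle_nums = num_cpus,             /* target from _PUR */
        },
    };

    /*
     * Xen decides *which* CPUs to park; dom0 only sends the count.
     * The result is fetched later with a XEN_CORE_PARKING_GET call,
     * since continue_hypercall_on_cpu makes the set side effectively
     * asynchronous (hence the separate send/get mentioned above).
     */
    return HYPERVISOR_platform_op(&op);
}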
>>>>>> 
>>>>>> On the Xen hypervisor side, the patches include:
>>>>>> [Patch 1/2] implement the hypercall through which dom0 sends the
>>>>>> core parking request and gets the core parking result;
>>>>>> [Patch 2/2] implement Xen core parking. Different core parking
>>>>>> sequences give different power/performance results, due to the CPU
>>>>>> socket/core/thread topology. This patch provides power-first and
>>>>>> performance-first policies; users can choose the core parking
>>>>>> policy on their own, weighing the power/performance tradeoff.
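
To make the policy difference concrete, here is a rough sketch of how the
next CPU to park might be chosen (illustration only, based on Xen's usual
topology masks; the actual patch's selection order, and its handling of
socket-level grouping, may well differ):

/*
 * power-first:       prefer a CPU that is the last online thread on its
 *                    core, so whole cores (and eventually whole sockets)
 *                    go idle and can reach deep package C-states.
 * performance-first: prefer a CPU that still has an online sibling, so
 *                    SMT threads are parked before whole cores and as
 *                    many physical cores as possible stay active.
 */
static unsigned int pick_cpu_to_park(bool power_first)
{
    unsigned int cpu, fallback = NR_CPUS;

    for_each_online_cpu ( cpu )
    {
        bool has_online_sibling =
            cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) > 1;

        if ( cpu == 0 )           /* keep the boot CPU online */
            continue;

        if ( power_first ? !has_online_sibling : has_online_sibling )
            return cpu;           /* matches the preferred pattern */

        fallback = cpu;           /* remember any other candidate */
    }

    return fallback;              /* NR_CPUS if nothing can be parked */
}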
>>>>> 
>>>>> Does this really need to be implemented in the hypervisor? All
>>>>> this boils down to is a wrapper around cpu_down() and cpu_up(),
>>>>> which have hypercall interfaces already. So I'd rather see this
>>>>> as being an extension to Dom0's pCPU management patches (which
>>>>> aren't upstream afaict)... 
>>>>> 
>>>>> Jan
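
(For reference, the existing interface this refers to is the CPU hotplug
sysctl, which wraps cpu_up()/cpu_down() inside Xen and is reachable from
the toolstack via libxc; roughly as below, with the helper names quoted
from memory rather than verified against the tree:)

#include <xenctrl.h>

/* Offline ("park") one physical CPU: the hotplug sysctl ends up in
 * cpu_down() inside the hypervisor. */
static int park_one_cpu(xc_interface *xch, int cpu)
{
    return xc_cpu_offline(xch, cpu);
}

/* Bring it back online: the same path, ending in cpu_up(). */
static int unpark_one_cpu(xc_interface *xch, int cpu)
{
    return xc_cpu_online(xch, cpu);
}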
>>>> 
>>>> It's a design choice. Core parking is not only a wrapper around
>>>> cpu_down/up; it also involves policy algorithms which depend on the
>>>> physical CPU topology, cpu_online/present_map, etc. Implementing
>>>> core parking on the dom0 side would require exposing all of that
>>>> information to dom0, with potential issues (like coherence), while
>>>> dom0 would still have to do the same work as the hypervisor. Our
>>>> idea is to keep dom0 as the ACPI parser, and to hypercall and do
>>>> the rest on the hypervisor side.
>>> 
>>> Actually, after some more thought, I don't even think this ought to
>>> be implemented in the Dom0 kernel, but in user space altogether.
>>> Afaict all information necessary is already being exposed.
>>> 
>> 
>> No, user space lacks the necessary information. If I didn't
>> misunderstand, it has some dom0-side dependencies that are not ready
>> yet, like
>> 1. a sysfs interface exposing the Xen pcpu topology and maps;
>> 2. intercepting the pad notify and calling usermodehelper;
>> 3. a daemon to monitor and apply the core parking policy (the daemon
>> enabled when Linux runs as pvops under Xen, where kernel acpi_pad is
>> disabled, and disabled when Linux runs on bare metal, where kernel
>> acpi_pad is enabled).
>> 
>> Keeping the same approach as the native kernel, which handles acpi_pad
>> on the kernel side (for us, on the hypervisor side), seems a
>> reasonable choice. Per my understanding, core parking works hand in
>> hand with NPTM; the whole flow is basically a remote
>> controller-microengine-BIOS-kernel process, not necessarily involving
>> any user action.
>> 
> 
> Any comments?
> 

Sorry, forgot to re-attach patches :-)

Attachment: xen_core_parking_1.patch
Description: xen_core_parking_1.patch

Attachment: xen_core_parking_2.patch
Description: xen_core_parking_2.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

