
Re: [Xen-devel] [PATCH v7 16/32] xen/x86: allow disabling the pmtimer



>>> On 04.11.15 at 17:05, <roger.pau@xxxxxxxxxx> wrote:
> On 03/11/15 at 13:41, Jan Beulich wrote:
>>>>> On 03.11.15 at 11:57, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 03/11/15 07:21, Jan Beulich wrote:
>>>>>>> On 30.10.15 at 16:36, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> On 30/10/15 13:16, Jan Beulich wrote:
>>>>>>>>> On 30.10.15 at 13:50, <roger.pau@xxxxxxxxxx> wrote:
>>>>>>> On 14/10/15 at 16:37, Jan Beulich wrote:
>>>>>>>>>>> On 02.10.15 at 17:48, <roger.pau@xxxxxxxxxx> wrote:
>>>>>>>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>>>>>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>>>>>>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>>>>>> ---
>>>>>>>>> Changes since v6:
>>>>>>>>>  - Return ENODEV in pmtimer_load if the timer is disabled.
>>>>>>>>>  - hvm_acpi_power_button and hvm_acpi_sleep_button become no-ops if the
>>>>>>>>>    pmtimer is disabled.
>>>>>>>> But how are those two features connected? I don't think you can
>>>>>>>> assume absence of a PM block just because there's no PM timer.
>>>>>>>> Or if you want to tie them together for now, the predicate needs
>>>>>>>> to be renamed.
>>>>>>>>
>>>>>>>>>  - Return ENODEV if pmtimer_change_ioport is called with the pmtimer
>>>>>>>>>    disabled.
>>>>>>>> Same here.
>>>>>>> What about changing XEN_X86_EMU_PMTIMER into XEN_X86_EMU_PM and this
>>>>>>> flags disables all PM stuff?
>>>>>> Ah, right, that's a reasonable option.
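
For reference, the guard pattern the changelog describes looks roughly like
the sketch below. It is only an illustration: the structure, the has_vpm()
helper and the bit value are stand-ins, not code from the series.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define EMU_PM_FLAG (1u << 2)   /* illustrative bit, not a real ABI value */

struct demo_domain {
    uint32_t emulation_flags;
};

static bool has_vpm(const struct demo_domain *d)
{
    return d->emulation_flags & EMU_PM_FLAG;
}

/* Button events become no-ops when no PM block is emulated ... */
static void acpi_power_button(struct demo_domain *d)
{
    if ( !has_vpm(d) )
        return;
    /* ... otherwise set PWRBTN_STS and raise an SCI as usual. */
}

/* ... while state load / ioport changes report the device as absent. */
static int pmtimer_load(struct demo_domain *d)
{
    if ( !has_vpm(d) )
        return -ENODEV;
    /* ... restore the PM1a and TMR state here. */
    return 0;
}
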
>>>>> It still might be a nice idea to split them in two, given future work.
>>>>>
>>>>> To support hotplug properly (CPU, RAM and PCI), Xen needs to inject
>>>>> GPEs, which are part of the PM infrastructure.  To support PCI
>>>>> devices in the future without the whole PM infrastructure, it would be
>>>>> nice to keep the split.
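
The GPE path referred to here works, at a high level, like the sketch
below: a hotplug event latches a status bit in the emulated GPE0 block,
and an SCI is raised only if the guest has enabled that event. Names and
the bit assignment are made up for illustration.

#include <stdint.h>

#define GPE_CPU_HOTPLUG_BIT 2   /* arbitrary bit for the example */

struct gpe0_block {
    uint16_t sts;   /* write-1-to-clear status register */
    uint16_t en;    /* guest-programmed enable register */
};

static void gpe_inject(struct gpe0_block *gpe, unsigned int bit,
                       void (*assert_sci)(void))
{
    gpe->sts |= (uint16_t)(1u << bit);
    if ( gpe->en & (1u << bit) )
        assert_sci();   /* SCI stays pending until the guest clears sts */
}
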
>>>> Coming back to this - I'm not sure: The hotplug aspect as you
>>>> mention it should matter for Dom0 only. DomU could (and perhaps
>>>> should) use a PV interface instead.
>>>
>>> I disagree.
>>>
>>> All PVH guests should use the same mechanism; making a split between
>>> dom0 and domU will only make our lives harder.
>>>
>>> Where reasonable, we should follow what happens on native; one of the
>>> underlying points of PVH is to have less of an impact on the guest
>>> side.  In some cases it is indeed nasty, but has the advantage of being
>>> well understood.
>> 
>> What meaning would ACPI have to a PVH DomU?
>> 
>>>> So I'd like to suggest quite the opposite: Don't call the thing PM,
>>>> but make it more general and call it ACPI. And instead of
>>>> separating HPET, we might have this fall under ACPI as well, or
>>>> we might have a second TIMER flag, requiring both to be set
>>>> for there to be an HPET and PMTMR. This leaves open the option
>>>> of Dom0 getting ACPI enabled (despite this then being "real",
>>>> not emulated ACPI), but TIMER left off.
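
A sketch of the flag combination being suggested (names and bit values
invented for the example, not a proposed ABI):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical split: one flag for the ACPI PM block, one for the timer
 * blocks; both would be needed for an HPET and PMTMR to exist. */
#define EMU_ACPI   (1u << 0)
#define EMU_TIMER  (1u << 1)

static bool has_emulated_timers(uint32_t emu_flags)
{
    return (emu_flags & EMU_ACPI) && (emu_flags & EMU_TIMER);
}

/* A Dom0 with (real) ACPI but EMU_TIMER left off would then see ACPI
 * tables but no HPET/PMTMR. */
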
>>>
>>> An HPET can exist independently of other features such as ACPI.  It
>>> should have its own option.
>> 
>> Without ACPI there's no defined way to discover it. Doing what
>> Linux does - applying chipset knowledge - won't work on PVH either,
>> because there's no emulated chipset. That would leave scanning
>> physical memory, but if there is none, none can be found.
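
For context, the defined discovery path is the ACPI "HPET" table; its
rough layout (per the ACPI specification, reproduced here only to
illustrate the dependency on ACPI) is:

#include <stdint.h>

struct acpi_generic_address {
    uint8_t  space_id;      /* 0 = system memory */
    uint8_t  bit_width;
    uint8_t  bit_offset;
    uint8_t  access_size;
    uint64_t address;       /* HPET MMIO base, commonly 0xfed00000 */
} __attribute__((packed));

struct acpi_table_hpet {
    /* standard 36-byte ACPI table header */
    char     signature[4];  /* "HPET" */
    uint32_t length;
    uint8_t  revision;
    uint8_t  checksum;
    char     oem_id[6];
    char     oem_table_id[8];
    uint32_t oem_revision;
    uint32_t creator_id;
    uint32_t creator_revision;
    /* HPET-specific fields */
    uint32_t event_timer_block_id;
    struct acpi_generic_address address;
    uint8_t  hpet_number;
    uint16_t min_clock_tick;
    uint8_t  page_protection;
} __attribute__((packed));
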
>> 
>>> +1 to having an ACPI option, but as indicated above, I expect it to be
>>> used in the long term even for domU.
>> 
>> Again - why and how?
> 
> I think that at this point in the design it's not so important to have
> all the XEN_X86_EMU_* properly defined. This is not a public interface,
> so we can expand/reduce them whenever we want. Would it be fine, for the
> time being, to just have a single XEN_X86_EMU_PM flag that controls both
> the PM block and the PMTMR?

I think so, yes.

Jan

