
Re: [Xen-devel] [PATCH v7 16/32] xen/x86: allow disabling the pmtimer



On 03/11/15 12:41, Jan Beulich wrote:
>>>> On 03.11.15 at 11:57, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 03/11/15 07:21, Jan Beulich wrote:
>>>>>> On 30.10.15 at 16:36, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 30/10/15 13:16, Jan Beulich wrote:
>>>>>>>> On 30.10.15 at 13:50, <roger.pau@xxxxxxxxxx> wrote:
>>>>>> On 14/10/15 at 16:37, Jan Beulich wrote:
>>>>>>>>>> On 02.10.15 at 17:48, <roger.pau@xxxxxxxxxx> wrote:
>>>>>>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>>>>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>>>>>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>>>>> ---
>>>>>>>> Changes since v6:
>>>>>>>>  - Return ENODEV in pmtimer_load if the timer is disabled.
>>>>>>>>  - hvm_acpi_power_button and hvm_acpi_sleep_button become noops if the
>>>>>>>>    pmtimer is disabled.
>>>>>>> But how are those two features connected? I don't think you can
>>>>>>> assume absence of a PM block just because there's no PM timer.
>>>>>>> Or if you want to tie them together for now, the predicate needs
>>>>>>> to be renamed.
>>>>>>>
>>>>>>>>  - Return ENODEV if pmtimer_change_ioport is called with the pmtimer
>>>>>>>>    disabled.
>>>>>>> Same here.
>>>>>> What about changing XEN_X86_EMU_PMTIMER into XEN_X86_EMU_PM, so that this
>>>>>> flag disables all the PM stuff?
>>>>> Ah, right, that's a reasonable option.
>>>> It still might be a nice idea to split them in two, given future work.
>>>>
>>>> To support hotplug properly (CPU, RAM and PCI), Xen needs to inject
>>>> GPEs, which are part of the PM infrastructure.  To support PCI
>>>> devices in the future without the whole PM infrastructure, it would be
>>>> nice to keep the split.
>>> Coming back to this - I'm not sure: The hotplug aspect as you
>>> mention it should matter for Dom0 only. DomU could (and perhaps
>>> should) use a PV interface instead.
>> I disagree.
>>
>> All PVH guests should use the same mechanism; making a split between
>> dom0 and domU will only make our lives harder.
>>
>> Where reasonable, we should follow what happens on native; one of the
>> underlying points of PVH is to have less of an impact on the guest
>> side.  In some cases it is indeed nasty, but has the advantage of being
>> well understood.
> What meaning would ACPI have to a PVH DomU?

Whatever is covered in the tables provided.

For hotplug, this is at minimum a PM block which can be used to inject GPEs.
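
As a rough illustration (not code from this series; all names below are made
up), the GPE0 block is just a pair of status/enable bitfields: a hotplug
event sets a status bit, and the SCI stays asserted while any enabled status
bit remains set:

#include <stdint.h>
#include <stdbool.h>

/* Sketch of an emulated GPE0 block: equal-sized status/enable bitfields. */
struct gpe0_block {
    uint16_t sts;   /* GPE0_STS: write-1-to-clear status bits */
    uint16_t en;    /* GPE0_EN:  guest-controlled enable bits  */
};

/* Placeholder for driving the virtual SCI line. */
static void update_sci(const struct gpe0_block *gpe)
{
    bool level = gpe->sts & gpe->en;

    (void)level;    /* assert/deassert the guest's SCI GSI in real code */
}

/* Inject a GPE (e.g. a CPU/PCI hotplug event) into the guest. */
static void inject_gpe(struct gpe0_block *gpe, unsigned int bit)
{
    gpe->sts |= 1u << bit;
    update_sci(gpe);
}

/* Guest write to GPE0_STS: bits written as 1 are cleared. */
static void gpe0_sts_write(struct gpe0_block *gpe, uint16_t val)
{
    gpe->sts &= ~val;
    update_sci(gpe);
}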

>
>>> So I'd like to suggest quite the opposite: Don't call the thing PM,
>>> but make it more general and call it ACPI. And instead of
>>> separating HPET, we might have this fall under ACPI as well, or
>>> we might have a second TIMER flag, requiring both to be set
>>> for there to be an HPET and PMTMR. This leaves open the option
>>> of Dom0 getting ACPI enabled (despite this then being "real",
>>> not emulated ACPI), but TIMER left off.
>> An HPET can exist independently of other features such as ACPI.  It
>> should have its own option.
> Without ACPI there's no defined way to discover it. Doing what
> Linux does - applying chipset knowledge - won't work on PVH either,
> because there's no emulated chipset.  That would leave scanning
> physical memory, but if there is none, none can be found.

In reality, the legacy HPET always lives at 0xfed00000, so only a single
MMIO read is required to locate one.
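
Something along these lines (a minimal sketch, not Xen code; the MMIO
accessor below assumes the frame is identity-mapped) is enough to tell
whether anything sane decodes that address:

#include <stdint.h>
#include <stdbool.h>

#define HPET_LEGACY_BASE   0xfed00000UL
#define HPET_CAP_ID_REG    0x000          /* General Capabilities and ID */
#define HPET_MAX_PERIOD    0x05f5e100UL   /* 100ns, in femtoseconds */

/* Placeholder MMIO read; assumes the frame is identity-mapped. */
static inline uint64_t mmio_read64(unsigned long addr)
{
    return *(volatile uint64_t *)addr;
}

static bool hpet_present(void)
{
    uint64_t cap = mmio_read64(HPET_LEGACY_BASE + HPET_CAP_ID_REG);
    uint32_t period = cap >> 32;          /* COUNTER_CLK_PERIOD, in fs */

    /* All-zeroes or all-ones means nothing decoded the read. */
    if (cap == 0 || cap == ~0ULL)
        return false;

    /* The HPET spec requires a non-zero period no larger than 100ns. */
    return period != 0 && period <= HPET_MAX_PERIOD;
}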

As for the Linux chipset behaviour, that reminds me that I need to do
something similar in Xen to deny MMIO access.  At the moment, if the
legacy HPET is not exposed in the ACPI tables, Xen doesn't find the HPET
but Linux does, and attempts to play with interrupts.  It doesn't get
very far, but the kexec environment finds itself without a time source,
as Linux disables legacy broadcast mode.
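
For reference, the "deny MMIO access" part could be as simple as pulling the
HPET frame out of the domain's iomem rangeset; a rough sketch (not part of
this series, error handling elided):

#include <xen/iocap.h>
#include <xen/sched.h>

#define HPET_LEGACY_BASE 0xfed00000UL

/* Hide the legacy HPET frame from a domain that has no HPET in ACPI. */
static void hide_legacy_hpet(struct domain *d)
{
    unsigned long mfn = HPET_LEGACY_BASE >> PAGE_SHIFT;

    /* Drop the frame from the I/O memory the domain is allowed to map. */
    iomem_deny_access(d, mfn, mfn);
}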

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

