[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests



On 11/15/2016 03:07 PM, Andrew Cooper wrote:
> On 15/11/16 19:38, Boris Ostrovsky wrote:
>> On 11/15/2016 02:19 PM, Andrew Cooper wrote:
>>> On 15/11/16 15:56, Jan Beulich wrote:
>>>>>>> On 15.11.16 at 16:44, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>> On 11/15/2016 10:17 AM, Jan Beulich wrote:
>>>>>>> The other option was XEN_X86_EMU_ACPI. Would it be better?
>>>>>> As that's a little too wide (and I think someone else had also
>>>>>> disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
>>>>>> (for "fixed features"), or if that's still too wide, break things up
>>>>>> (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?
>>>>> I think this may be a bit too fine-grained. Fixed-features would be
>>>>> good, but is GPE block considered part of fixed features?
>>>> See figure 4-12 in ACPI 6.1: GEP{0,1} are included there, and the
>>>> text ahead of this makes it pretty clear that altogether they're
>>>> being called fixed hardware register blocks. So if you consider FF
>>>> misleading, FHRB would be another option.
>>> Please can we also considering a naming appropriate for joint use with
>>> HVM guests as well.
>>>
>>> For PVH, (if enabled), Xen handles all (implemented) fixed function
>>> registers.
>>>
>>> For HVM, Xen already intercepts and interposes on the PM1a_STS and
>>> PM1a_EN registers heading towards qemu, for the apparent purpose of
>>> raising SCIs on behalf of qemu.
>>>
>>> When we want to enable ACPI vcpu hotplug for HVM guests, 
>> What do you mean by "when"? We *are* doing ACPI hotplug for HVM guests,
>> aren't we?
> Are we?  If so, how?
>
> I don't see any toolstack or qemu code able to cope with APCI CPU
> hotplug.  I can definitely see ACPI PCI hotplug in qemu, but that does
> make sense.

piix4_acpi_system_hot_add_init():
   acpi_cpu_hotplug_init(parent, OBJECT(s), &s->gpe_cpu,
                            PIIX4_CPU_HOTPLUG_IO_BASE);


>
>> Or are you thinking about moving this functionality to the hypervisor?
> As an aside, we need to move some part of PCI hotplug into the
> hypervisor longterm.  At the moment, any new entity coming along and
> attaching to an ioreq server still needs to negotiate with Qemu to make
> the device appear.  This is awkward but doable if all device models are
> in dom0, but is far harder if the device models are in different domains.
>
> As for CPU hotplug, (if I have indeed overlooked something), Qemu has no
> business in this matter. 

Yes. And if we are going to do it for PVH we might as well do it for HVM
--- I think most of the code will be the same, save for how SCI is sent.

-boris

>  The device model exists to be an
> implementation of an LPC bridge, and is not responsible for any CPU
> related functionality; Xen does all vcpu handling.
>
>
> The Xen project and community have had a very rich history of hacking
> things up in the past, an frankly, it shows.  I want to ensure that
> development progresses in an architecturally clean and appropriate
> direction, especially if this enables us to remove some of the duck tape
> holding pre-existing features together.
>
> ~Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.