
Re: [Xen-devel] [PATCH RFC 18/20] libxc/acpi: Build ACPI tables for HVMlite guests



On 06/07/2016 10:10 AM, Jan Beulich wrote:
>>>> On 07.06.16 at 15:59, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> On 06/07/2016 02:17 AM, Jan Beulich wrote:
>>>>>> On 06.06.16 at 18:59, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>> On 06/06/2016 09:29 AM, Jan Beulich wrote:
>>>>>>>> On 06.04.16 at 03:25, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>>> +#define RESERVED_MEMORY_DYNAMIC_START 0xFC001000
>>>>>> +#define ACPI_PHYSICAL_ADDRESS         0x000EA020
>>>>>> +
>>>>>> +/* Initial allocation for ACPI tables */
>>>>>> +#define NUM_ACPI_PAGES  16
>>>>> With which other definitions do these three need to remain in sync?
>>>> NUM_ACPI_PAGES is private to this file.
>>>>
>>>> ACPI_PHYSICAL_ADDRESS (RSDP pointer) needs to be between 0xe0000 and 
>>>> 0xfffff, I picked this number because that's where most systems that I 
>>>> have 
>>>> appear to have it. (And by "most" I mean the two that I checked ;-))
>>> With there not being a BIOS, I can see this being pretty arbitrary.
>>> Yet in that case I'm not convinced of this getting put at a random
>>> address in the middle. 
>> I can put it in the beginning, at 0xe0000.
> I'd rather see it put higher up, close below 1Mb.
>
>>> Plus I'm not sure I see the connection to the
>>> reservations done in the E820 map the guest gets to see.
>> I thought ACPI data is supposed to live in reserved areas (ACPI data,
>> actually)?
> Correct - but where is such an E820 entry being produced for the
> guest?

It's not. I actually mentioned this in the cover letter.

I prototyped this at some point in libxl__arch_domain_construct_memmap().


>
>>>> RESERVED_MEMORY_DYNAMIC_START is one page after DSDT's SystemMemory (aka 
>>>> ACPI_INFO_PHYSICAL_ADDRESS). But then it looks like PVHv2 doesn't need 
>>>> SystemMemory so it can be anywhere (and e820 should presumably be aware of 
>>>> this, which it is not right now)
>>> So you say there's no connection to the end of hvmloader's window
>>> for PCI MMIO assignments (an equivalent of which is going to be
>>> needed for PVHv2)?
>> I haven't thought about this but then we don't have MMIO hole now. I can
>> try finding available memory chunk in guest's memory under 4G.
> Well, we first need to settle on the intended memory layout.
> And then we need to put this down in exactly one place, for all
> players to consume (and adhere to).

On the few systems that I looked at, the ACPI tables are placed right
before the MMIO region.

How about (HVM_BELOW_4G_RAM_END - NUM_ACPI_PAGES)?


>
>>> But note that as soon as
>>> you report processors in MADT, the combined set of tables holding
>>> AML code can't be empty anymore: Processors need to be
>>> declared using Processor() (legacy) or Device(). Maybe we don't
>>> need as much as an ordinary HVM guest, but nothing seems too little.
>> I will add Processor.
> And did you check whether there's something else that's
> mandatory (or even just kind of, due to e.g. ACPI CA relying
> on it)?

I didn't check the spec, but as far as ACPICA is concerned, this was
tested with PVHv2 Linux, and I think Roger ran this version on FreeBSD.
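For reference, a minimal legacy-style declaration of the kind Jan mentions could look like this in ASL (the names and the P_BLK address 0xB010 are illustrative; Xen's actual DSDT may differ):

```asl
DefinitionBlock ("", "DSDT", 2, "Xen", "HVM", 0)
{
    Scope (\_SB)
    {
        /* Legacy Processor() form; a Device() with _HID "ACPI0007"
         * is the modern equivalent. */
        Processor (PR00, 0x00, 0x0000B010, 0x06) { }
    }
}
```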

-boris




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

