
Re: [Xen-devel] [PATCH v2 16/17] libxc/xc_dom_arm: Copy ACPI tables to guest space



On Wed, Aug 03, 2016 at 08:20:18PM +0100, Julien Grall wrote:
> Hi Wei,
> 
> On 02/08/16 12:01, Wei Liu wrote:
> >On Thu, Jul 28, 2016 at 08:42:05PM +0800, Shannon Zhao wrote:
> >>On 28/07/16 19:06, Julien Grall wrote:
> >>>On 26/07/16 02:17, Boris Ostrovsky wrote:
> >>>>On 07/25/2016 07:40 PM, Stefano Stabellini wrote:
> >>>>>On Mon, 25 Jul 2016, Boris Ostrovsky wrote:
> >>>>>>On 07/25/2016 06:06 PM, Stefano Stabellini wrote:
> >>>>>>>On Mon, 25 Jul 2016, George Dunlap wrote:
> >>>>>>>>On Thu, Jul 21, 2016 at 10:15 PM, Stefano Stabellini
> >>>>>>>><sstabellini@xxxxxxxxxx> wrote:
> >>>>>>>Going back to the discussion about how to account for the ACPI
> >>>>>>>blob in maxmem, let's make this simple. If we increase maxmem by
> >>>>>>>the size of the ACPI blob:
> >>>>>>>
> >>>>>>>- the toolstack allocates more RAM than expected (bad)
> >>>>>>>- when the admin specifies 1GB of RAM, the guest actually gets 1GB of
> >>>>>>>   usable RAM (good)
> >>>>>>>- things are faster as Xen and the guest can exploit superpage
> >>>>>>>   mappings more easily at stage-1 and stage-2 (good)
> >>>>>>>
> >>>>>>>Let's call this option A.
> >>>>>>>
> >>>>>>>If we do not increase maxmem:
> >>>>>>>
> >>>>>>>- the toolstack allocates less RAM, closer to the size specified
> >>>>>>>   in the VM config file (good)
> >>>>>>>- the guest gets less usable memory than expected, less than what was
> >>>>>>>   specified in the VM config file (bad)
> >>>>>>
> >>>>>>Not sure I agree with this, at least for x86/Linux: guest gets 1GB of
> >>>>>>usable RAM and part of that RAM stores ACPI stuff. Guest is free to
> >>>>>>stash ACPI tables somewhere else or ignore them altogether and use that
> >>>>>>memory for whatever it wants.
> >>>>>On ARM it will be a ROM (from guest POV)
> >>>>
> >>>>
> >>>>In which case I don't see why we should take it from the maxmem
> >>>>allocation. I somehow thought there was a choice of whether to put it
> >>>>in ROM or RAM on ARM, but if it's ROM only then I don't think there
> >>>>is an option.
> >>>
> >>>We have the option to do both on ARM. I just feel that the ROM option
> >>>is a cleaner interface because the ACPI tables are not supposed to be
> >>>modified by the guest, so we can prevent them from being overwritten
> >>>(+ all the advantages mentioned by Stefano for option A).
> >>>
> >>>>IIUIC the toolstack pretends that the blob goes to memory because
> >>>>that's how its interfaces work, but that space is not really what we
> >>>>think about when we set memory/maxmem in the configuration file.
> >>>>Unlike on x86.
> >>>
> >>>I think we need to draw a conclusion so Shannon can continue the
> >>>work, and I would like to see this series in Xen 4.8. From my
> >>>understanding you are for option B, and so is George.
> >>>
> >>>Stefano votes for option A, but finds B acceptable. Any other opinions?
> >>I agree with Stefano; both are fine.
> >>
> >
> >Sorry for the late reply.
> >
> >Are you now unblocked? If not, what is still undecided or needs
> >clarification?
> 
> I don't think there was a strict consensus. I think this is something we
> can revisit later if necessary, as the guest interface is not tied to a
> specific physical address (the UEFI firmware should retrieve the
> information from the device tree).
> 
> So, Shannon could continue towards solution A, i.e. the ACPI blob is
> loaded outside of the guest RAM?
> 

I'm fine with that; the bottom line is that everything should be
documented so that we can confidently make changes later (or confidently
refuse to make changes, heh).

(Given the chance, I would still prefer a unified model.)
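
For concreteness, option A on the libxc side would presumably look
something like the sketch below. This is illustrative only, not the
patch itself: GUEST_ACPI_BASE and the helper name are made up here, and
the real addresses and error handling are whatever the series defines.

/*
 * Rough sketch of option A (assumed names, not the actual patch):
 * grow maxmem by the blob size, populate pages above the RAM banks,
 * and copy the tables in.
 */
#include <string.h>
#include <sys/mman.h>
#include <xenctrl.h>

#define GUEST_ACPI_BASE  0x20000000UL  /* assumed GPA above guest RAM */

static int copy_acpi_blob(xc_interface *xch, uint32_t domid,
                          uint64_t maxmem_kb, const void *blob, size_t size)
{
    size_t npages = (size + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
    xen_pfn_t pfns[npages];
    void *dst;
    size_t i;

    /* Option A: bump maxmem so the populate below doesn't eat into RAM. */
    if ( xc_domain_setmaxmem(xch, domid,
                             maxmem_kb + (npages << (XC_PAGE_SHIFT - 10))) )
        return -1;

    /* Populate the ACPI region, outside the guest RAM banks. */
    for ( i = 0; i < npages; i++ )
        pfns[i] = (GUEST_ACPI_BASE >> XC_PAGE_SHIFT) + i;

    if ( xc_domain_populate_physmap_exact(xch, domid, npages, 0, 0, pfns) )
        return -1;

    /* Map the region into the toolstack and copy the tables in. */
    dst = xc_map_foreign_range(xch, domid, npages << XC_PAGE_SHIFT,
                               PROT_READ | PROT_WRITE,
                               GUEST_ACPI_BASE >> XC_PAGE_SHIFT);
    if ( dst == NULL )
        return -1;

    memcpy(dst, blob, size);
    munmap(dst, npages << XC_PAGE_SHIFT);
    return 0;
}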

> If someone disagree please speak up. But we should unblock Shannon to get
> this series in Xen 4.8.

Yes, I agree.
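
On the discovery point: since the firmware finds the tables via the
device tree rather than a fixed address, the firmware-facing side only
needs the blob's location recorded somewhere discoverable. A libfdt
sketch follows; the "xen,acpi-base"/"xen,acpi-size" property names are
invented for illustration, and the real binding is whatever the series
defines.

/*
 * Sketch: record where the ACPI blob lives so the guest UEFI firmware
 * can find it.  The property names are hypothetical.
 */
#include <stdint.h>
#include <libfdt.h>

static int advertise_acpi(void *fdt, uint64_t base, uint64_t size)
{
    int chosen = fdt_path_offset(fdt, "/chosen");
    int rc;

    if ( chosen < 0 )
        return chosen;

    rc = fdt_setprop_u64(fdt, chosen, "xen,acpi-base", base);
    if ( rc )
        return rc;

    return fdt_setprop_u64(fdt, chosen, "xen,acpi-size", size);
}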

Wei.

> 
> Regards,
> 
> -- 
> Julien Grall
