
Re: [Xen-devel] [PATCH v2 32/41] arm : acpi dynamically map mmio regions



On Fri, 31 Jul 2015, Julien Grall wrote:
> Hi Shannon,
> 
> On 31/07/15 02:30, Shannon Zhao wrote:
> > 
> > 
> > On 2015/7/31 2:31, Julien Grall wrote:
> >> On 30/07/15 19:02, Parth Dixit wrote:
> >>> Instead of getting MMIO information for individual devices from the
> >>> DSDT, we will map all the non-RAM regions described by UEFI. The AML
> >>> interpreter option was discussed earlier and it was decided not to go
> >>> with that approach. You can find more details on the Linaro Xen wiki
> >>> for the reasoning behind it.
> >>
> >> Which page are you talking about? I only found [1] speaking about ACPI.
> >> However, there is nothing related to MMIO mapping there.
> >>
> >> Anyway, it's not possible to get the list of MMIO regions from the UEFI
> >> System Memory Map (see the mail you forwarded on the ML [2]).
> >>
> >> However, based on the RAM regions we could deduce a possible set of MMIO
> >> regions.
> > But I guess this will cover all the regions except the RAM regions, and
> > some of those regions may not exist on the hardware. Is it OK to map a
> > non-existent region to DOM0? Doesn't the map function fail?
> 
> I don't see a problem with it. I'm not sure what the others think about it.
> 
> The map function doesn't know whether the physical region is valid or not.
> It only sets up the page tables to allow the guest to use the physical region.
> 
> If DOM0 tries to access an invalid region, it will receive a
> data/prefetch abort.

I don't think there is a problem with mapping non-existent memory to dom0.
The only issue is that mapping very large amounts of memory to dom0 takes a
lot of memory in Xen just to store the pagetables themselves. If we do that
we should definitely use page sizes greater than 4K: 2M or 1G. A
back-of-envelope sketch of the pagetable cost follows.
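
Here is a rough, purely illustrative calculation (not Xen code) of why the
page size matters, assuming the LPAE stage-2 format with a 4K granule, i.e.
512 eight-byte entries per 4K table, so one level-3 table covers 2M:

    /* Back-of-envelope pagetable cost for covering a hole of `size' bytes.
     * Illustration only, not actual Xen code: with 4K mappings every 2M of
     * address space needs its own 4K level-3 table, while 2M (or 1G) block
     * mappings terminate the walk one (or two) levels earlier. */
    #include <stdio.h>

    static unsigned long long l3_tables_needed(unsigned long long size,
                                               unsigned long long map_size)
    {
        return map_size == 4096ULL ? size / (2ULL << 20) : 0;
    }

    int main(void)
    {
        unsigned long long hole = 1ULL << 40;   /* a 1TB non-RAM hole */

        printf("4K mappings: %llu MB of level-3 tables\n",
               l3_tables_needed(hole, 4096) * 4096 / (1 << 20));
        printf("2M mappings: %llu MB of level-3 tables\n",
               l3_tables_needed(hole, 2ULL << 20) * 4096 / (1 << 20));
        return 0;
    }

For a 1TB hole that is roughly 2GB of level-3 tables with 4K mappings versus
none at all with 2M blocks, which is why superpages are a must here.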

I think that starting out by simply relying on the UEFI System Memory Map
would be OK, even though we know that it is not complete. I would recommend
just doing that in the next version of this series and leaving this problem
for later. Although I think it should be solved before completing this work,
I wouldn't want everything else to get stuck because of it. Maybe you could
sort out the other issues while we are still discussing this one.


One option going forward is to map MMIO regions in Dom0 on demand when a
data abort traps into Xen. Specifically, in
xen/arch/arm/traps.c:do_trap_data_abort_guest we could check that the guest
is dom0 and that the address corresponds to a non-RAM region not owned by
Xen. If the checks succeed, then we map the page in Dom0. A rough sketch of
what that check could look like follows.
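
To make the idea concrete, here is a sketch of the kind of check I have in
mind. The helpers addr_is_ram, addr_owned_by_xen and map_one_mmio_page are
hypothetical placeholders, not existing Xen functions; a real patch would go
through Xen's p2m interfaces and would also have to make the guest retry the
faulting access:

    /* Sketch only: called from the data abort path on a stage-2
     * translation fault.  Returns 1 if the page was mapped and the access
     * should be retried, 0 to fall through to the normal abort handling. */
    static bool_t try_map_mmio_on_demand(struct vcpu *v, paddr_t gpa)
    {
        struct domain *d = v->domain;

        /* Only the hardware domain gets 1:1 MMIO mappings on demand. */
        if ( !is_hardware_domain(d) )
            return 0;

        /* The address must be neither RAM nor a region kept by Xen
         * (hypothetical helpers). */
        if ( addr_is_ram(gpa) || addr_owned_by_xen(gpa) )
            return 0;

        /* Map the faulting page 1:1 into dom0's stage-2 page tables. */
        return map_one_mmio_page(d, paddr_to_pfn(gpa)) == 0;
    }

do_trap_data_abort_guest would call this on a stage-2 translation fault
before injecting the abort into the guest.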


> >> It would be fine to map unused regions in memory, and we could
> >> take advantage of superpages.
> >>
> >> While we are speaking about the wiki page, can one of you update it
> >> with the boot protocol? Jan had some concerns about the solution you
> >> chose (see [3], to avoid pointing to the whole thread).
> >>
> > 
> > About the XENV table: from the discussion in the thread, it seems we
> > reached an agreement on using a hypercall to tell DOM0 the grant table
> > info and the event channel IRQ. Right?
> 
> People have different opinions on what the way to boot DOM0 with ACPI on
> ARM should be. A design document would help here to understand what the
> possibilities are for booting DOM0 (i.e. hypercall based, XENV, ...) and
> which one would be the most suitable for ARM.

As I wrote previously
(http://marc.info/?i=alpine.DEB.2.02.1505291102390.8130%40kaball.uk.xensource.com),
although I prefer tables, I am OK with hypercalls too, and for the sake
of moving this work forward in the quickest way possible, let's just do
that. This is a minor point in the grand scheme of things.

I suggest you introduce two new HVM params to get the grant table
address and the event channel PPI, see xen/include/public/hvm/params.h.
They can be retrieved using the HVMOP_get_param hypercall, along the
lines of the sketch below.
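
For illustration only: the two parameter names and indices below are
placeholders I made up (the actual ones would be whatever the series adds to
xen/include/public/hvm/params.h); the retrieval side uses the existing
HVMOP_get_param interface:

    /* Placeholder names and values only; the real indices must be newly
     * allocated in xen/include/public/hvm/params.h by the series. */
    #define HVM_PARAM_GNTTAB_BASE   40   /* guest physical address of the
                                            grant table region */
    #define HVM_PARAM_EVTCHN_PPI    41   /* PPI used for event channel
                                            notifications */

    /* Dom0 side: fetch one parameter via HVMOP_get_param. */
    static uint64_t xen_get_hvm_param(uint32_t index)
    {
        struct xen_hvm_param xhv = {
            .domid = DOMID_SELF,
            .index = index,
        };

        if ( HYPERVISOR_hvm_op(HVMOP_get_param, &xhv) < 0 )
            return 0;

        return xhv.value;
    }

    static void xen_acpi_read_boot_params(void)
    {
        uint64_t gnttab_base = xen_get_hvm_param(HVM_PARAM_GNTTAB_BASE);
        uint64_t evtchn_ppi  = xen_get_hvm_param(HVM_PARAM_EVTCHN_PPI);

        /* ... use them to map the grant table and request the event
         * channel interrupt ... */
    }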

Also remember that if we avoid the XENV table, then we need to set the
new FADT field "Hypervisor Vendor Identity" appropriately to advertise
the presence of Xen on the platform.
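
For example (sketch only, assuming the ACPICA-style struct layout where the
FADT's "Hypervisor Vendor Identity" field is named hypervisor_id, and using
"XenVMM" purely as an example tag, not an agreed value), the fixup on the
FADT copy handed to dom0 could look roughly like this:

    /* Trivial helper: sum of all bytes in the table (the ACPI rule is
     * that the bytes of a table sum to zero modulo 256). */
    static u8 acpi_byte_sum(const u8 *p, u32 len)
    {
        u8 sum = 0;

        while ( len-- )
            sum += *p++;

        return sum;
    }

    /* Sketch only: a real patch must use the actual field name/offset for
     * the FADT revision in use and agree on the identity string. */
    static void xen_fixup_fadt(struct acpi_table_fadt *fadt)
    {
        /* Advertise the presence of Xen to the ACPI-booted dom0 kernel. */
        memset(&fadt->hypervisor_id, 0, sizeof(fadt->hypervisor_id));
        memcpy(&fadt->hypervisor_id, "XenVMM", 6);

        /* The table was modified, so the checksum must be recomputed. */
        fadt->header.checksum = 0;
        fadt->header.checksum = -acpi_byte_sum((u8 *)fadt,
                                               fadt->header.length);
    }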


> >> We need to agree on the boot protocol before going further with this series.
> >>
> >> To speed up the upstreaming process, it would be nice if you started a
> >> thread about the boot protocol for ACPI with the relevant people CCed.
> >> The main goal will be to explain why you chose this approach. This will
> >> be the basis for discussing improvements and/or answering concerns from
> >> other people.
> >>
> > 
> > OK, it's good to reach an agreement before taking action.
> > 
> >> FWIW, Jan also suggested a different boot protocol based on the x86 one.
> >> It may be good for you to see whether it would fit ACPI on ARM.
> >>
> > 
> > Which boot protocol? Could you point it out? Thanks. :)
> 
> The way to boot DOM0 with ACPI. There is a page on the Linaro wiki [1],
> but the content is quite out of date now.
> 
> Regards,
> 
> [1] https://wiki.linaro.org/LEG/Engineering/Virtualization/ACPI_on_Xen

http://marc.info/?i=1431893048-5214-1-git-send-email-parth.dixit%40linaro.org
is a good start, but it needs more details. The important thing to nail
down is which information is passed to Dom0 and how, because it will
become a supported external interface going forward.

Specifically:
- what information is passed via the small device tree to dom0 and in
  what format
- how the ACPI tables are given to dom0
  * mapped or copied?
  * how do we pass a pointer to them to the kernel?
- if some tables are changed by Xen before passing them on, it would be
  good to list what was changed
  * what tables have been modified
  * what tables have been added
  * what tables have been removed
- how is the memory map passed to Dom0
  * how do we find out the list of MMIO regions, both temporary and
    future solutions
  * how do we tell dom0 where they are



 

