
Re: [Xen-devel] [PATCH for-4.9] xen/arm: acpi: Map MMIO on fault in stage-2 page table for the hardware domain



On Thu, 30 Mar 2017, Julien Grall wrote:
> When booting using ACPI, not all MMIOs can be discovered by parsing the
> static tables or the UEFI memory map. A lot of them will be described in
> the DSDT. However, Xen does not have an AML parser, so a different
> approach is required.
> 
> During the first discussions on supporting ACPI (see design doc [1]), it
> was decided to rely on the hardware domain to make a request to the
> hypervisor to map the MMIO region in stage-2 page table before accessing
> it. This approach works fine if the OS has limited hooks to modify the
> page tables.
> 
> In the case of the Linux kernel, notifiers have been added to map
> the MMIO regions when adding a new AMBA/platform device. Whilst this
> covers most of the MMIOs, some of them (e.g. OpRegion, ECAM...) are not
> related to a specific device, or the driver is not using the
> AMBA/platform API, so more hooks would need to be added in the code.
> 
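(For context, the Linux-side notifiers mentioned above work roughly as in
the sketch below. This is only an illustration, not the actual kernel
code; xen_map_mmio_range() is a hypothetical helper standing in for the
real hypercall plumbing, shown in a second sketch further down.)

    #include <linux/platform_device.h>
    #include <linux/notifier.h>
    #include <linux/ioport.h>
    #include <linux/pfn.h>

    /* Hypothetical helper: would issue the mapping hypercall. */
    static int xen_map_mmio_range(unsigned long first_pfn,
                                  unsigned long nr_pfns);

    /* Sketch: map a new platform device's MMIO resources into the
     * hardware domain's stage-2 before any driver touches them. */
    static int xen_platform_notify(struct notifier_block *nb,
                                   unsigned long action, void *data)
    {
        struct device *dev = data;
        struct platform_device *pdev = to_platform_device(dev);
        int i;

        if (action != BUS_NOTIFY_ADD_DEVICE)
            return NOTIFY_DONE;

        for (i = 0; i < pdev->num_resources; i++) {
            struct resource *r = &pdev->resource[i];

            if (resource_type(r) != IORESOURCE_MEM)
                continue;

            xen_map_mmio_range(PFN_DOWN(r->start),
                               PFN_UP(resource_size(r)));
        }

        return NOTIFY_OK;
    }

    static struct notifier_block xen_platform_nb = {
        .notifier_call = xen_platform_notify,
    };

    /* Registered at boot with:
     * bus_register_notifier(&platform_bus_type, &xen_platform_nb); */
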
> Various approaches have been discussed (see [2]), one of them was to
> create stage-2 mappings seamlessly in Xen upon hardware memory faults.
> This approach was first ruled out because it relies on the hardware
> domain to probe the region before any use. So this would not work when
> DMA'ing to another device's MMIO region when the device is protected by
> an SMMU. It has been pointed out that this is a limited use case compared
> to DMA'ing between MMIO and RAM.
> 
> This patch implements this approach. All MMIO regions will be mapped in
> stage-2 using p2m_mmio_direct_c (i.e. normal memory, outer and inner
> write-back cacheable). The stage-1 page table will be in control of the
> memory attributes. This is fine because the hardware domain is a trusted
> domain.
> 
> Note that MMIO will only be mapped on a data abort fault. It is assumed
> that it will not be possible to execute code from MMIO
> (p2m_mmio_direct_c will forbid that).
> 
> As mentioned above, this solution will cover most of the cases. If a
> platform requires DMA'ing to another device's MMIO region without any
> prior access performed by the OS, then it will be expected either to have
> specific platform code in the hypervisor to map the MMIO at boot time, or
> to have the OS use the existing hypercalls (i.e. XENMEM_add_to_physmap{,_batch})
> before any access.
> 
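(To illustrate the fallback path, here is a minimal sketch of what such a
mapping request could look like from the hardware domain's kernel,
assuming a single page and the existing XENMAPSPACE_dev_mmio space; the
function name is made up and error handling is omitted:)

    #include <xen/interface/memory.h>   /* struct xen_add_to_physmap */
    #include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op() */

    /* Map one MMIO page at gfn == mfn in the hardware domain's stage-2. */
    static int xen_map_one_mmio_page(xen_pfn_t mfn)
    {
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_dev_mmio, /* 1:1 device MMIO mapping */
            .idx   = mfn,                  /* machine frame to map */
            .gpfn  = mfn,                  /* guest frame (== mfn for dom0) */
        };

        return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
    }
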
> [1] https://lists.xen.org/archives/html/xen-devel/2015-11/msg00488.html
> [2] https://marc.info/?l=linux-arm-kernel&m=148469169210500&w=2
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
> 
> ---
>     This patch is a candidate for Xen 4.9. Whilst the last posting date
>     was a few days ago, I think it would be good to have this patch in the
>     next release. Although ACPI support for ARM64 is still a technical
>     preview, it would help servers to test Xen upstream on their platform
>     with ACPI and provide feedback on missing features.
> 
>     All the code is gated by !acpi_disabled and therefore will not be
>     executed when using Device Tree.
> 
>     RM hat on: I will leave the decision to Stefano.
> 
>     Shanker: You mentioned offline that you tested the patch. May I add
>     your tested-by?
> 
> ---
>  xen/arch/arm/traps.c | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 614501f761..ebf915bb3b 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -32,6 +32,7 @@
>  #include <xen/perfc.h>
>  #include <xen/virtual_region.h>
>  #include <xen/mem_access.h>
> +#include <xen/iocap.h>
>  #include <public/sched.h>
>  #include <public/xen.h>
>  #include <asm/debugger.h>
> @@ -49,6 +50,7 @@
>  #include <asm/gic.h>
>  #include <asm/vgic.h>
>  #include <asm/cpuerrata.h>
> +#include <asm/acpi.h>
>  
>  /* The base of the stack must always be double-word aligned, which means
>   * that both the kernel half of struct cpu_user_regs (which is pushed in
> @@ -2534,6 +2536,35 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
>      return !!handle_mmio(info);
>  }
>  
> +/*
> + * When using ACPI, most of the MMIO regions will be mapped on-demand
> + * in stage-2 page tables for the hardware domain, because Xen cannot
> + * discover the MMIO regions from the EFI memory map.
> + */
> +static bool try_map_mmio(gfn_t gfn)
> +{
> +    struct domain *d = current->domain;
> +
> +    /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
> +    mfn_t mfn = _mfn(gfn_x(gfn));
> +
> +    /*
> +     * Device-Tree should already have everything mapped when building
> +     * the hardware domain.
> +     */
> +    if ( acpi_disabled )
> +        return false;
> +
> +    if ( d != hardware_domain )

is_hardware_domain(d)
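i.e. something along the lines of:

   if ( !is_hardware_domain(d) )
       return false;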


> +        return false;
> +
> +    /* The hardware domain can only map permitted MMIO regions */
> +    if ( !iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + 1) )
> +        return false;

Because of the potential for causing problems with DMA to device MMIO,
I would add a warning here, something like:

   dprintk(XENLOG_WARNING, "Mapping gfn=mfn=%"PRI_gfn" to Dom%d\n",
           gfn_x(gfn), d->domain_id);

I hope that it will help us debug issues when/if we actually
encounter such a case.


> +    return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
> +}
> +
>  static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>                                       const union hsr hsr)
>  {
> @@ -2611,6 +2642,9 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>          if ( !mfn_eq(mfn, INVALID_MFN) )
>              return;
>  
> +        if ( try_map_mmio(_gfn(paddr_to_pfn(info.gpa))) )
> +            return;
> +
>          break;
>      default:
>          gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

