RE: [PATCH] x86/mm: do not mark IO regions as Xen heap



> -----Original Message-----
> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: 10 September 2020 14:35
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>;
> Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>;
> Paul Durrant <paul@xxxxxxx>
> Subject: [PATCH] x86/mm: do not mark IO regions as Xen heap
> 
> arch_init_memory will treat all the gaps on the physical memory map
> between RAM regions as MMIO and use share_xen_page_with_guest in order
> to assign them to dom_io. This has the side effect of setting the Xen
> heap flag on such pages, and thus is_special_page would then return
> true, which is an issue in epte_get_entry_emt because such pages will
> be forced to use write-back cache attributes.
> 
> Fix this by introducing a new helper to assign the MMIO regions to
> dom_io without setting the Xen heap flag on the pages, so that
> is_special_page will return false and the pages won't be forced to use
> write-back cache attributes.
> 
> Fixes: 81fd0d3ca4b2cd ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
> Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Cc: Paul Durrant <paul@xxxxxxx>
> ---
>  xen/arch/x86/mm.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 35ec0e11f6..4daf4e038a 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -271,6 +271,18 @@ static l4_pgentry_t __read_mostly split_l4e;
>  #define root_pgt_pv_xen_slots ROOT_PAGETABLE_PV_XEN_SLOTS
>  #endif
> 
> +static void __init assign_io_page(struct page_info *page)
> +{
> +    set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY);
> +
> +    /* The incremented type count pins as writable. */
> +    page->u.inuse.type_info = PGT_writable_page | PGT_validated | 1;
> +
> +    page_set_owner(page, dom_io);
> +
> +    page->count_info |= PGC_allocated | 1;
> +}
> +
>  void __init arch_init_memory(void)
>  {
>      unsigned long i, pfn, rstart_pfn, rend_pfn, iostart_pfn, ioend_pfn;
> @@ -291,7 +303,7 @@ void __init arch_init_memory(void)
>       */
>      BUG_ON(pvh_boot && trampoline_phys != 0x1000);
>      for ( i = 0; i < 0x100; i++ )
> -        share_xen_page_with_guest(mfn_to_page(_mfn(i)), dom_io, SHARE_rw);
> +        assign_io_page(mfn_to_page(_mfn(i)));
> 
>      /* Any areas not specified as RAM by the e820 map are considered I/O. */
>      for ( i = 0, pfn = 0; pfn < max_page; i++ )
> @@ -332,7 +344,7 @@ void __init arch_init_memory(void)
>              if ( !mfn_valid(_mfn(pfn)) )
>                  continue;
> 
> -            share_xen_page_with_guest(mfn_to_page(_mfn(pfn)), dom_io, SHARE_rw);
> +            assign_io_page(mfn_to_page(_mfn(pfn)));

Now that these calls to share_xen_page_with_guest() are gone, can we change
share_xen_page_with_guest() to ASSERT that PGC_xen_heap is already set, and
avoid (needlessly) ORing it in?
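
Something along these lines, perhaps (just a rough, untested sketch of the
tail end of share_xen_page_with_guest(); the exact surrounding code may
differ):

    /*
     * With the MMIO/dom_io callers switched to assign_io_page(), any
     * page handed to share_xen_page_with_guest() should already be a
     * Xen heap page, i.e. the allocator should have set the flag.
     */
    ASSERT(page->count_info & PGC_xen_heap);

    /* Only add to the allocation list if the domain isn't dying. */
    if ( !d->is_dying )
    {
        /* No need to OR in PGC_xen_heap any more. */
        page->count_info |= PGC_allocated | 1;
        if ( unlikely(d->xenheap_pages++ == 0) )
            get_knownalive_domain(d);
        page_list_add_tail(page, &d->xenpage_list);
    }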

  Paul


>          }
> 
>          /* Skip the RAM region. */
> --
> 2.28.0