
[Xen-devel] [for-4.8][PATCH v2 00/23] xen/arm: Rework the P2M code to follow break-before-make sequence



Hello all,

The ARM architecture mandates the use of a break-before-make sequence when
replacing a valid translation entry with another valid entry, if the page
tables are shared between multiple CPUs (see D4.7.1 in ARM DDI 0487A.j for
more details).

The current P2M code does not respect this sequence and may break
coherency on some processors.

Adapting the current implementation to use the break-before-make sequence
would imply some code duplication and more TLB invalidations than necessary.
For instance, if we are replacing a 4KB page and the current mapping in
the P2M is using a 1GB superpage, the following steps will happen:
    1) Shatter the 1GB superpage into a series of 2MB superpages
    2) Shatter the 2MB superpage into a series of 4KB pages
    3) Replace the 4KB page

As the current implementation shatters while descending and installs the
mapping before continuing to the next level, Xen would need to issue 3
TLB invalidations, which is clearly inefficient.

Furthermore, all the operations which modify the page tables share the
same skeleton. It is more complicated to maintain separate code paths than
to have a generic function that sets an entry and takes care of the
break-before-make sequence.

The new implementation is based on the x86 EPT one, which I think fits the
break-before-make sequence quite well whilst keeping the code simple.

For the detailed changes, see each individual patch.

I have provided a branch based on upstream here:
git://xenbits.xen.org/people/julieng/xen-unstable.git branch p2m-v2

Comments are welcome.

Yours sincerely,

Cc: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
Cc: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
Cc: Shanker Donthineni <shankerd@xxxxxxxxxxxxxx>
Cc: Dirk Behme <dirk.behme@xxxxxxxxxxxx>
Cc: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>

Julien Grall (23):
  xen/arm: do_trap_instr_abort_guest: Move the IPA computation out of
    the switch
  xen/arm: p2m: Store in p2m_domain whether we need to clean the entry
  xen/arm: p2m: Rename parameter in p2m_{remove,write}_pte...
  xen/arm: p2m: Use typesafe gfn in p2m_mem_access_radix_set
  xen/arm: p2m: Add a back pointer to domain in p2m_domain
  xen/arm: traps: Move MMIO emulation code in a separate helper
  xen/arm: traps: Check the P2M before injecting a data/instruction
    abort
  xen/arm: p2m: Invalidate the TLBs when write unlocking the p2m
  xen/arm: p2m: Change the type of level_shifts from paddr_t to uint8_t
  xen/arm: p2m: Move the lookup helpers at the top of the file
  xen/arm: p2m: Introduce p2m_get_root_pointer and use it in
    __p2m_lookup
  xen/arm: p2m: Introduce p2m_get_entry and use it to implement
    __p2m_lookup
  xen/arm: p2m: Replace all usage of __p2m_lookup with p2m_get_entry
  xen/arm: p2m: Re-implement p2m_cache_flush using p2m_get_entry
  xen/arm: p2m: Make p2m_{valid,table,mapping} helpers inline
  xen/arm: p2m: Introduce a helper to check if an entry is a superpage
  xen/arm: p2m: Introduce p2m_set_entry and __p2m_set_entry
  xen/arm: p2m: Re-implement relinquish_p2m_mapping using
    p2m_{get,set}_entry
  xen/arm: p2m: Re-implement p2m_remove_mapping using p2m_set_entry
  xen/arm: p2m: Re-implement p2m_insert_mapping using p2m_set_entry
  xen/arm: p2m: Re-implement p2m_set_mem_access using
    p2m_{set,get}_entry
  xen/arm: p2m: Do not handle shattering in p2m_create_table
  xen/arm: p2m: Export p2m_*_lock helpers

 xen/arch/arm/domain.c      |    8 +-
 xen/arch/arm/p2m.c         | 1316 ++++++++++++++++++++++----------------------
 xen/arch/arm/traps.c       |  126 +++--
 xen/include/asm-arm/p2m.h  |   63 +++
 xen/include/asm-arm/page.h |    8 +
 5 files changed, 828 insertions(+), 693 deletions(-)

-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
