
Re: [XEN v1] xen/arm: io: Check ESR_EL2.ISV != 0 before searching for a MMIO handler



On Thu, 27 Jan 2022 at 22:40, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> I am with you on both points.
>
> One thing I noticed is that the code today is not able to deal with
> IO_UNHANDLED for MMIO regions handled by IOREQ servers or Xen MMIO
> emulator handlers. p2m_resolve_translation_fault and try_map_mmio are
> called after try_handle_mmio returns IO_UNHANDLED but try_handle_mmio is
> not called a second time (or am I mistaken?)

Why would you need it? If try_handle_mmio() doesn't work the first time, then
it will not work the second time either.

>
> Another thing I noticed is that currently find_mmio_handler and
> try_fwd_ioserv expect dabt to be already populated and valid so it would
> be better if we could get there only when dabt.valid.
>
> With these two things in mind, I think maybe the best thing to do is to
> change the code in do_trap_stage2_abort_guest slightly so that
> p2m_resolve_translation_fault and try_map_mmio are called first when
> !dabt.valid.

An abort will most likely happen because of emulated I/O. If we called
p2m_resolve_translation_fault() and try_map_mmio() first, then processing
would take longer than necessary for the common case.

So I think we want to keep the order as it is. I.e first trying the MMIO
and then falling back to the less likely reason for a trap.

>
> Patch below only for explaning; it doesn't build as is and I am not sure
> it is 100% correct. For instance, if try_map_mmio succeeds, should we
> return or goto again?

If try_map_mmio() succeeds, then it means we created a valid entry
in the stage-2 page tables. Therefore, we want to return to the guest
and re-try the instruction.

The same applies to p2m_resolve_translation_fault(). If it succeeds,
then it means we set the valid bit in an existing entry, so we want
to return to the guest and re-try directly as well.

Cheers,



 

