
Re: [XEN v9 3/4] xen/arm64: io: Handle the abort due to access to stage1 translation table



On Fri, 4 Mar 2022, Ayan Kumar Halder wrote:
> On 04/03/2022 01:43, Stefano Stabellini wrote:
> > On Tue, 1 Mar 2022, Ayan Kumar Halder wrote:
> > > If the abort was caused by an access to the stage1 translation table, Xen
> > > will assume that the stage1 translation table is in a non-MMIO region.
> > > It will try to resolve the translation fault. If it succeeds, it will
> > > return to the guest to retry the instruction. If not, it means that
> > > the table is in an MMIO region, which is not expected by Xen. Thus,
> > > Xen will forward the abort to the guest.
> > > 
> > > Signed-off-by: Ayan Kumar Halder <ayankuma@xxxxxxxxxx>
> > > ---
> > > 
> > > Changelog :-
> > > 
> > > v1..v8 - NA
> > > 
> > > v9 - 1. Extracted this change from "[XEN v8 2/2] xen/arm64: io: Support
> > > instructions (for which ISS is not..." into a separate patch of its own,
> > > since it fixes an existing bug in the codebase.
> > > 
> > >   xen/arch/arm/io.c    | 11 +++++++++++
> > >   xen/arch/arm/traps.c | 12 +++++++++++-
> > >   2 files changed, 22 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> > > index bea69ffb08..ebcb8ed548 100644
> > > --- a/xen/arch/arm/io.c
> > > +++ b/xen/arch/arm/io.c
> > > @@ -128,6 +128,17 @@ void try_decode_instruction(const struct cpu_user_regs *regs,
> > >           return;
> > >       }
> > >   +    /*
> > > +     * At this point, we know that the stage1 translation table is in the MMIO
> > > +     * region. This is not expected by Xen and thus it forwards the abort to the
> > > +     * guest.
> > > +     */
> > > +    if ( info->dabt.s1ptw )
> > > +    {
> > > +        info->dabt_instr.state = INSTR_ERROR;
> > > +        return;
> > > +    }
> > > +
> > >       /*
> > >        * Armv8 processor does not provide a valid syndrome for decoding some
> > >        * instructions. So in order to process these instructions, Xen must
> > > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > > index 120c971b0f..e491ca15d7 100644
> > > --- a/xen/arch/arm/traps.c
> > > +++ b/xen/arch/arm/traps.c
> > > @@ -1923,6 +1923,7 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
> > >       bool is_data = (hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
> > >       mmio_info_t info;
> > >       enum io_state state;
> > > +    bool check_mmio_region = true;
> > >         /*
> > >        * If this bit has been set, it means that this stage-2 abort is caused
> > > @@ -1987,7 +1988,16 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
> > >            */
> > >           if ( !is_data || !info.dabt.valid )
> > >           {
> > > -            if ( check_p2m(is_data, gpa) )
> > > +            /*
> > > +             * If the translation fault was caused due to access to stage 1
> > > +             * translation table, then we try to set the translation table entry
> > > +             * for page1 translation table (assuming that it is in the non mmio
> >                        ^ stage1
> > 
> > Do you mean to say maybe:
> Yes, it should be stage1. Sorry for typo.
> > 
> > If the translation fault was caused by an access to stage 1 translation
> > table, then no need to change the stage 2 p2m.
> > 
> > ?
> 
> The translation fault was caused by an access to the stage1 translation
> table. As per my understanding, the stage1 tables live in memory that is
> mapped by stage2 translation table entries. Thus, Xen needs to modify the
> corresponding stage2 p2m entries.

OK, I follow what you are saying and what this patch is doing now. I suggest:

If the translation fault was caused due to access to the stage 1
translation table, then we try to set the p2m entry for the stage 1
translation table, but we don't handle stage 1 translation tables in
MMIO regions.



 

