
Re: [Xen-devel] [PATCH] x86/HVM: correct repeat count update in linear->phys translation



On 07/09/17 11:41, Jan Beulich wrote:
> For the insn emulator's fallback logic in REP MOVS/STOS/INS/OUTS
> handling to work correctly, *reps must not be set to zero when
> returning X86EMUL_UNHANDLEABLE.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Why is this?  When X86EMUL_UNHANDLEABLE is returned, the emulator
appears to override nr_reps to 1 anyway.

I'm clearly missing something.
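
To spell out my reading (a sketch only -- retry_single_rep() is a
placeholder of mine, not an actual function in emulate.c, and the exact
placement of the fallback doesn't matter for the question):

    rc = hvmemul_linear_to_phys(addr, &gpa, bytes_per_rep, reps, pfec,
                                hvmemul_ctxt);
    if ( rc == X86EMUL_UNHANDLEABLE )
    {
        *reps = 1;   /* single-iteration fallback, as I understand it */
        rc = retry_single_rep(addr, bytes_per_rep, hvmemul_ctxt);
    }

in which case whatever value the callee left in *reps would appear not
to matter.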

>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -566,15 +566,16 @@ static int hvmemul_linear_to_phys(
>              if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
>                  return X86EMUL_RETRY;
>              done /= bytes_per_rep;
> -            *reps = done;
>              if ( done == 0 )
>              {
>                  ASSERT(!reverse);
>                  if ( npfn != gfn_x(INVALID_GFN) )
>                      return X86EMUL_UNHANDLEABLE;
> +                *reps = 0;
>                  x86_emul_pagefault(pfec, addr & PAGE_MASK,
>                                     &hvmemul_ctxt->ctxt);

Independently of the issue at hand, this looks suspicious for the
reverse direction.

Hardware will issue a walk for the first byte of the access, and
optionally a second at the start of the subsequent page for a straddled
access.  For the reverse case, this looks like it will truncate the
reported fault address down to the start of the lower page, which I bet
isn't how hardware actually behaves.
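
To make that concrete (made-up numbers, 4k pages): take a reverse rep
access with 4-byte elements descending from linear 0x2004, where the
page mapping 0x1000-0x1fff is not present.

    0x2004, 0x2000  -> page at 0x2000, walk succeeds
    0x1ffc          -> page at 0x1000, not present

Executed iteration by iteration, hardware faults on the access at
0x1ffc and reports that address, whereas here (if I'm reading the loop
right) addr has already been stepped down by a whole page to 0x1004, so
addr & PAGE_MASK yields 0x1000, the very bottom of the lower page.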

(In my copious free time, I should really put together a discontinuous
rep insn XTF test.)

~Andrew

>                  return X86EMUL_EXCEPTION;
>              }
> +            *reps = done;
>              break;
>          }
>  
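
FWIW, reconstructing the block with both hunks applied:

            if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
                return X86EMUL_RETRY;
            done /= bytes_per_rep;
            if ( done == 0 )
            {
                ASSERT(!reverse);
                if ( npfn != gfn_x(INVALID_GFN) )
                    return X86EMUL_UNHANDLEABLE;
                *reps = 0;
                x86_emul_pagefault(pfec, addr & PAGE_MASK,
                                   &hvmemul_ctxt->ctxt);
                return X86EMUL_EXCEPTION;
            }
            *reps = done;
            break;

so relative to the old code the only behavioural difference is that the
X86EMUL_UNHANDLEABLE return no longer zeroes *reps.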
