
Re: [Xen-devel] [PATCH v11 2/5] x86emul: New return code for unimplemented instruction



On Tue, 2017-09-19 at 09:19 -0600, Jan Beulich wrote:
> > > > On 12.09.17 at 16:32, <ppircalabu@xxxxxxxxxxxxxxx> wrote:
> > Enforce the distinction between an instruction not implemented by the
> > emulator and the failure to emulate that instruction by defining a new
> > return code, X86EMUL_UNIMPLEMENTED.
> >
> > This value should be returned by the core emulator only if it fails to
> > properly decode the current instruction's opcode, and not by any of the
> > other functions, such as the x86_emulate_ops or the hvm_io_ops callbacks.
> >
> > e.g. hvm_process_io_intercept should not return X86EMUL_UNIMPLEMENTED.
> > The return value of this function depends on either the return code of
> > one of the hvm_io_ops handlers (read/write) or the value returned by
> > hvm_copy_from_guest_phys / hvm_copy_to_guest_phys.
> >
> > Similarly, none of these functions should not return
> > X86EMUL_UNIMPLEMENTED.
> I think someone had already pointed out the strange double
> negation here.
Will be fixed in the next patchset iteration.
>
> >
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -192,6 +192,8 @@ static int hvmemul_do_io(
> >      ASSERT(p.count <= *reps);
> >      *reps = vio->io_req.count = p.count;
> >
> > +    ASSERT(rc != X86EMUL_UNIMPLEMENTED);
> > +
> >      switch ( rc )
> >      {
> >      case X86EMUL_OKAY:
> The assertion wants to move into the switch(), making use
> of ASSERT_UNREACHABLE().
>
Will be fixed in the next patchset iteration.
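Just to confirm I read the suggestion correctly, I plan to end up with
something along these lines. This is only a sketch of the shape, with the
other cases elided; BUG() below merely stands in for whatever the existing
default path in hvmemul_do_io() already does:

    switch ( rc )
    {
    case X86EMUL_OKAY:
        break;
    case X86EMUL_UNIMPLEMENTED:
        /* The core emulator must never let this value reach here. */
        ASSERT_UNREACHABLE();
        /* Fall-through */
    default:
        BUG();
    }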
> >
> > @@ -2045,6 +2054,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
> >      switch ( rc )
> >      {
> >      case X86EMUL_UNHANDLEABLE:
> > +    case X86EMUL_UNIMPLEMENTED:
> >          hvm_dump_emulation_state(XENLOG_G_WARNING, "MMCFG", &ctxt);
> >          break;
> I would have preferred if, just like you do here, ...
>
> >
> > @@ -2102,6 +2112,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
> >           * consistent with X86EMUL_RETRY.
> >           */
> >          return;
> > +    case X86EMUL_UNIMPLEMENTED:
> >      case X86EMUL_UNHANDLEABLE:
> >          hvm_dump_emulation_state(XENLOG_G_DEBUG, "Mem event", &ctx);
> ... you had added the new case label below existing ones uniformly.
> But anyway.
The order is reversed in this case because patch 4/5 of this series adds
the hvm_monitor_emul_unimplemented call, with a fall-through to the
X86EMUL_UNHANDLEABLE handling in case of failure (see the sketch below).
If I changed the default order here, I would have to reverse it again
when adding the monitor notification handling.
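Roughly, with patch 4/5 applied the case becomes the following. This is a
sketch of the intended shape only; the exact condition around the monitor
call is the one patch 4/5 introduces, not necessarily verbatim what is
shown here:

    case X86EMUL_UNIMPLEMENTED:
        if ( hvm_monitor_emul_unimplemented() )
            /* A monitor subscriber took over; nothing more to do here. */
            return;
        /* Fall-through */
    case X86EMUL_UNHANDLEABLE:
        hvm_dump_emulation_state(XENLOG_G_DEBUG, "Mem event", &ctx);
        break;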
>
> >
> > @@ -2585,7 +2586,7 @@ x86_decode(
> >                          d = twobyte_table[0x3a].desc;
> >                          break;
> >                      default:
> > -                        rc = X86EMUL_UNHANDLEABLE;
> > +                        rc = X86EMUL_UNIMPLEMENTED;
> >                          goto done;
> >                      }
> >                  }
> > @@ -2599,7 +2600,7 @@ x86_decode(
> >                  }
> >                  else
> >                  {
> > -                    rc = X86EMUL_UNHANDLEABLE;
> > +                    rc = X86EMUL_UNIMPLEMENTED;
> >                      goto done;
> At least these two should be "unrecognized" now.
Will be fixed in the next patchset iteration.
>
> >
> > @@ -2879,7 +2880,7 @@ x86_decode(
> >
> >      default:
> >          ASSERT_UNREACHABLE();
> > -        return X86EMUL_UNHANDLEABLE;
> > +        return X86EMUL_UNIMPLEMENTED;
> >      }
> This one, otoh, is probably fine this way for now.
>
> >
> > @@ -6195,7 +6196,7 @@ x86_emulate(
> >                  /* vpsll{w,d} $imm8,{x,y}mm,{x,y}mm */
> >              break;
> >          default:
> > -            goto cannot_emulate;
> > +            goto unimplemented_insn;
> >          }
> This again wants to be "unrecognized".
In this case "unrecognized" cannot be returned (yet) as instructions
VPRORD and VPROLD are not implemented.
http://sandpile.org/x86/opc_grp.htm (group #13 (EVEX 66h) (0Fh,72h) )

>
> >
> > @@ -6243,7 +6244,7 @@ x86_emulate(
> >          case 6: /* psllq $imm8,mm */
> >              goto simd_0f_shift_imm;
> >          }
> > -        goto cannot_emulate;
> > +        goto unimplemented_insn;
> And this too. Together with previous discussion I think you should
> now see the pattern for everything further down from here.
Will be fixed in the next patchset iteration.
>
> >
> > --- a/xen/arch/x86/x86_emulate/x86_emulate.h
> > +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
> > @@ -133,6 +133,18 @@ struct x86_emul_fpu_aux {
> >    * Undefined behavior when used anywhere else.
> >    */
> >  #define X86EMUL_DONE           4
> > + /*
> > +  * Current instruction is not implemented by the emulator.
> > +  * This value should only be returned by the core emulator if decode fails
> Why "if decode fails"? In that case it's more "unrecognized" than
> "unimplemented"; the latter can only ever arise (long term, i.e.
> once we have proper distinction of the two) if we successfully
> decoded an insn, but have no code to actually handle it.
>
> >
> > +  * and not by any of the x86_emulate_ops callbacks.
> > +  * If this error code is returned by a function, an #UD trap should be
> > +  * raised by the final consumer of it.
> This last sentence would now really belong to
> X86EMUL_UNRECOGNIZED. As explained earlier, raising #UD
> for unimplemented is precisely the wrong choice architecturally;
> we merely tolerate doing so for the time being.
>
Will be fixed in the next patchset iteration.
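For v12 I am thinking of splitting the comment along these lines. The
wording still needs polishing, and both the numeric value and the idea of
having X86EMUL_UNRECOGNIZED simply alias X86EMUL_UNIMPLEMENTED until the
two paths are fully separated are assumptions on my side, to be confirmed:

    /*
     * Instruction was decoded successfully, but the emulator has no code
     * to handle it. Raising #UD for this case is architecturally wrong,
     * but is tolerated for the time being.
     */
    #define X86EMUL_UNIMPLEMENTED  5
    /*
     * The instruction's opcode/encoding was not recognized by the decoder.
     * The final consumer of this error code should raise #UD.
     */
    #define X86EMUL_UNRECOGNIZED   X86EMUL_UNIMPLEMENTED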
> Jan
>
Many thanks for your support,
//Petre
