
Re: [Xen-devel] [PATCH] x86/HVM: don't retain emulated insn cache when exiting back to guest



Hi Jan,

I guess I have been CCed because you would like to know whether this patch is fixing the regression you mentioned on IRC?

Cheers,

On 12/05/2017 04:13 PM, Jan Beulich wrote:
vio->mmio_retry is being set when a repeated string insn is being split
up. In that case we'll exit to the guest, expecting immediate re-entry.
Interruptions, however, may be serviced by the guest before re-entry
from the repeated string insn. Any emulation needed in the course of
handling the interruption must not fetch from the internally maintained
cache.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
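
For anyone following along: if I am reading the code correctly, the cache in question gets consumed again at the top of _hvm_emulate_one(), roughly like this (a simplified sketch, not the exact source):

    /*
     * Seed the instruction fetch buffer from the per-vCPU cache; if the
     * cached bytes are stale, the wrong instruction gets decoded.
     */
    hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn,
                              vio->mmio_insn_bytes);

So any emulation triggered while servicing the interruption would decode the bytes of the interrupted string instruction instead of fetching the current one from the guest.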

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2109,20 +2109,22 @@ static int _hvm_emulate_one(struct hvm_e
 
     vio->mmio_retry = 0;
 
-    rc = x86_emulate(&hvmemul_ctxt->ctxt, ops);
-
-    if ( rc == X86EMUL_OKAY && vio->mmio_retry )
-        rc = X86EMUL_RETRY;
-    if ( rc != X86EMUL_RETRY )
+    switch ( rc = x86_emulate(&hvmemul_ctxt->ctxt, ops) )
     {
+    case X86EMUL_OKAY:
+        if ( vio->mmio_retry )
+            rc = X86EMUL_RETRY;
+        /* fall through */
+    default:
         vio->mmio_cache_count = 0;
         vio->mmio_insn_bytes = 0;
-    }
-    else
-    {
+        break;
+
+    case X86EMUL_RETRY:
         BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf));
         vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
         memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes);
+        break;
     }
 
     if ( hvmemul_ctxt->ctxt.retire.singlestep )
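
With the patch applied, the tail of _hvm_emulate_one() would then read roughly as follows (reconstructed from the hunk above, so only a sketch of the result):

    switch ( rc = x86_emulate(&hvmemul_ctxt->ctxt, ops) )
    {
    case X86EMUL_OKAY:
        if ( vio->mmio_retry )
            rc = X86EMUL_RETRY;
        /* fall through */
    default:
        /* Not retrying: drop the MMIO cache and the cached insn bytes. */
        vio->mmio_cache_count = 0;
        vio->mmio_insn_bytes = 0;
        break;

    case X86EMUL_RETRY:
        /* Retrying the same insn: keep its bytes for the expected re-entry. */
        BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf));
        vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
        memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes);
        break;
    }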




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

