
[Xen-devel] [PATCH 2/2] x86/Xen/32: simplify xen_iret_crit_fixup's ring check



This can be done with two insns instead of six, by just checking the
high bit of CS.RPL.

Also adjust the comment: there would be no #GP in the mentioned cases,
as there's no segment limit violation or the like. Instead there'd be a
#PF, but that one reports the target EIP of said branch, not the
address of the branch insn itself.
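
As an aside (not part of the patch): a minimal C sketch of why testing
bit 1 alone is sufficient here. On 32-bit Xen PV the kernel runs in
ring 1 and userspace in ring 3 (ring 0 belongs to the hypervisor), so
the old and new checks agree for every RPL that can actually appear in
a nested frame. The constants mirror SEGMENT_RPL_MASK and USER_RPL from
the kernel headers; the loop and names are made up for illustration.

#include <assert.h>
#include <stdio.h>

#define SEGMENT_RPL_MASK 0x3   /* low two bits of a selector: the RPL */
#define USER_RPL         0x3   /* userspace runs in ring 3 */

int main(void)
{
	unsigned cs;

	for (cs = 0; cs < 0x10000; cs++) {
		unsigned rpl = cs & SEGMENT_RPL_MASK;
		int old_check = rpl == USER_RPL;  /* andl/cmpl/je sequence */
		int new_check = (cs & 2) != 0;    /* testb $2 / jnz */

		/*
		 * RPL 2 is the only value where the two differ, and it
		 * can't occur on 32-bit Xen PV: the kernel runs in ring 1,
		 * user in ring 3.
		 */
		if (rpl != 2)
			assert(old_check == new_check);
	}
	printf("bit-1 test matches RPL==3 for all RPL in {0,1,3}\n");
	return 0;
}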

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
An alternative would be to keep using SEGMENT_RPL_MASK, but follow it
with "jpe".

--- a/arch/x86/xen/xen-asm_32.S
+++ b/arch/x86/xen/xen-asm_32.S
@@ -153,22 +153,15 @@ hyper_iret:
  * it's still on stack), we need to restore its value here.
  */
 ENTRY(xen_iret_crit_fixup)
-       pushl %ecx
        /*
         * Paranoia: Make sure we're really coming from kernel space.
         * One could imagine a case where userspace jumps into the
         * critical range address, but just before the CPU delivers a
-        * GP, it decides to deliver an interrupt instead.  Unlikely?
-        * Definitely.  Easy to avoid?  Yes.  The Intel documents
-        * explicitly say that the reported EIP for a bad jump is the
-        * jump instruction itself, not the destination, but some
-        * virtual environments get this wrong.
+        * PF, it decides to deliver an interrupt instead.  Unlikely?
+        * Definitely.  Easy to avoid?  Yes.
         */
-       movl 3*4(%esp), %ecx            /* nested CS */
-       andl $SEGMENT_RPL_MASK, %ecx
-       cmpl $USER_RPL, %ecx
-       popl %ecx
-       je 2f
+       testb $2, 2*4(%esp)             /* nested CS */
+       jnz 2f
 
        /*
         * If eip is before iret_restore_end then stack

