
Re: [PATCH 15/16] x86/entry: Adjust guest paths to be shadow stack compatible

On 11.05.2020 23:45, Andrew Cooper wrote:
> On 07/05/2020 17:15, Jan Beulich wrote:
>>>>> --- a/xen/arch/x86/x86_64/entry.S
>>>>> +++ b/xen/arch/x86/x86_64/entry.S
>>>>> @@ -194,6 +194,15 @@ restore_all_guest:
>>>>>          movq  8(%rsp),%rcx            # RIP
>>>>>          ja    iret_exit_to_guest
>>>>> +        /* Clear the supervisor shadow stack token busy bit. */
>>>>> +.macro rag_clrssbsy
>>>>> +        push %rax
>>>>> +        rdsspq %rax
>>>>> +        clrssbsy (%rax)
>>>>> +        pop %rax
>>>>> +.endm
>>>>> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>>>> In principle you could get away without spilling %rax:
>>>>         cmpl  $1,%ecx
>>>>         ja    iret_exit_to_guest
>>>>         /* Clear the supervisor shadow stack token busy bit. */
>>>> .macro rag_clrssbsy
>>>>         rdsspq %rcx
>>>>         clrssbsy (%rcx)
>>>> .endm
>>>>         ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>>>>         movq  8(%rsp),%rcx            # RIP
>>>>         cmpw  $FLAT_USER_CS32,16(%rsp)# CS
>>>>         movq  32(%rsp),%rsp           # RSP
>>>>         je    1f
>>>>         sysretq
>>>> 1:      sysretl
>>>>         ALIGN
>>>> /* No special register assumptions. */
>>>> iret_exit_to_guest:
>>>>         movq  8(%rsp),%rcx            # RIP
>>>>         andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
>>>>         ...
>>>> Also - what about CLRSSBSY failing? It would seem easier to diagnose
>>>> this right here than when getting presumably #DF upon next entry into
>>>> Xen. At the very least I think it deserves a comment if an error case
>>>> does not get handled.
>>> I did consider this, but ultimately decided against it.
>>> You can't have an unlikely block inside an ALTERNATIVE block because the
>>> jmp's displacement doesn't get fixed up.
>> We do fix up unconditional JMP/CALL displacements; I don't
>> see why we couldn't also do so for conditional ones.
> Only for the first instruction in the block.
> We do not decode the entire block of instructions and fix up each
> displacement.

Right, but that's not overly difficult to overcome - simply split
the ALTERNATIVE in two.
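Concretely, the split might look something like this (a rough, untested sketch in the style of the patch; the label name is made up).  With each ALTERNATIVE carrying at most one instruction that needs displacement fix-up, the Jcc becomes the first instruction of its block and the patcher can adjust it:

```asm
        /* Sketch only: CLRSSBSY sets EFLAGS.CF on failure, so the error
         * path can be a conditional branch in a second ALTERNATIVE whose
         * displacement the patcher does fix up. */
        ALTERNATIVE "", "rdsspq %rcx; clrssbsy (%rcx)", X86_FEATURE_XEN_SHSTK
        ALTERNATIVE "", "jc .L_clrssbsy_failed", X86_FEATURE_XEN_SHSTK
```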

>>>   Keeping everything inline puts
>>> an incorrect statically-predicted branch in program flow.
>>> Most importantly however, is that the SYSRET path is vastly less common
>>> than the IRET path.  There is no easy way to proactively spot problems
>>> in the IRET path, which means that conditions leading to a problem are
>>> already far more likely to manifest as #DF, so there is very little
>>> value in adding complexity to the SYSRET path in the first place.
>> The SYSRET path being uncommon is a problem by itself imo, if
>> that's indeed the case. I'm sure I've suggested before that
>> we convert frames to TRAP_syscall ones whenever possible,
>> such that we wouldn't go the slower IRET path.
> It is not possible to convert any.
> The opportunistic SYSRET logic in Linux loses you performance in
> reality.  It's just that the extra conditionals are very highly predicted
> and totally dominated by the ring transition cost.
> You can create a synthetic test case where the opportunistic logic yields
> a performance win, but the chance of encountering real-world code where
> TRAP_syscall is clear and %r11 and %rcx match flags/rip is about 1 in 2^128.
> It is very much not worth the extra code and cycles taken to implement.

Oops, yes, for a moment I forgot this minor detail of %rcx/%r11.
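For reference, the register matching in question stems from SYSRET reloading RIP from %rcx and RFLAGS from %r11.  A minimal C sketch of the opportunistic test (using a made-up, cut-down frame structure, not Xen's real struct cpu_user_regs):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: a cut-down register frame, not Xen's real
 * struct cpu_user_regs. */
struct frame {
    uint64_t rcx, r11, rip, rflags;
};

/*
 * SYSRET takes its return RIP from %rcx and RFLAGS from %r11, so an
 * IRET-style frame can only be converted to the SYSRET path when the
 * saved registers already happen to match -- which real-world code
 * essentially never arranges, hence the ~1 in 2^128 figure above.
 */
static bool can_use_sysretq(const struct frame *regs)
{
    return regs->rcx == regs->rip && regs->r11 == regs->rflags;
}
```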

>>>> Somewhat similar for SETSSBSY, except there things get complicated by
>>>> it raising #CP instead of setting EFLAGS.CF: Aiui it would require us
>>>> to handle #CP on an IST stack in order to avoid #DF there.
>>> Right, but having #CP as IST gives us far worse problems.
>>> Being able to spot #CP vs #DF doesn't help usefully.  It's still some
>>> arbitrary period of time after the damage was done.
>>> Any nesting of #CP (including a fault on the IRET out) results in losing
>>> program state and entering an infinite loop.
>>> The cases which end up as #DF are properly fatal to the system, and we
>>> at least get a clean crash out of it.
>> May I suggest that all of this gets spelled out in at least
>> the description of the patch, so that it can be properly
>> understood (and, if need be, revisited) later on?
> Is this really the right patch to do that?
> I do eventually plan to put a whole load of these kinds of details into
> the hypervisor guide.

Well, as you can see, having some of these considerations and
decisions spelled out would already have helped review here.
Whether this is exactly the right patch I'm not sure, but I'd
find it quite helpful if such information were available, at
least for cross-referencing.
