
Re: [PATCH 59/65] x86/traps: Rework write_stub_trampoline() to not hardcode the jmp


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 3 Dec 2021 14:17:46 +0100
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 03 Dec 2021 13:18:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 26.11.2021 13:34, Andrew Cooper wrote:
> For CET-IBT, we will need to optionally insert an endbr64 instruction at the
> start of the stub.  Don't hardcode the jmp displacement assuming that it
> starts at byte 24 of the stub.
> 
> Also add extra comments describing what is going on.  The mix of %rax and %rsp
> is far from trivial to follow.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> CC: Wei Liu <wl@xxxxxxx>
> ---
>  xen/arch/x86/x86_64/traps.c | 36 ++++++++++++++++++++++--------------
>  1 file changed, 22 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
> index d661d7ffcaaf..6f3c65bedc7a 100644
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -293,30 +293,38 @@ static unsigned int write_stub_trampoline(
>      unsigned char *stub, unsigned long stub_va,
>      unsigned long stack_bottom, unsigned long target_va)
>  {
> +    unsigned char *p = stub;
> +
> +    /* Store guest %rax into %ss slot */
>      /* movabsq %rax, stack_bottom - 8 */
> -    stub[0] = 0x48;
> -    stub[1] = 0xa3;
> -    *(uint64_t *)&stub[2] = stack_bottom - 8;
> +    *p++ = 0x48;
> +    *p++ = 0xa3;
> +    *(uint64_t *)p = stack_bottom - 8;
> +    p += 8;
>  
> +    /* Store guest %rsp in %rax */
>      /* movq %rsp, %rax */
> -    stub[10] = 0x48;
> -    stub[11] = 0x89;
> -    stub[12] = 0xe0;
> +    *p++ = 0x48;
> +    *p++ = 0x89;
> +    *p++ = 0xe0;
>  
> +    /* Switch to Xen stack */
>      /* movabsq $stack_bottom - 8, %rsp */
> -    stub[13] = 0x48;
> -    stub[14] = 0xbc;
> -    *(uint64_t *)&stub[15] = stack_bottom - 8;
> +    *p++ = 0x48;
> +    *p++ = 0xbc;
> +    *(uint64_t *)p = stack_bottom - 8;
> +    p += 8;
>  
> +    /* Store guest %rsp into %rsp slot */
>      /* pushq %rax */
> -    stub[23] = 0x50;
> +    *p++ = 0x50;
>  
>      /* jmp target_va */
> -    stub[24] = 0xe9;
> -    *(int32_t *)&stub[25] = target_va - (stub_va + 29);
> +    *p++ = 0xe9;
> +    *(int32_t *)p = target_va - (stub_va + (p - stub) + 4);
> +    p += 4;
>  
> -    /* Round up to a multiple of 16 bytes. */
> -    return 32;
> +    return p - stub;
>  }

I'm concerned about you silently discarding the alignment to 16 bytes here.
Imo this really needs justifying, or perhaps even deferring to a later
change. I'd like to point out though that we may not have a space issue
here at all, as I think we can replace one of the MOVABSQs (using
absolute numbers to hopefully make this easier to follow):

    movabsq %rax, stack_bottom - 8
    movq %rsp, %rax
    movq -18(%rip), %rsp
    pushq %rax
    jmp target_va

This totals 26 bytes, leaving enough room for an ENDBR64 without crossing
the 32-byte boundary. But I fear you may eat me for using insn bytes as
data ...
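In case it helps, a hypothetical C sketch of that 26-byte sequence, in the byte-emission style of the patch (write_stub_alt and the offsets in the comment are illustrative, not proposed code; the 48 8b 25 encoding is MOV r64, r/m64 with a RIP-relative disp32). The imm64 of the first MOVABSQ doubles as the data word read by the RIP-relative load:

```c
#include <stdint.h>
#include <string.h>

/*
 * Layout:
 *   off  0: 48 a3 <imm64>     movabsq %rax, stack_bottom - 8  (10 bytes)
 *   off 10: 48 89 e0          movq    %rsp, %rax              (3 bytes)
 *   off 13: 48 8b 25 <disp32> movq    -18(%rip), %rsp         (7 bytes)
 *   off 20: 50                pushq   %rax                    (1 byte)
 *   off 21: e9 <rel32>        jmp     target_va               (5 bytes)
 */
static unsigned int write_stub_alt(unsigned char *stub, unsigned long stub_va,
                                   unsigned long stack_bottom,
                                   unsigned long target_va)
{
    unsigned char *p = stub;
    uint64_t imm = stack_bottom - 8;
    int32_t disp, rel;

    *p++ = 0x48; *p++ = 0xa3;              /* movabsq %rax, stack_bottom - 8 */
    memcpy(p, &imm, 8); p += 8;            /* imm64, doubling as data */

    *p++ = 0x48; *p++ = 0x89; *p++ = 0xe0; /* movq %rsp, %rax */

    *p++ = 0x48; *p++ = 0x8b; *p++ = 0x25; /* movq disp32(%rip), %rsp */
    disp = 2 - ((p - stub) + 4);           /* imm64 at off 2, %rip at off 20
                                            * => disp32 == -18 */
    memcpy(p, &disp, 4); p += 4;

    *p++ = 0x50;                           /* pushq %rax */

    *p++ = 0xe9;                           /* jmp rel32 */
    rel = target_va - (stub_va + (p - stub) + 4);
    memcpy(p, &rel, 4); p += 4;

    return p - stub;                       /* 26 bytes */
}
```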

Jan