Re: [PATCH v2 3/7] x86/altcall: Optimise away endbr64 instruction where possible
On 14.02.2022 13:56, Andrew Cooper wrote:
> @@ -330,6 +333,41 @@ static void init_or_livepatch _apply_alternatives(struct alt_instr *start,
> add_nops(buf + a->repl_len, total_len - a->repl_len);
> text_poke(orig, buf, total_len);
> }
> +
> + /*
> + * Clobber endbr64 instructions now that altcall has finished optimising
> + * all indirect branches to direct ones.
> + */
> + if ( force && cpu_has_xen_ibt )
Btw, this is now also entered when the function is called from
apply_alternatives() (i.e. when livepatching), but ...
> + {
> + void *const *val;
> + unsigned int clobbered = 0;
> +
> + /*
> + * This is some minor structure (ab)use. We walk the entire contents
> + * of .init.{ro,}data.cf_clobber as if it were an array of pointers.
> + *
> + * If the pointer points into .text, and at an endbr64 instruction,
> + * nop out the endbr64. This causes the pointer to no longer be a
> + * legal indirect branch target under CET-IBT. This is a
> + * defence-in-depth measure, to reduce the options available to an
> + * adversary who has managed to hijack a function pointer.
> + */
> + for ( val = __initdata_cf_clobber_start;
> + val < __initdata_cf_clobber_end;
... these being the main binary's boundaries, no action would be taken on
the livepatch binary. Hence (also because we have already been here
during boot), all that I understand will happen ...
> + val++ )
> + {
> + void *ptr = *val;
> +
> + if ( !is_kernel_text(ptr) || !is_endbr64(ptr) )
> + continue;
> +
> + add_nops(ptr, 4);
> + clobbered++;
> + }
> +
> + printk("altcall: Optimised away %u endbr64 instructions\n", clobbered);
... is that this message will be logged once per patch load (with a
count of 0). I think the enclosing if() wants to be amended with
"&& system_state < SYS_STATE_active". If you agree, I can easily
make a patch.
Jan