Re: [Xen-devel] [PATCH 2/4] x86: eliminate most XPTI entry/exit code when it's not in use
>>> On 30.01.18 at 13:02, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 23/01/18 10:37, Jan Beulich wrote:
>> Introduce a synthetic feature flag to use alternative instruction
>> patching to NOP out all code on entry/exit paths other than those
>> involved in NMI/#MC handling (the patching logic can't properly handle
>> those paths yet). Having NOPs here is generally better than using
>> conditional branches.
>
> Given my other series, I'd prefer to fix the IST paths rather than
> introduce yet-more workarounds.

More workarounds? The patch simply avoids touching the IST path.

>> --- a/xen/arch/x86/x86_64/compat/entry.S
>> +++ b/xen/arch/x86/x86_64/compat/entry.S
>> @@ -189,7 +189,7 @@ ENTRY(compat_post_handle_exception)
>>
>>  /* See lstar_enter for entry register state. */
>>  ENTRY(cstar_enter)
>> -        /* sti could live here when we don't switch page tables below. */
>> +        ALTERNATIVE nop, sti, X86_FEATURE_NO_XPTI
>
> I do not think the complexity of altering the position of sti
> outweighs the fractional extra delay which would result from
> unilaterally having the sti later. Furthermore, if you really are
> concerned about microoptimising this, you don't want a single-byte nop
> here.
>
>>          CR4_PV32_RESTORE

There is, not least, this CR4_PV32_RESTORE, which I'm afraid adds
quite a bit of a delay. While we're not real-time ready, I don't think
we should needlessly delay the enabling of interrupts.

As to the single-byte NOP - are you suggesting it is worse than the
single-byte STI?

>> @@ -210,6 +211,12 @@ ENTRY(cstar_enter)
>>          movq  $0, STACK_CPUINFO_FIELD(xen_cr3)(%rbx)
>>  .Lcstar_cr3_okay:
>>          sti
>> +.Lcstar_cr3_end:
>> +        .pushsection .altinstructions, "a", @progbits
>> +        altinstruction_entry .Lcstar_cr3_start, .Lcstar_cr3_start, \
>> +                             X86_FEATURE_NO_XPTI, \
>> +                             (.Lcstar_cr3_end - .Lcstar_cr3_start), 0
>> +        .popsection
>
> It occurs to me that this would be far more legible if we had an alt_nop
> wrapper. Reusing .Lcstar_cr3_start and a length of 0 isn't obvious.

Could certainly do that, but one thing at a time.

>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -46,7 +47,6 @@ restore_all_guest:
>>          movabs $DIRECTMAP_VIRT_START, %rcx
>>          mov   %rdi, %rax
>>          and   %rsi, %rdi
>> -        jz    .Lrag_keep_cr3
>
> This looks like a functional change?

Definitely not - the conditional branch simply becomes unnecessary when
the entire piece of code gets NOP-ed out.

>> @@ -492,9 +519,20 @@ ENTRY(common_interrupt)
>>          CR4_PV32_RESTORE
>>          movq  %rsp,%rdi
>>          callq do_IRQ
>> +.Lintr_cr3_restore:
>>          mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
>> +.Lintr_cr3_end:
>>          jmp   ret_from_intr
>>
>> +        .pushsection .altinstructions, "a", @progbits
>> +        altinstruction_entry .Lintr_cr3_restore, .Lintr_cr3_restore, \
>> +                             X86_FEATURE_NO_XPTI, \
>> +                             (.Lintr_cr3_end - .Lintr_cr3_restore), 0
>> +        altinstruction_entry .Lintr_cr3_start, .Lintr_cr3_start, \
>> +                             X86_FEATURE_NO_XPTI, \
>> +                             (.Lintr_cr3_okay - .Lintr_cr3_start), 0
>
> This is now getting very complicated to follow. Is it just for IST
> safety and liable to disappear? If not, I think we need a different
> way, as this is now saying "sporadic instructions inside this block, but
> not all of them, turn into nops".

This is not an IST path, and it is also not NOP-ing out sporadic
instructions - we can't drop the first piece of code without also
dropping the second, as %r14 won't be set up if the first block is
gone. They're clearly framed by .Lintr_cr3_* labels - I'm not sure how
to make what's going on any more obvious.
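For reference, the alt_nop wrapper mentioned above could look roughly
like the following minimal sketch. It assumes the existing five-argument
altinstruction_entry macro (orig, repl, feature, orig_len, repl_len);
the name ALT_NOP and its exact shape are illustrative only, not code
from this patch:

    .macro ALT_NOP from, to, feature
        .pushsection .altinstructions, "a", @progbits
        /*
         * Re-use \from as the (empty) replacement; a replacement length
         * of 0 tells the patcher to fill the whole original range with
         * NOPs when \feature is set.
         */
        altinstruction_entry \from, \from, \feature, (\to - \from), 0
        .popsection
    .endm

A call site like the cstar_enter one would then shrink to something
along the lines of

    ALT_NOP .Lcstar_cr3_start, .Lcstar_cr3_end, X86_FEATURE_NO_XPTI

with the .Lcstar_cr3_* labels continuing to frame the code to be
NOP-ed out.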
Jan