
Re: [Xen-devel] [PATCH v2 7/7] x86/build: Use new .nop directive when available



On Thu, Mar 01, 2018 at 12:28:27AM -0700, Jan Beulich wrote:
> >>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 02/28/18 7:20 PM >>>
> >On 28/02/18 16:22, Jan Beulich wrote:
> >>>>> On 26.02.18 at 12:35, <andrew.cooper3@xxxxxxxxxx> wrote:
> >>> --- a/xen/include/asm-x86/alternative-asm.h
> >>> +++ b/xen/include/asm-x86/alternative-asm.h
> >>> @@ -1,6 +1,8 @@
> >>>  #ifndef _ASM_X86_ALTERNATIVE_ASM_H_
> >>>  #define _ASM_X86_ALTERNATIVE_ASM_H_
> >>>  
> >>> +#include <asm/nops.h>
> >>> +
> >>>  #ifdef __ASSEMBLY__
> >>>  
> >>>  /*
> >>> @@ -18,6 +20,14 @@
> >>>      .byte \pad_len
> >>>  .endm
> >>>  
> >>> +.macro mknops nr_bytes
> >>> +#ifdef HAVE_AS_NOP_DIRECTIVE
> >>> +    .nop \nr_bytes, ASM_NOP_MAX
> >> And do you really need to specify ASM_NOP_MAX here? What's
> >> wrong with letting the assembler pick what it wants as the longest
> >> NOP?
> >
> >I don't want a toolchain change to cause us to go beyond 11 byte nops,
> >because of the associated decode stall on almost all hardware.  Using
> >ASM_NOP_MAX seemed like the easiest way to keep the end result
> >consistent, irrespective of toolchain support.
> 
> I don't understand - an earlier patch takes care of runtime replacing them
> anyway. What stalls can then result?

The runtime replacement won't happen when using the .nop directive
AFAICT, because the original padding section will likely be filled
with opcodes other than 0x90, and thus the runtime nop optimization
won't be performed.
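
For reference, a minimal sketch of how I understand the two paths would
differ (the #else branch is my assumption based on the existing 0x90
padding convention; only the .nop line appears in the quoted hunk):

    .macro mknops nr_bytes
    #ifdef HAVE_AS_NOP_DIRECTIVE
        /* Let the assembler emit (possibly multi-byte) nops directly. */
        .nop \nr_bytes, ASM_NOP_MAX
    #else
        /* Fall back to single-byte nops (0x90), which the runtime
         * nop optimization recognises and can patch into longer forms. */
        .skip \nr_bytes, 0x90
    #endif
    .endm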

I also agree that using the default (i.e. not providing a second
argument) seems like the better solution. Why would the toolchain
switch to something that leads to worse performance? That would
certainly be considered a bug.
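
For illustration, assuming the second operand really is optional (as
the thread implies), the directive use would then simply be:

    .nop \nr_bytes

leaving the choice of the longest nop form to the assembler.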

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

