Re: [Xen-devel] [PATCH v2 1/3] xen/x86: split boot trampoline into permanent and temporary part



On 23/03/17 16:04, Jan Beulich wrote:
>>>> On 23.03.17 at 07:25, <jgross@xxxxxxxx> wrote:
>> @@ -131,6 +151,14 @@ start64:
>>          movabs  $__high_start,%rax
>>          jmpq    *%rax
>>  
>> +#include "wakeup.S"
>> +
>> +/* The first page of trampoline is permanent, the rest boot-time only. */
>> +        .equ    trampoline_boot_start, trampoline_start + PAGE_SIZE
>> +        .global trampoline_boot_start
> 
> The name is at least ambiguous - boot only code starts right here,
> not at the next page boundary. Would it not work to use wakeup_stack
> here, ...
> 
>> --- a/xen/arch/x86/boot/wakeup.S
>> +++ b/xen/arch/x86/boot/wakeup.S
>> @@ -1,6 +1,7 @@
>>          .code16
>>  
>>  #define wakesym(sym) (sym - wakeup_start)
>> +#define wakeup_stack trampoline_start + PAGE_SIZE
> 
> ... omit this #define, and ...
> 
>> --- a/xen/arch/x86/xen.lds.S
>> +++ b/xen/arch/x86/xen.lds.S
>> @@ -335,3 +335,5 @@ ASSERT(IS_ALIGNED(__bss_end,        8), "__bss_end misaligned")
>>  
>>  ASSERT((trampoline_end - trampoline_start) < TRAMPOLINE_SPACE - MBI_SPACE_MIN,
>>      "not enough room for trampoline and mbi data")
>> +ASSERT((trampoline_boot_start - wakeup_stack_start) >= WAKEUP_STACK_MIN,
>> +    "wakeup stack too small")
> 
> ... use wakeup_stack here too?

It would work, yes. But with my pending patch releasing the memory of
the boot-only trampoline code it would look a little odd: I'd be
freeing memory named "wakeup_stack", while the label
"trampoline_boot_start" would be left unused.

If you'd prefer, I can place trampoline_boot_start at the same
location as wakeup_stack_start and free the boot trampoline memory
starting from the next page boundary.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel