On 13/10/2011 08:21, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> The IDTs being exactly a page in size, using xmalloc() here is rather
> inefficient, as it requires twice that amount to be allocated (with
> almost an entire page wasted). For hot-plugged CPUs, this also
> eliminates one more non-order-zero runtime allocation.
>
> For x86-32, however, the IDT is exactly half a page, so allocating a
> full page would be wasteful; that subarch therefore continues to use
> xmalloc() as before.
>
> With most of the affected functions' bodies now being inside #ifdef-s,
> it might be reasonable to split those parts out into subarch-specific
> code...
Well, there is also an opportunity for code merging. There is generally no
reason for x86-64 to use alloc_domheap_pages where x86-32 uses
alloc_xenheap_pages (e.g., for allocating per_cpu(gdt_table)) -- both
architectures can use alloc_xenheap_pages in this case.
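E.g., something like this (just a sketch, untested; whether the node hint
is actually honoured for xenheap allocations on x86-32 is another matter):

    /* Sketch: allocate the per-CPU GDT from the Xen heap on both subarches. */
    order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
    gdt = alloc_xenheap_pages(order, MEMF_node(cpu_to_node(cpu)));
    if ( gdt == NULL )
        goto oom;
    per_cpu(gdt_table, cpu) = gdt;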
Also, we might get rid of some further #ifdef-ery if we added an alloc/free
interface where *both* functions take a size parameter. We could then decide
dynamically, based on that parameter, whether to use xmalloc or
alloc_xenheap_pages. Knowing the size at free time would also make it easy
to release such allocations.
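Something along these lines, say (alloc_xen_buffer/free_xen_buffer are just
placeholder names and the PAGE_SIZE threshold is arbitrary -- untested):

    /* Placeholder interface: pick the allocator based on size, and require
     * the same size to be passed on free so the matching release path is
     * taken. */
    static void *alloc_xen_buffer(unsigned long size)
    {
        if ( size < PAGE_SIZE )
            return xmalloc_bytes(size);
        return alloc_xenheap_pages(get_order_from_bytes(size), 0);
    }

    static void free_xen_buffer(void *p, unsigned long size)
    {
        if ( p == NULL )
            return;
        if ( size < PAGE_SIZE )
            xfree(p);
        else
            free_xenheap_pages(p, get_order_from_bytes(size));
    }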
Finally, I'm not sure about more x86-64 code movement: really I'd like to
kill off x86-32 entirely.
Anyway, despite all this...
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Acked-by: Keir Fraser <keir@xxxxxxx>
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -639,9 +639,6 @@ static void cpu_smpboot_free(unsigned in
> {
> unsigned int order;
>
> - xfree(idt_tables[cpu]);
> - idt_tables[cpu] = NULL;
> -
> order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
> #ifdef __x86_64__
> if ( per_cpu(compat_gdt_table, cpu) )
> @@ -650,10 +647,15 @@ static void cpu_smpboot_free(unsigned in
> free_domheap_pages(virt_to_page(per_cpu(compat_gdt_table, cpu)),
> order);
> per_cpu(compat_gdt_table, cpu) = NULL;
> + order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
> + if ( idt_tables[cpu] )
> + free_domheap_pages(virt_to_page(idt_tables[cpu]), order);
> #else
> free_xenheap_pages(per_cpu(gdt_table, cpu), order);
> + xfree(idt_tables[cpu]);
> #endif
> per_cpu(gdt_table, cpu) = NULL;
> + idt_tables[cpu] = NULL;
>
> if ( stack_base[cpu] != NULL )
> {
> @@ -691,19 +693,25 @@ static int cpu_smpboot_alloc(unsigned in
> if ( !page )
> goto oom;
> per_cpu(gdt_table, cpu) = gdt = page_to_virt(page);
> + order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
> + page = alloc_domheap_pages(NULL, order,
> + MEMF_node(cpu_to_node(cpu)));
> + if ( !page )
> + goto oom;
> + idt_tables[cpu] = page_to_virt(page);
> #else
> per_cpu(gdt_table, cpu) = gdt = alloc_xenheap_pages(order, 0);
> if ( !gdt )
> goto oom;
> + idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
> + if ( idt_tables[cpu] == NULL )
> + goto oom;
> #endif
> memcpy(gdt, boot_cpu_gdt_table,
> NR_RESERVED_GDT_PAGES * PAGE_SIZE);
> BUILD_BUG_ON(NR_CPUS > 0x10000);
> gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
>
> - idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
> - if ( idt_tables[cpu] == NULL )
> - goto oom;
> memcpy(idt_tables[cpu], idt_table,
> IDT_ENTRIES*sizeof(idt_entry_t));
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel