To: Jan Beulich <JBeulich@xxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] x86-64: don't use xmalloc_array() for allocation of the (per-CPU) IDTs
From: Keir Fraser <keir.xen@xxxxxxxxx>
Date: Thu, 13 Oct 2011 15:44:30 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 13 Oct 2011 07:45:29 -0700
In-reply-to: <4E96F23B020000780005B28A@xxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcyJtp0eGTayRLp0+kimkS00cn4PRQ==
Thread-topic: [Xen-devel] [PATCH] x86-64: don't use xmalloc_array() for allocation of the (per-CPU) IDTs
User-agent: Microsoft-Entourage/12.30.0.110427
On 13/10/2011 13:14, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

>>>> On 13.10.11 at 11:34, Keir Fraser <keir@xxxxxxx> wrote:
>> Given our antipathy to the x86-32 hypervisor, and the fact that any
>> remaining users of it are unlikely to be running MP systems at all, let
>> alone large MP systems, how about this cleanup patch? (It looks quite
>> confusing as a patch, actually, but does the obvious thing.)
> 
> Looks good to me - I was actually considering converting the x86-64
> code back to alloc_xenheap_pages() too (since we'll need to do that
> eventually anyway when we want to support more than 5TB of memory)
> when I put together that earlier patch, but then refrained from doing so
> to keep the patch size down.

You mean there's a 5TB limit for alloc_domheap_pages() allocations??

I only switched to alloc_xenheap_pages() because it's safe for x86-32 too...
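
For illustration only (an editor's sketch, not part of the original
exchange): the two allocation styles being compared look roughly like this,
reusing the calls, 'order', 'cpu' and the 'oom' label from
cpu_smpboot_alloc() in the patch quoted below.

    /* Style A (old x86-64 path): allocate from the domain heap, then
     * convert the page to a virtual address. This relies on the page
     * being covered by the direct map, which is not guaranteed on x86-32
     * (and, if the 5TB remark above refers to the direct-map limit, would
     * eventually also affect very large x86-64 hosts). */
    struct page_info *page = alloc_domheap_pages(NULL, order,
                                                 MEMF_node(cpu_to_node(cpu)));
    if ( !page )
        goto oom;
    idt_tables[cpu] = page_to_virt(page);

    /* Style B (what the patch below uses): allocate from the Xen heap,
     * which returns an already-mapped virtual address on both x86-32
     * and x86-64. */
    idt_tables[cpu] = alloc_xenheap_pages(order, MEMF_node(cpu_to_node(cpu)));
    if ( idt_tables[cpu] == NULL )
        goto oom;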

 -- Keir

> Jan
> 
>> x86: Simplify smpboot_alloc by merging x86-{32,64} code as far as possible.
>> 
>> We still need one ifdef, as x86-32 does not have a compat_gdt_table.
>> 
>> On x86-32 there is half a page of wastage due to allocating a whole page
>> for the per-CPU IDT; however, we expect very few users of the x86-32
>> hypervisor. Those that cannot move to the 64-bit hypervisor are likely
>> using old single-processor systems or new single-processor netbooks. On
>> UP and small MP systems, the wastage is insignificant.
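
[Editor's aside, not part of the quoted patch: the half-page figure follows
from the usual x86 gate-descriptor sizes, assuming 4 KiB pages.]

    /* x86-32: IDT_ENTRIES * sizeof(idt_entry_t) = 256 * 8  = 2048 bytes
     *         -> rounded up to a full page, ~2 KiB (half a page) unused.
     * x86-64: IDT_ENTRIES * sizeof(idt_entry_t) = 256 * 16 = 4096 bytes
     *         -> exactly one page, so no wastage there. */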
>> 
>> Signed-off-by: Keir Fraser <keir@xxxxxxx>
>> diff -r 1515484353c6 xen/arch/x86/smpboot.c
>> --- a/xen/arch/x86/smpboot.c    Thu Oct 13 10:09:28 2011 +0200
>> +++ b/xen/arch/x86/smpboot.c    Thu Oct 13 10:25:01 2011 +0100
>> @@ -640,21 +640,16 @@ static void cpu_smpboot_free(unsigned in
>>      unsigned int order;
>>  
>>      order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
>> +    free_xenheap_pages(per_cpu(gdt_table, cpu), order);
>> +    per_cpu(gdt_table, cpu) = NULL;
>> +
>>  #ifdef __x86_64__
>> -    if ( per_cpu(compat_gdt_table, cpu) )
>> -        free_domheap_pages(virt_to_page(per_cpu(gdt_table, cpu)), order);
>> -    if ( per_cpu(gdt_table, cpu) )
>> -        free_domheap_pages(virt_to_page(per_cpu(compat_gdt_table, cpu)),
>> -                           order);
>> +    free_xenheap_pages(per_cpu(compat_gdt_table, cpu), order);
>>      per_cpu(compat_gdt_table, cpu) = NULL;
>> -    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
>> -    if ( idt_tables[cpu] )
>> -        free_domheap_pages(virt_to_page(idt_tables[cpu]), order);
>> -#else
>> -    free_xenheap_pages(per_cpu(gdt_table, cpu), order);
>> -    xfree(idt_tables[cpu]);
>>  #endif
>> -    per_cpu(gdt_table, cpu) = NULL;
>> +
>> +    order = get_order_from_bytes(IDT_ENTRIES * sizeof(idt_entry_t));
>> +    free_xenheap_pages(idt_tables[cpu], order);
>>      idt_tables[cpu] = NULL;
>>  
>>      if ( stack_base[cpu] != NULL )
>> @@ -669,9 +664,6 @@ static int cpu_smpboot_alloc(unsigned in
>>  {
>>      unsigned int order;
>>      struct desc_struct *gdt;
>> -#ifdef __x86_64__
>> -    struct page_info *page;
>> -#endif
>>  
>>      stack_base[cpu] = alloc_xenheap_pages(STACK_ORDER, 0);
>>      if ( stack_base[cpu] == NULL )
>> @@ -679,41 +671,28 @@ static int cpu_smpboot_alloc(unsigned in
>>      memguard_guard_stack(stack_base[cpu]);
>>  
>>      order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
>> -#ifdef __x86_64__
>> -    page = alloc_domheap_pages(NULL, order,
>> -                               MEMF_node(cpu_to_node(cpu)));
>> -    if ( !page )
>> +    per_cpu(gdt_table, cpu) = gdt =
>> +        alloc_xenheap_pages(order, MEMF_node(cpu_to_node(cpu)));
>> +    if ( gdt == NULL )
>>          goto oom;
>> -    per_cpu(compat_gdt_table, cpu) = gdt = page_to_virt(page);
>> -    memcpy(gdt, boot_cpu_compat_gdt_table,
>> -           NR_RESERVED_GDT_PAGES * PAGE_SIZE);
>> -    gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
>> -    page = alloc_domheap_pages(NULL, order,
>> -                               MEMF_node(cpu_to_node(cpu)));
>> -    if ( !page )
>> -        goto oom;
>> -    per_cpu(gdt_table, cpu) = gdt = page_to_virt(page);
>> -    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
>> -    page = alloc_domheap_pages(NULL, order,
>> -                               MEMF_node(cpu_to_node(cpu)));
>> -    if ( !page )
>> -        goto oom;
>> -    idt_tables[cpu] = page_to_virt(page);
>> -#else
>> -    per_cpu(gdt_table, cpu) = gdt = alloc_xenheap_pages(order, 0);
>> -    if ( !gdt )
>> -        goto oom;
>> -    idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
>> -    if ( idt_tables[cpu] == NULL )
>> -        goto oom;
>> -#endif
>> -    memcpy(gdt, boot_cpu_gdt_table,
>> -           NR_RESERVED_GDT_PAGES * PAGE_SIZE);
>> +    memcpy(gdt, boot_cpu_gdt_table, NR_RESERVED_GDT_PAGES * PAGE_SIZE);
>>      BUILD_BUG_ON(NR_CPUS > 0x10000);
>>      gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
>>  
>> -    memcpy(idt_tables[cpu], idt_table,
>> -           IDT_ENTRIES*sizeof(idt_entry_t));
>> +#ifdef __x86_64__
>> +    per_cpu(compat_gdt_table, cpu) = gdt =
>> +        alloc_xenheap_pages(order, MEMF_node(cpu_to_node(cpu)));
>> +    if ( gdt == NULL )
>> +        goto oom;
>> +    memcpy(gdt, boot_cpu_compat_gdt_table, NR_RESERVED_GDT_PAGES * PAGE_SIZE);
>> +    gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
>> +#endif
>> +
>> +    order = get_order_from_bytes(IDT_ENTRIES * sizeof(idt_entry_t));
>> +    idt_tables[cpu] = alloc_xenheap_pages(order, MEMF_node(cpu_to_node(cpu)));
>> +    if ( idt_tables[cpu] == NULL )
>> +        goto oom;
>> +    memcpy(idt_tables[cpu], idt_table, IDT_ENTRIES * sizeof(idt_entry_t));
>>  
>>      return 0;
>>  
>> 
>>> Anyway, despite all this...
>>> 
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> 
>>> Acked-by: Keir Fraser <keir@xxxxxxx>
>>> 
>>>> --- a/xen/arch/x86/smpboot.c
>>>> +++ b/xen/arch/x86/smpboot.c
>>>> @@ -639,9 +639,6 @@ static void cpu_smpboot_free(unsigned in
>>>>  {
>>>>      unsigned int order;
>>>>  
>>>> -    xfree(idt_tables[cpu]);
>>>> -    idt_tables[cpu] = NULL;
>>>> -
>>>>      order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
>>>>  #ifdef __x86_64__
>>>>      if ( per_cpu(compat_gdt_table, cpu) )
>>>> @@ -650,10 +647,15 @@ static void cpu_smpboot_free(unsigned in
>>>>          free_domheap_pages(virt_to_page(per_cpu(compat_gdt_table, cpu)),
>>>>                             order);
>>>>      per_cpu(compat_gdt_table, cpu) = NULL;
>>>> +    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
>>>> +    if ( idt_tables[cpu] )
>>>> +        free_domheap_pages(virt_to_page(idt_tables[cpu]), order);
>>>>  #else
>>>>      free_xenheap_pages(per_cpu(gdt_table, cpu), order);
>>>> +    xfree(idt_tables[cpu]);
>>>>  #endif
>>>>      per_cpu(gdt_table, cpu) = NULL;
>>>> +    idt_tables[cpu] = NULL;
>>>>  
>>>>      if ( stack_base[cpu] != NULL )
>>>>      {
>>>> @@ -691,19 +693,25 @@ static int cpu_smpboot_alloc(unsigned in
>>>>      if ( !page )
>>>>          goto oom;
>>>>      per_cpu(gdt_table, cpu) = gdt = page_to_virt(page);
>>>> +    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
>>>> +    page = alloc_domheap_pages(NULL, order,
>>>> +                               MEMF_node(cpu_to_node(cpu)));
>>>> +    if ( !page )
>>>> +        goto oom;
>>>> +    idt_tables[cpu] = page_to_virt(page);
>>>>  #else
>>>>      per_cpu(gdt_table, cpu) = gdt = alloc_xenheap_pages(order, 0);
>>>>      if ( !gdt )
>>>>          goto oom;
>>>> +    idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
>>>> +    if ( idt_tables[cpu] == NULL )
>>>> +        goto oom;
>>>>  #endif
>>>>      memcpy(gdt, boot_cpu_gdt_table,
>>>>             NR_RESERVED_GDT_PAGES * PAGE_SIZE);
>>>>      BUILD_BUG_ON(NR_CPUS > 0x10000);
>>>>      gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
>>>>  
>>>> -    idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
>>>> -    if ( idt_tables[cpu] == NULL )
>>>> -        goto oom;
>>>>      memcpy(idt_tables[cpu], idt_table,
>>>>             IDT_ENTRIES*sizeof(idt_entry_t));
>>>>  



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel