
[Xen-devel] RE: [PATCH] to fix ACPI slit table access at runtime



> Hi Keir,
>   I noticed that the current Xen code is not able to access the ACPI
> SLIT data (node-to-node distances) at runtime, and I found the root cause
> to be the acpi_slit pointer not being valid at runtime.
> I have fixed the issue by saving the SLIT table data at boot time and using
> the saved copy for runtime access, as follows.

Better would be to allocate the copy dynamically with alloc_boot_pages(); then
the patch would only be about two or three lines. The only disadvantage would be
that SLIT parsing would then be 64-bit-hypervisor only (since 32-bit Xen does not
have mappings for bootmem). I think we can live with that - I note that Linux
only does SLIT parsing for x86_64 too.
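For illustration, a rough sketch of what a dynamic-allocation version of
acpi_numa_slit_init() might look like (untested; the exact behaviour of
alloc_boot_pages(), PFN_UP() and mfn_to_virt() is assumed from the current
tree, and the error message is made up):

static struct acpi_table_slit *__read_mostly acpi_slit;

void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
{
	unsigned long mfn;

	/* ... existing sanity checks on slit->header.length ... */

	/*
	 * Copy the firmware-provided table into memory that remains
	 * mapped after boot, so __node_distance() can use it at runtime.
	 */
	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
	if (!mfn) {
		printk(KERN_ERR "ACPI: unable to allocate memory for SLIT copy.\n");
		return;
	}
	acpi_slit = mfn_to_virt(mfn);
	memcpy(acpi_slit, slit, slit->header.length);
}

That would keep acpi_slit as a plain pointer, so __node_distance() would not
need to change.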

Keir,
  If I understand you correctly, you would prefer an x86_64-only solution
with dynamic allocation. I will change the patch accordingly and resend it.

Thanks & Regards,
Nitin


 -- Keir

> Please accept or comment.
> Thanks & Regards,
> Nitin
>  
> Signed-Off-By: Nitin A Kamble <nitin.a.kamble@xxxxxxxxx>
>  
> diff -r b474725a242b xen/arch/x86/srat.c
> --- a/xen/arch/x86/srat.c             Thu Feb 25 07:50:38 2010 -0800
> +++ b/xen/arch/x86/srat.c          Thu Feb 25 08:02:36 2010 -0800
> @@ -20,13 +20,15 @@
>  #include <asm/e820.h>
>  #include <asm/page.h>
>  
> -static struct acpi_table_slit *__read_mostly acpi_slit;
> -
>  static nodemask_t nodes_parsed __initdata;
>  static nodemask_t nodes_found __initdata;
>  static struct node nodes[MAX_NUMNODES] __initdata;
>  static u8 __read_mostly pxm2node[256] = { [0 ... 255] = 0xff };
>  
> +static struct {
> +	struct acpi_table_slit slit_table;
> +	u8 entries[MAX_NUMNODES * MAX_NUMNODES];
> +} acpi_slit;
>  
>  static int num_node_memblks;
>  static struct node node_memblk_range[NR_NODE_MEMBLKS];
> @@ -144,7 +146,8 @@
>  		printk(KERN_INFO "ACPI: SLIT table looks invalid. Not used.\n");
>  		return;
>  	}
> -	acpi_slit = slit;
> +
> +	memcpy(&acpi_slit, slit, slit->header.length);
>  }
>  
>  /* Callback for Proximity Domain -> LAPIC mapping */
> @@ -424,10 +427,10 @@
>  {
>  	int index;
>  
> -	if (!acpi_slit)
> +	if (!acpi_slit.slit_table.header.length)
>  		return a == b ? 10 : 20;
> -	index = acpi_slit->locality_count * node_to_pxm(a);
> -	return acpi_slit->entry[index + node_to_pxm(b)];
> +	index = acpi_slit.slit_table.locality_count * node_to_pxm(a);
> +	return acpi_slit.slit_table.entry[index + node_to_pxm(b)];
>  }
>  
>  EXPORT_SYMBOL(__node_distance);
> diff -r b474725a242b xen/include/acpi/actbl1.h
> --- a/xen/include/acpi/actbl1.h Thu Feb 25 07:50:38 2010 -0800
> +++ b/xen/include/acpi/actbl1.h              Thu Feb 25 08:02:36 2010 -0800
> @@ -573,7 +573,7 @@
>  struct acpi_table_slit {
>  	struct acpi_table_header header;	/* Common ACPI table header */
>  	u64 locality_count;
> -	u8 entry[1];	/* Real size = localities^2 */
> +	u8 entry[0];	/* Real size = localities^2 */
>  };
>  
>  
>  /*******************************************************************************
> 





 

