To: Alex Williamson <alex.williamson@xxxxxx>
Subject: Re: [Xen-ia64-devel] [patch 06/14] Kexec: Save the MADT ACPI tables so that they can be restored
From: Horms <horms@xxxxxxxxxxxx>
Date: Thu, 20 Sep 2007 12:24:33 +0900
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 19 Sep 2007 20:25:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <1189621284.6784.42.camel@lappy>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20070912080845.674923870@xxxxxxxxxxxx> <20070912082602.297052168@xxxxxxxxxxxx> <1189621284.6784.42.camel@lappy>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: mutt-ng/devel-r804 (Debian)
On Wed, Sep 12, 2007 at 12:21:24PM -0600, Alex Williamson wrote:
> On Wed, 2007-09-12 at 17:08 +0900, Simon Horman wrote:
> > plain text document attachment (ia64-xen-kexec-save-acpi.patch)
> > Xen mangles the MADT tables on boot up. But the pristine tables are needed
> > on kexec. So save the tables and restore them on kexec.
> > 
> > Note that this saves all the tables. A trimmed down save could
> > be done if preferred.
> 
>    This is touching common code, so probably ought to be approved
> through xen-devel.  It seems reasonable though.  Why doesn't x86 need
> something like this?  Thanks,

I looked through this issue again, and basically the problem is that,
due to the virtualisation of the lid (lsapic->id and lsapic->eid) that
occurs in acpi_update_lsapic(), the second kernel is unable to bring
up the AP on the HP 2620.
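
To illustrate, the effect of the mangling is roughly the following
(paraphrased from memory rather than the exact Xen code; lsapic_nbr is
just my name for the counter):

    /* acpi_update_lsapic(): the firmware's (id, eid) pair is replaced
     * with a sequential logical numbering of the enabled CPUs. */
    if (lsapic->flags.enabled) {
            lsapic->id  = lsapic_nbr++;     /* 0, 1, 2, ...          */
            lsapic->eid = 0;                /* real eid is discarded */
    }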

wakeup_secondary_cpu() sends the wakeup using ia64_send_ipi(), which
uses cpu_physical_id() to determine the destination. This call is backed
by ia64_get_lid(), which uses lsapic->id and lsapic->eid.
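
For reference, the chain looks roughly like this (paraphrasing the
Linux ia64 code from memory, so take the details with a grain of salt):

    /* MADT parsing: the physical CPU id is built from the LSAPIC entry. */
    cpu_phys_id = (lsapic->id << 8) | lsapic->eid;

    /* ia64_send_ipi(): the destination address in the processor
     * interrupt block is derived from that physical id. */
    phys_cpu_id = cpu_physical_id(cpu);
    ipi_addr = ipi_base_addr + ((phys_cpu_id << 4) | ((redirect & 1) << 3));
    writeq(ipi_data, ipi_addr);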

So in a nutshell, on the box in question, the second kernel ends up
trying to wake up CPU 0x100, where it should be trying CPU 0x200.
On my Tiger2 this doesn't manifest, because 0x200 happens to
be the correct physical ID.

The approach that I implemented alleviates this problem by restoring the
ACPI tables before going into purgatory and in turn booting the second
kernel. It's a little heavy-handed and could be trimmed - though that
would make little difference to the amount of code, and the amount of
memory used should be trivial in any case.
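
In outline it is no more than this (hypothetical helper names, the
patch's own identifiers differ):

    /* Boot: stash a pristine copy of the MADT before Xen mangles it. */
    saved_madt = xmalloc_bytes(madt->header.length);
    if (saved_madt)
            memcpy(saved_madt, madt, madt->header.length);

    /* Kexec: put the copy back before entering purgatory, so the
     * second kernel parses the firmware's original (id, eid) pairs. */
    memcpy(madt, saved_madt, madt->header.length);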

Although the patch is in common code (because the mangling is in
common code), it's actually only relevant to IA64, since only IA64's
mangling callbacks cause the problem described above. I guess I could
move the code into IA64-specific files, though the common code would
probably still need to be taught to call the save and restore routines.
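
Something along these lines, though this is just a hypothetical sketch:

    /* In common ACPI code: arches that don't mangle the tables get
     * empty defaults; IA64 would provide real save/restore routines. */
    #ifndef arch_acpi_kexec_save_tables
    #define arch_acpi_kexec_save_tables()    do { } while (0)
    #define arch_acpi_kexec_restore_tables() do { } while (0)
    #endif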

My question is: what is the purpose of the virtualisation of lid? I
tried removing the mangling of id and eid and was successfully able to
boot dom0, though clearly this isn't a comprehensive test. If the
mangling is necessary, could it be achieved in a different way, perhaps
by modifying ia64_get_lid() - though perhaps that wouldn't work on all
platforms?

-- 
Horms
  H: http://www.vergenet.net/~horms/
  W: http://www.valinux.co.jp/en/


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
