WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
To: Mark Langsdorf <mark.langsdorf@xxxxxxx>
Subject: [Xen-devel] Re: [PATCH][retry 2][2/2] new platform hypervisor call to get APICIDs
From: Chris Lalancette <clalance@xxxxxxxxxx>
Date: Mon, 03 Mar 2008 17:40:59 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 03 Mar 2008 14:41:27 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200803031540.28151.mark.langsdorf@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <200803031540.28151.mark.langsdorf@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.12 (X11/20080226)
Mark Langsdorf wrote:
> Some AMD machines have APIC IDs that are not equal to CPU IDs.
> In the default Xen configuration, ACPI calls on these machines
> can get confused.  This shows up most noticeably when running
> AMD PowerNow!.  The only solution is for dom0 to get the
> hypervisor's cpuid to apicid table when needed (i.e., when dom0
> vcpus are pinned).
> 
> Make dom0 call a new platform hypercall that returns the
> hypervisor's cpuid to apicid table.  The decision logic
> (dom0_vcpus_pinned) is currently hard-coded but should be
> passed from the hypervisor.  Keir wrote that he would take
> care of that when he suggested this solution.
> 
> I have tested this on my 4p/16 core machine and it works.  I
> would appreciate testing on other boxes.

This patch is much better, although unfortunately the dom0_vcpus_pinned change
doesn't look like it will work as-is.  As it stands, the only failure case I see
on the hypervisor side is a failed copy_to_guest, which means the only way dom0
would ever conclude its vcpus are unpinned is after a major failure.  It seems
you really need to distinguish three cases: a) making the platform op call on an
HV that doesn't support it, b) making the platform op call and having it report
that dom0 is unpinned, and c) making the platform op call and having it report
that dom0 is in fact pinned.

Generally, I'm not against the way you've done it here, but originally I thought
you would re-enable the code in dom0's mpparse-xen.c and then just add a
hypercall to determine whether dom0 has its vcpus pinned or not.  The advantage
of that approach is that if any other information is ever needed from the MP
tables, it will already be available in dom0.  If we don't think we are likely
to need additional information from the MP tables, then it is more or less a
wash which way you do it.

Chris Lalancette

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel