To: Christoph Egger <Christoph.Egger@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 2/4] [HVM] introduce CPU affinity for allocate_physmap call
From: Keir Fraser <keir@xxxxxxxxxxxxx>
Date: Mon, 13 Aug 2007 15:06:39 +0100
Cc: Andre Przywara <andre.przywara@xxxxxxx>
Delivery-date: Mon, 13 Aug 2007 07:07:17 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200708131459.31305.Christoph.Egger@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcfdsyrIaWhudEmmEdymlgAX8io7RQ==
Thread-topic: [Xen-devel] [PATCH 2/4] [HVM] introduce CPU affinity for allocate_physmap call
User-agent: Microsoft-Entourage/11.3.3.061214
On 13/8/07 13:59, "Christoph Egger" <Christoph.Egger@xxxxxxx> wrote:

>> We cannot change the size of existing hypercall structures.
> 
> Except if Xen bumps its major version number to 4? :-)
> 
> Are you worrying about PV guests that lag behind in syncing
> public headers, such as NetBSD/Xen?

It's not merely an API issue; it's an ABI compatibility issue. Existing
guests will provide structures that are too small (and thus have trailing
garbage, or potentially even cross over into an unmapped page, causing
copy_from_guest() to fail). Also, this particular structure is embedded
inside others (such as struct xen_memory_exchange), so growing it would
shift all the subsequent field offsets. Not good.
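
To make the offset problem concrete, here is a standalone sketch (simplified
field names, not the real public headers, and the appended "node" member is
purely hypothetical): growing the embedded reservation structure moves every
later field of the structure that contains it.

    /* Sketch only: appending a field to a nested structure shifts the
     * offsets of all later fields in any structure that embeds it, which
     * is exactly what breaks old guests at the ABI level. */
    #include <stdio.h>
    #include <stddef.h>

    struct reservation_v1 {              /* what existing guests compiled against */
        unsigned long extent_start;
        unsigned long nr_extents;
        unsigned int  extent_order;
        unsigned int  domid;
    };

    struct reservation_v2 {              /* hypothetical: one field appended */
        unsigned long extent_start;
        unsigned long nr_extents;
        unsigned int  extent_order;
        unsigned int  domid;
        unsigned int  node;              /* hypothetical NUMA node member */
    };

    struct exchange_v1 { struct reservation_v1 in, out; unsigned long nr_exchanged; };
    struct exchange_v2 { struct reservation_v2 in, out; unsigned long nr_exchanged; };

    int main(void)
    {
        /* The embedded copy grows, so "out" and "nr_exchanged" move. */
        printf("offsetof(out):          %zu -> %zu\n",
               offsetof(struct exchange_v1, out),
               offsetof(struct exchange_v2, out));
        printf("offsetof(nr_exchanged): %zu -> %zu\n",
               offsetof(struct exchange_v1, nr_exchanged),
               offsetof(struct exchange_v2, nr_exchanged));
        return 0;
    }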

> Making struct xen_machphys_mapping NUMA-aware is also a no-go, right?
> It would additionally need a min_mfn and a vnodeid member.

Actually I think it can stay as is. Guests are supposed to be robust against
unmapped holes in the m2p table. So we can continue to have one big virtual
address range covering all valid MFNs. This is only going to fail if virtual
address space is scarce compared with machine address space (e.g., we kind
of run up against this in a mild way with x86 PAE).
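
The guest-side pattern that makes those holes tolerable looks roughly like
the sketch below (toy tables and a stubbed fault-tolerant read, not actual
guest code): bounds-check the MFN, read the m2p entry through an accessor
that is allowed to fail on an unmapped page, and cross-check the result
against the guest's own p2m before trusting it.

    /* Sketch: tolerant mfn_to_pfn() lookup over an m2p table that may
     * contain unmapped holes. All tables and safe_read_m2p() are
     * stand-ins for what a real PV guest would provide. */
    #include <stdbool.h>
    #include <stdio.h>

    #define INVALID_PFN (~0UL)
    #define MAX_MFN     16UL                     /* toy machine address space */

    static unsigned long m2p[MAX_MFN];           /* stand-in for the shared m2p table */
    static bool          m2p_mapped[MAX_MFN];    /* models which m2p pages are mapped */
    static unsigned long p2m[MAX_MFN];           /* stand-in for the guest's own p2m */

    /* Models a read done under an exception-table fixup: fails on a hole. */
    static bool safe_read_m2p(unsigned long mfn, unsigned long *pfn)
    {
        if (mfn >= MAX_MFN || !m2p_mapped[mfn])
            return false;
        *pfn = m2p[mfn];
        return true;
    }

    static unsigned long mfn_to_pfn(unsigned long mfn)
    {
        unsigned long pfn;

        if (!safe_read_m2p(mfn, &pfn))
            return INVALID_PFN;                  /* hole in the m2p: not our frame */
        if (pfn >= MAX_MFN || p2m[pfn] != mfn)
            return INVALID_PFN;                  /* stale or foreign entry */
        return pfn;
    }

    int main(void)
    {
        m2p[5] = 2; m2p_mapped[5] = true; p2m[2] = 5;    /* one valid translation */
        printf("mfn 5: %s\n", mfn_to_pfn(5) == INVALID_PFN ? "invalid" : "mapped");
        printf("mfn 9: %s\n", mfn_to_pfn(9) == INVALID_PFN ? "invalid" : "mapped");
        return 0;
    }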

> Oh, and how should the guest query how many vnodes exist?

I think we should add topology discovery hypercalls. Xen needs to know this
stuff anyway, so we just provide a mechanism for guests to extract it. An
alternative is to start exporting virtual ACPI tables to PV guests.
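
Something along these lines, purely as a sketch of the shape such a query
could take (every name below is invented for illustration and does not
exist in the public headers):

    /* Hypothetical topology-discovery argument structures -- invented
     * names, only to show the kind of data a guest would pull out. */
    #include <stdint.h>
    #include <stdio.h>

    struct topology_node_info {          /* per-vnode record (hypothetical) */
        uint32_t node_id;
        uint32_t nr_vcpus;               /* vcpus assigned to this vnode */
        uint64_t memsize;                /* bytes of memory on this vnode */
    };

    struct topology_query {              /* hypercall argument (hypothetical) */
        uint32_t max_nodes;              /* IN:  entries the buffer can hold */
        uint32_t nr_nodes;               /* OUT: vnodes actually present */
        struct topology_node_info *buf;  /* IN:  guest-allocated buffer */
    };

    /* Stub standing in for the hypercall itself. */
    static int query_topology(struct topology_query *q)
    {
        q->nr_nodes = 1;
        if (q->max_nodes >= 1) {
            q->buf[0].node_id  = 0;
            q->buf[0].nr_vcpus = 4;
            q->buf[0].memsize  = 1UL << 30;
        }
        return 0;
    }

    int main(void)
    {
        struct topology_node_info nodes[8];
        struct topology_query q = { .max_nodes = 8, .nr_nodes = 0, .buf = nodes };
        query_topology(&q);
        printf("vnodes: %u, node0 vcpus: %u\n",
               (unsigned)q.nr_nodes, (unsigned)q.buf[0].nr_vcpus);
        return 0;
    }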

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel