[Xen-ia64-devel] RE: vcpu context merge

To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Subject: [Xen-ia64-devel] RE: vcpu context merge
From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
Date: Wed, 25 May 2005 09:44:19 -0700
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 25 May 2005 16:43:41 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVbe1RkD/iHinBzT3SB7w1CGdFiMQBEFR/wACTyKtAAEZ7HwAB3g3MgAIEYpkA=
Thread-topic: vcpu context merge

Sorry to take a while to reply... this message got buried when
I went looking for the answer at the end and I couldn't find
it at first.

> > Map the shared page at THIS virtual address.  (If an illegal virtual
> > address is passed, Xen can kill the domain.)
> So you mean you are using a hypercall to do this kind of
> special map instead of a TR map, right?
> Yes, this merge will not change that, and if mapping this
> virtual address via a hypercall is what you prefer, that
> will be great, as we are proposing a virtual TR attribute
> in addition to TC and TR for the guest.  So either no change
> at this point, or you will see a more extensive solution for this.

This particular "virtual TR" is critical, so it might warrant a
separate hypercall.  I need to ensure that the shared page is
pinned (in a physical TR) for performance in the guest
and because I access it with psr.ic off in Xen itself.  (The
physical TR and virtual address could be different but that
seems like a waste of precious TRs.)
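
To make the intent concrete, the shape I have in mind is roughly the
following (everything here is illustrative -- the op number, wrapper
name, and TR slot are placeholders, not an existing interface;
ia64_itr() is the usual Linux/ia64 helper for inserting a translation
register):

/* Illustrative sketch only -- not the actual Xen/ia64 interface. */

#define HYPERPRIVOP_SET_SHARED_VA  0x100        /* placeholder op number */

/* Guest side: ask Xen to map the shared page at a guest-chosen vaddr.
 * If vaddr is illegal, Xen is free to kill the domain. */
static inline long set_shared_va(unsigned long vaddr)
{
        return xen_hypercall1(HYPERPRIVOP_SET_SHARED_VA, vaddr); /* placeholder wrapper */
}

/* Xen side, conceptually: after validating vaddr, pin the page with a
 * physical data TR so the mapping never faults -- it has to work both
 * on guest fast paths and inside Xen with psr.ic off. */
static void pin_shared_page(unsigned long vaddr, unsigned long pte)
{
        /* ia64_itr(target_mask, tr_num, vmaddr, pte, log_page_size);
         * target_mask 0x2 selects the data TR; the slot number is a placeholder. */
        ia64_itr(0x2, 3, vaddr, pte, PAGE_SHIFT);
}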

> >> I know it introduces additional effort to do this in PV; Kevin
> >> and I can help together
> >> to make that happen if you need :)
> > 
> > If all the virtual registers are a fixed offset from THIS
> > virtual address (see xen-ia64.bkbits.net/xenlinux-ia64-2.6.11.bk
> > in include/asm-ia64/xen/processor.h), then only the
> > offset constants need to change.  If you can provide me
> > those constants for the new shared page, that would be
> > very helpful.
> Sure, I would like to suggest generating this offset
> automatically, like asm-offsets.c does now for Xen.
> What is your opinion?

Yes, asm-offsets is the right way to do it.  As I am working
toward transparent paravirtualization (with minimum impact
to Linux/ia64), I'd like to avoid using the Linux/ia64 mechanism
directly, but it may do for now.
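
Something along these lines would be fine with me (a sketch only --
the file name, structure, and field names below are placeholders for
whatever the merged VPD/shared-page layout ends up being; only the
DEFINE trick itself is the standard asm-offsets mechanism):

/* xen-offsets.c (placeholder name) -- compiled but never linked; the
 * kbuild sed step turns the "->" markers in the generated assembly
 * into #defines, exactly as asm-offsets.c does today. */
#include <linux/stddef.h>

#define DEFINE(sym, val) \
        asm volatile("\n->" #sym " %0 " #val : : "i" (val))

/* placeholder layout standing in for the merged shared area */
struct vcpu_shared_area {
        unsigned long interrupt_collection;     /* virtual psr.ic */
        unsigned long interrupt_delivery;       /* virtual psr.i  */
        unsigned long banknum;
};

void foo(void)
{
        DEFINE(XSI_PSR_IC_OFS, offsetof(struct vcpu_shared_area, interrupt_collection));
        DEFINE(XSI_PSR_I_OFS, offsetof(struct vcpu_shared_area, interrupt_delivery));
        DEFINE(XSI_BANKNUM_OFS, offsetof(struct vcpu_shared_area, banknum));
}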

> BTW, probably a single-line change is needed to select the
> base address of the shared VPD instead of the traditional shared
> page, as XenoLinux now has 2 separate shared pages: one for the
> traditional shared page info, and another one for the shared VPD.
> > 
> > Also, I noticed in some of the ctrl_if(?) code, some data
> > structure is assumed to be at a fixed offset (1024) from
> > the shared page.  Is this accounted for in your merged
> > data structure?
> I didn't understand this point well; can you say more about ctrl_if?
> How is it handled now?  Will the merge change the way it works?

Found it.  See #define get_ctrl_if() in ctrl_if.c (2048 not 1024,
which is why I couldn't find it).
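
For reference, the macro I was referring to (quoted from memory, so
treat the exact spelling of the type names as approximate):

/* drivers/xen/ctrl_if.c: the control interface is assumed to sit at a
 * fixed 2048-byte offset into the shared info page, so the merged
 * layout needs to either preserve that offset or change it explicitly. */
#define get_ctrl_if() ((control_if_t *)((char *)HYPERVISOR_shared_info + 2048))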

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
