This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-ia64-devel] Re: [Xen-devel] XenLinux/IA64 domU forwardport

To: "Alex Williamson" <alex.williamson@xxxxxx>
Subject: RE: [Xen-ia64-devel] Re: [Xen-devel] XenLinux/IA64 domU forwardport
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Fri, 15 Feb 2008 09:02:27 +0800
Cc: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 14 Feb 2008 17:02:44 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <1203029485.6367.65.camel@lappy>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20080214070957.GA8464%yamahata@xxxxxxxxxxxxx> <10EA09EFD8728347A513008B6B0DA77A02C430E5@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20080214100134.GB8464%yamahata@xxxxxxxxxxxxx> <10EA09EFD8728347A513008B6B0DA77A02C431D7@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <1203029485.6367.65.camel@lappy>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AchvXFTSgoBHlyG3RDOJ2njLcj818AADtDDg
Thread-topic: [Xen-ia64-devel] Re: [Xen-devel] XenLinux/IA64 domU forwardport
Alex Williamson wrote:
> On Fri, 2008-02-15 at 00:43 +0800, Dong, Eddie wrote:
>> I agree with your category, but I think #C is the 1st challenge  we
>> need to address for now. #A could be a future task for performance
>>  later after pv_ops functionality is completed. I don't worry about
>>  those several cycles difference in the primitive ops right now,
>>  since we already spend 500-1000 cycles to enter the C code.
>    IMHO, #A and #C are both blockers for getting into upstream
> Linux/ia64.  Upstream isn't going to accept a performance hit for a
> paravirt enabled kernel on bare metal, so I'm not sure we should
> prioritize one over the other, especially since Isaku has already made
> such good progress on #A.

I guess we are talking from different angles, which hides the real
issues. We have multiple alternatives:
1: pv_ops
2: pv_ops + binary patching to convert the indirect function calls into
direct function calls, as on x86
3: pure binary patching

For the community,
#1 needs a lot of effort, much like what Jeremy spent on the x86 side;
it could last 6-12 months.
#2 is based on #1; the additional effort is very small, probably 2-4.
#3 is not pv_ops; it may need 2-3 months of effort.

Per my understanding of Yamahata-san's previous patch, it addresses part
of the #3 effort, i.e. #A of #3.

What I want to suggest is #2.

With pv_ops, all those instructions in A/B/C are already replaced by
source-level pv_ops code, so no binary patching is needed. The only
patching needed in #2 is to convert the indirect function calls into
direct function calls for some hot APIs, as x86 does for cli/sti. The
majority of pv_ops are not patched.

So the #2 and #3 approaches are somewhat in conflict, and we probably
need to decide which way to go early.

For the #1 effort, adopting pv_ops in the IVT code is one of the major
tasks, i.e. item #C in the previous email.

The current progress on #3 won't be wasted; it simplifies the debugging
effort for #2, since it already got a new kernel working :)

>> The major challenge to #C is listed in my previous thread, it is not
>> an easy thing to address for now, especially if we need to change
>> original IVT code a lot.
>    The question of how to handle the IVT needs to be decided on
> Linux-ia64.  There are a couple approaches we could take, but it
> really comes down to what Tony and the other developers feel is
> cleanest and most maintainable.

100% agree! I will start a thread there soon.

>    I think we actually have similar issues with the C code in
> sba_iommu and swiotlb.  We have paravirtualized versions of these,
> but they're very Xen specific.  I think we'll need to abstract the
> interfaces more to make the inline paravirtualization acceptable.
>> Another big challenge is machine vector. I would like to create a
>> seperate thread to discuss it some time later. Basically it has
>> something overlap with pv_ops.
>    We might extend the machine vector to include some PV features, but
> at the moment, they seem somewhat orthogonal to me.  The current xen
> machine vector helps to simplify things for an unprivileged guest, but


> dom0 will need to use the appropriate bare metal machine vector while
> still making use of pv_ops.  So we somehow need to incorporate pv_ops

Yes, since dom0 has to see the same platform as bare metal, we need
those very low-level pv_ops beneath the machine vector to make dom0 work
on different platforms in the future, such as SGI platforms.

For an unprivileged guest, we can either keep the xen machine vector or
rely purely on pv_ops; for example, we could present domU a native-like
dig machine vector with pv_ops beneath it, and see whether that
simplifies the upstream changes.

My position on the machine vector for now is to leave it as it is; we
can revisit it at a later stage.

> into all the machine vectors.  Thanks,
>       Alex

thx, eddie

Xen-ia64-devel mailing list