Isaku Yamahata wrote:
> Dong, Eddie wrote:
>> I guess we are talking from different angles, which hides the real
>> issues. We have multiple alternatives:
>> 1: pv_ops
>> 2: pv_ops + binary patching to convert the indirect function calls
>> to direct function calls, as on x86
>> 3: pure binary patching
>>
>> For community,
>> #1 needs a lot of effort, like what Jeremy spent on the x86 side; it
>> could last 6-12 months. #2 is based on #1, and the additional effort
>> is very small, probably 2-4 weeks. #3 is not pv_ops; it may need 2-3
>> months of effort.
>>
>> Per my understanding of Yamahata-san's previous patch, it addresses
>> part of the #3 effort, i.e. #A of #3. What I want to suggest is #2.
>
> Hmm, by "pv_ops" you mean a set of functions which are grouped, right?
> My current implementation does
> #define ia64_fc(addr) paravirt_fc(addr)
> ...
> But do you want to make them indirect calls?
> i.e. something like
> #define ia64_fc(addr) pv_ops->fc(addr)
That is what x86 pv_ops does, e.g. the following pv_ops hook for the
halt instruction on x86:
static inline void halt(void)
{
	PVOP_VCALL0(pv_irq_ops.safe_halt);
}
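For ia64, the indirect-call style Yamahata-san asked about could look
roughly like the sketch below. This is only a sketch: the pv_cpu_ops
structure and its .fc member are assumed names, not existing code; the
native backend body is just the existing ia64_fc intrinsic.

/* Sketch only: indirect-call pv_ops style for ia64_fc, by analogy
 * with x86.  pv_cpu_ops and .fc are assumed names. */
struct pv_cpu_ops {
	void (*fc)(unsigned long addr);	/* flush cache line */
	/* ... hooks for the other privileged instructions ... */
};

extern struct pv_cpu_ops pv_cpu_ops;

/* Native backend: the real fc instruction, as in the current ia64
 * intrinsics. */
static inline void native_fc(unsigned long addr)
{
	asm volatile ("fc %0" :: "r"(addr) : "memory");
}

#define ia64_fc(addr)	pv_cpu_ops.fc((unsigned long)(addr))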
The key issue is that the current approach (putting native
instructions in by default) always needs binary patching, while
pv_ops doesn't make that assumption. On x86, only cli/sti/iret and
"sti; sysexit" sit at patch sites, and Xen/x86 patches cli/sti from
indirect calls to direct calls. In any case, whether to patch at all
is entirely up to the hypervisor itself.
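To make the "patching is optional" point concrete, here is a rough
sketch of the x86-style patch pass, simplified from apply_paravirt()
in the x86 tree. The struct layout and hook signatures are trimmed
here, so treat the details as assumptions rather than the exact x86
code.

#include <linux/types.h>

/* Backend patch hook and nop-padding helper; the names follow the
 * x86 code, but the signatures are simplified for this sketch. */
extern struct pv_init_ops {
	unsigned (*patch)(u8 type, void *site, unsigned len);
} pv_init_ops;
extern void add_nops(void *site, unsigned len);

/* Each patchable call site is recorded in a .parainstructions
 * section entry. */
struct paravirt_patch_site {
	u8 *instr;	/* address of the patchable call site */
	u8 instrtype;	/* which pv_ops slot this site calls */
	u8 len;		/* bytes available at the site */
};

void apply_paravirt(struct paravirt_patch_site *start,
		    struct paravirt_patch_site *end)
{
	struct paravirt_patch_site *p;

	for (p = start; p < end; p++) {
		/* The backend decides how much to rewrite: inline
		 * native code, a direct call, or nothing at all;
		 * keeping the default indirect call is legal. */
		unsigned used = pv_init_ops.patch(p->instrtype,
						  p->instr, p->len);
		/* Pad whatever is left with nops. */
		add_nops(p->instr + used, p->len - used);
	}
}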
>
>
>> With pv_ops, all those instructions in A/B/C are already replaced
>> by source-level pv_ops code, so no binary patching is needed. The
>> only patching needed in #2 is to convert indirect function calls
>> to direct function calls for some hot APIs, as x86 does for
>> cli/sti. The majority of pv_ops are not patched.
>>
>> So the #2 and #3 approaches basically conflict, and we probably
>> need to decide which way to go early.
>
> It's not difficult to turn #A of #3 into #A of #2.
Yes, but the issue is that pv_ops-based patching is purely optional,
while this patch makes it mandatory. And the 3-5 cycles saved by
patching are too small to matter for a huge C function; that should
be addressed at a later stage.
Looking at the x86 code, only arch/x86/kernel/entry_32.S may be
patched, e.g. sysenter_entry(); no C code is. For IA64, the
equivalent is the IVT code if we take the same policy as x86, i.e.
#C is critical and may need patching. (Note: even for #C, whether to
patch is still optional.)
> (At least for turning the current implementation into #A of #2,
> but it requires more work and degrades performance.)
> However I don't see any advantage of #A of #2 over #A of #3.
We don't need #A at all.
> If it is necessary to call some other function for #A of #3,
> it is possible to rewrite the instructions into something like
>
> mov reg = 1f
> br <target25> (relocation is necessary)
> 1:
>
> So the remaining issues are how many instructions (or bundles)
> should be reserved for each operation and what their calling
> convention is. Although I currently put the native instructions in
> as the default case, you can put in the above sequence if you desire.
The issue is that we don't have clobber registers there; that is why
I say pv_ops for the ASM code is the key challenge, and we need to
change the IVT code a lot to get clobber registers. That is why
adding pv_ops support is a big challenge, while patching or not
patching is not that difficult. Even if we want to patch, we need to
get the pv_ops code done first and then do the optimization.
For the C-based code we can always use scratch registers.
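As a small illustration of that last point, a pv_ops hook called from
C is just an ordinary function call, so the compiler manages the
scratch registers per the ia64 software conventions. A sketch only:
flush_range and the 32-byte cache-line stride are illustrative
assumptions, and pv_cpu_ops.fc is the assumed hook from the sketch
above.

/* The compiler spills and reloads caller-saved registers around the
 * indirect call for us; nothing like this is possible in the
 * hand-written IVT, where nearly every register is live. */
static void flush_range(unsigned long start, unsigned long end)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += 32)	/* assumed stride */
		pv_cpu_ops.fc(addr);	/* ordinary indirect call */
}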
> Given that #A of #2 is for the performance-critical path, not using
> the usual stacked calling convention would be acceptable.
> As you already proposed, the PAL static calling convention is a
> candidate.
Not at all. For #A, the code is already in C and thus memory is
available, so the C calling convention can be applied seamlessly.
A PAL-like convention is only for the IVT code.
>
> However I don't see any advantage in switching from the current
> convention (using r8, r9, ...) for #A at this moment.
I don't oppose that either. I'll just leave the question here and let
the Linux guys decide.
To stay neutral, I won't let a Xen-specific implementation impact the
pv_ops design, since the latter is hypervisor-neutral. But I can
accept either.
> It is necessary to discuss with the linux-ia64 people to see if it's
> acceptable or not. If we find it necessary to change the convention,
> it wouldn't be so difficult to do so. But that should come after
> discussion with linux-ia64. Not now.
>
>
Actually, I didn't oppose binary patching; my point is that we can't
assume patching is a must for every hypervisor. Leaving native code
in by default would enforce that assumption.
Also, I think we should get pv_ops done first and then do the
optimization (patching); the reverse order will just cost the whole
community more effort. Once we get pv_ops done, the framework used
in this patch can be extended to that code base, and we can decide
which parts need patching.
Per my understanding of this patch, 90% of the effort is
forward-porting Xen from 2.6.18 to the latest Linux code; the binary
patching part is just one small portion. We will always need that
forward-porting effort, even if we decide to go with pv_ops :)
Really big progress, and many thanks from me :)
Eddie
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel