xen-ia64-devel

RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq

To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Date: Wed, 22 Nov 2006 15:43:14 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Keir Fraser wrote on 22 Nov 2006, 15:07:
> On 22/11/06 3:33 am, "Xu, Anthony" <anthony.xu@xxxxxxxxx> wrote:
> 
>> After moving from the shared PIC to the set-irq hypercall,
>> KB on a UP VTI domain incurs >10% degradation.
>> 
>> The root cause is that a hypercall is very expensive on the IPF side
>> due to the huge processor context.
>> 
>> I reverted to the shared PIC in the latest Cset on the IPF side,
>> and we get the performance back.
> 
> We may well have similar degradation on x86 too. The cause is lots of
> unnecessary calls to the set_level hypercall (when the level hasn't
> actually changed). Qemu *definitely* needs to keep shadow wire state
> and only notify Xen on transitions. If the rate of hypercalls is
> still too high (which I think is unlikely) we can use batched
> multicalls.



I have tried shadow wire state on the IPF side and filtered out the
unnecessary set_level hypercalls, but that only recovers about 50% of
the performance degradation.

On the IPF side I set all interrupts to edge-triggered (there are no
shared interrupts in my environment), so the 1->0, 0->0 and 1->1
transitions are not passed to Xen by hypercall; only 0->1 is. That
saves about half of the set_level hypercalls, but it only gets ~50% of
the performance degradation back.
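
To make the filtering concrete, here is a minimal sketch of the idea
(irq_shadow, qemu_wire_changed and xc_hvm_set_irq_level are invented
names for illustration, not the real qemu/libxc interfaces):

    #include <stdint.h>

    /* Hypothetical libxc wrapper around the set_level hypercall. */
    extern int xc_hvm_set_irq_level(int xc_handle, uint32_t domid,
                                    int wire, int level);

    #define NR_WIRES 24

    static int irq_shadow[NR_WIRES];   /* last level seen per wire */

    /* With every interrupt treated as edge-triggered, only a 0->1
     * transition needs to reach Xen; everything else is filtered. */
    void qemu_wire_changed(int xc_handle, uint32_t domid,
                           int wire, int level)
    {
        if (irq_shadow[wire] == level)
            return;                    /* 0->0 or 1->1: drop */

        irq_shadow[wire] = level;

        if (level == 0)
            return;                    /* 1->0: edge sources ignore it */

        /* 0->1: the only transition passed to Xen by hypercall */
        xc_hvm_set_irq_level(xc_handle, domid, wire, level);
    }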

With the previous shared-PIC method, the interrupt assertion and the
I/O completion were likely passed to Xen with a single hypercall,
xc_evtchn_notify.

But now we may need two hypercalls, xc_evtchn_notify and set_level;
I think that is the reason for the other 50% of the performance
degradation.

Batching with a multicall may be a good idea. The only question is how
and when to batch xc_evtchn_notify and the set_level hypercall.

When xc_evtchn_notify is called, several set_level hypercalls may need
to be issued. But the set_level hypercall is based on the IRQ line
level, so how do we "remember" several set_level hypercalls? Maybe we
need to change the set_level hypercall interface.
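
For example, the interface could take an array of queued transitions
and be flushed together with the notification, so the two hypercalls
collapse into one flush point. A rough sketch only (struct irq_event,
queue_irq_edge, flush_and_notify and xc_hvm_set_irq_batch are all
invented names; only xc_evtchn_notify exists, and its signature is
approximated here):

    #include <stdint.h>

    #define NR_WIRES 24

    struct irq_event {
        uint8_t wire;      /* IOSAPIC pin */
        uint8_t level;     /* new line level */
    };

    /* Hypothetical batched form of the set_level hypercall. */
    extern int xc_hvm_set_irq_batch(int xc_handle, uint32_t domid,
                                    struct irq_event *ev, int n);
    /* Existing libxc call named in this thread; signature approximated. */
    extern int xc_evtchn_notify(int xc_handle, int port);

    static struct irq_event pending[NR_WIRES];
    static int nr_pending;

    /* Remember a 0->1 edge instead of issuing a hypercall at once. */
    void queue_irq_edge(int wire, int level)
    {
        pending[nr_pending].wire  = (uint8_t)wire;
        pending[nr_pending].level = (uint8_t)level;
        nr_pending++;
    }

    /* Called where xc_evtchn_notify is issued today: flush all queued
     * edges in one batched hypercall, then send the notification. */
    void flush_and_notify(int xc_handle, uint32_t domid, int evtchn_port)
    {
        if (nr_pending > 0) {
            xc_hvm_set_irq_batch(xc_handle, domid, pending, nr_pending);
            nr_pending = 0;
        }
        xc_evtchn_notify(xc_handle, evtchn_port);
    }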

> 
>> I plan to use a shared IOSAPIC to deliver interrupts from
>> Qemu to the VTI domain.
>> On the IPF side the PIC is not needed,
>> and at the same time we can assign more interrupt pins (24) to Qemu.
> 
> I moved x86 away from this on purpose, to obtain a clean abstraction.
> I don't think it's a good idea for ia64 to step backwards here.

Actually I don't want to do this if there is a better solution to
recover the performance.

-- Anthony

> 
>  -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel