xen-devel

Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again

To: James Harper <james.harper@xxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again
From: Steve Ofsthun <sofsthun@xxxxxxxxxxxxxxx>
Date: Fri, 05 Sep 2008 11:15:43 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, bart brooks <bart_brooks@xxxxxxxxxxx>
Delivery-date: Fri, 05 Sep 2008 08:16:06 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AEC6C66638C05B468B556EA548C1A77D01490563@trantor>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D01490563@trantor>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (X11/20080421)
James Harper wrote:
(Bart - I hope you don't mind me sending your email to the list)

Keir,

As per a recent discussion, I modified the IRQ code in the Windows GPLPV
drivers so that only the vcpu_info[0] structure is used, instead of the
vcpu_info[current_cpu] structure. As Bart's email below describes, though,
this change has caused performance problems for him.

Have I understood correctly that only entry 0 of the vcpu_info[] array is
ever used, even if the interrupt actually occurs on another VCPU? Is this
true for all versions of Xen? It seems that Bart's experience is exactly
the opposite of mine - the change that fixed the performance issues
for me caused performance issues for him...
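
A minimal sketch (not the actual GPLPV source) of what "use only
vcpu_info[0]" means in an event-channel dispatch loop; the structure
layouts follow Xen's public xen.h, handle_port() is a hypothetical
per-port handler, and GCC atomics stand in for whatever the driver
really uses:

#include <stdint.h>

#define MAX_VIRT_CPUS 32                      /* from Xen's public headers */
#define BITS_PER_LONG (sizeof(unsigned long) * 8)

struct vcpu_info {
    uint8_t evtchn_upcall_pending;            /* "an event is pending" flag */
    uint8_t evtchn_upcall_mask;
    unsigned long evtchn_pending_sel;         /* selects words of evtchn_pending[] */
    /* arch-specific fields omitted */
};

struct shared_info {
    struct vcpu_info vcpu_info[MAX_VIRT_CPUS];
    unsigned long evtchn_pending[BITS_PER_LONG];
    unsigned long evtchn_mask[BITS_PER_LONG];
    /* remaining fields omitted */
};

void handle_port(unsigned int port);          /* hypothetical per-port handler */

/* Called from the ISR regardless of which VCPU took the interrupt. */
static void evtchn_dispatch_vcpu0(struct shared_info *s)
{
    struct vcpu_info *v = &s->vcpu_info[0];   /* always VCPU 0 */

    v->evtchn_upcall_pending = 0;
    unsigned long sel = __atomic_exchange_n(&v->evtchn_pending_sel, 0,
                                            __ATOMIC_ACQ_REL);
    while (sel != 0) {
        unsigned int i = (unsigned int)__builtin_ctzl(sel);
        sel &= sel - 1;                       /* clear lowest set bit */
        unsigned long bits = s->evtchn_pending[i] & ~s->evtchn_mask[i];
        while (bits != 0) {
            unsigned int j = (unsigned int)__builtin_ctzl(bits);
            bits &= bits - 1;
            __atomic_fetch_and(&s->evtchn_pending[i], ~(1UL << j),
                               __ATOMIC_ACQ_REL);
            handle_port(i * BITS_PER_LONG + j);
        }
    }
}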

While the event channel delivery code "binds" HVM event channel interrupts to VCPU0,
the interrupt is delivered via the emulated IOAPIC. The guest OS may program this
"hardware" to deliver the interrupt to other VCPUs; under Linux, this is done by the
irqbalance code, among others. Xen overrides this routing for the timer 0 interrupt
path in vioapic.c, under #define IRQ0_SPECIAL_ROUTING. We hacked our version of Xen
to piggyback on this code and force all event channel interrupts for HVM guests to
bypass any guest rerouting as well:

#ifdef IRQ0_SPECIAL_ROUTING
   /* Force round-robin delivery to pick VCPU 0.  The PIT channel 0
    * check is Xen's existing special case for the timer interrupt;
    * is_hvm_callback_irq() is our addition, matching the HVM event
    * channel callback IRQ. */
   if ( ((irq == hvm_isa_irq_to_gsi(0)) && pit_channel0_enabled()) ||
        is_hvm_callback_irq(vioapic, irq) )
       deliver_bitmask = (uint32_t)1;   /* destination bitmask: VCPU 0 only */
#endif

This routing override provides a significant performance boost [or rather,
avoids the performance penalty] for SMP PV drivers, up until the point that
VCPU0 is saturated with interrupts.  You can probably achieve the same thing
by forcing the guest OS to set its interrupt affinity to VCPU0; under Linux,
for example, you can disable the irqbalance code.
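
For instance (illustrative only - the IRQ number 16 below is made up; check
/proc/interrupts in the guest for the real one), an IRQ can be pinned to
CPU 0 from userspace by writing a CPU bitmask to /proc/irq/<N>/smp_affinity,
the same interface irqbalance itself manipulates:

/* Pin Linux guest IRQ 16 (hypothetical number) to CPU 0. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/irq/16/smp_affinity", "w");
    if (f == NULL) {
        perror("fopen /proc/irq/16/smp_affinity");
        return 1;
    }
    fputs("1\n", f);      /* CPU bitmask: 0x1 == CPU 0 only */
    fclose(f);
    return 0;
}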

Steve


Bart: Can you have a look through the xen-devel list archives and have a
read of a thread with a subject of "HVM windows - PCI IRQ firing on both
CPU's", around the middle of last month? Let me know if you interpret
that any differently to me...

Thanks

James



-----Original Message-----
From: bart brooks [mailto:bart_brooks@xxxxxxxxxxx]
Sent: Friday, 5 September 2008 01:19
To: James Harper
Subject: Performance - Update GPLPV drivers -0.9.11-pre12
Importance: High

Hi James,



We have tracked down the issue where performance dropped off after
version 0.9.11-pre9; it still exists in version 0.9.11-pre12.

Event channel interrupts for transmit are generated only on VCPU-0,
whereas for receive they are generated on all VCPUs in round-robin
fashion. Post 0.9.11-pre9, the driver assumes that all interrupts are
generated on VCPU-0, so network interrupts generated on other VCPUs are
only processed if there is some activity on VCPU-0 or an outstanding
DPC. This caused packets to be processed out of order, leading to
retransmissions. Retransmissions happened only after a timeout (200ms)
with no activity during that interval. Overall it brought the bandwidth
down a lot, with huge gaps of no activity.



Instead of assuming that everything is on VCPU-0, the following change
was made in the xenpci driver, in the file evtchn.c, in the function
EvtChn_Interrupt():

int cpu = KeGetCurrentProcessorNumber() & (MAX_VIRT_CPUS - 1);

This is the same code found in version 0.9.11-pre9.
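
To put that one line in context, a hedged sketch of how it might sit in
the ISR follows. KeGetCurrentProcessorNumber() is the real Windows DDK
call and MAX_VIRT_CPUS comes from Xen's public headers (masking with
MAX_VIRT_CPUS - 1 works because it is a power of two); the surrounding
names (PXENPCI_DEVICE_DATA and its fields) are illustrative, not the
actual xenpci source:

/* Illustrative only - not the actual xenpci source. */
BOOLEAN EvtChn_Interrupt(PKINTERRUPT interrupt, PVOID context)
{
    PXENPCI_DEVICE_DATA xpdd = context;   /* hypothetical device extension */
    int cpu = KeGetCurrentProcessorNumber() & (MAX_VIRT_CPUS - 1);
    vcpu_info_t *vcpu_info = &xpdd->shared_info->vcpu_info[cpu];

    vcpu_info->evtchn_upcall_pending = 0;
    /* ... scan evtchn_pending_sel / evtchn_pending[] as usual,
     * then queue a DPC on this CPU to do the real work ... */
    return TRUE;                          /* the interrupt was ours */
}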



After this change, we are getting numbers comparable to 0.9.11-pre9.

Bart



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

