[PATCH][RESEND]RE: [Xen-ia64-devel] [PATCH 0/6] Add full evtchn mechanism for xen/ia64
Hi, Alex,
Actually this is not a patch resend; it is rather a confirmation that the
previous patch set still works on the latest tip (Rev 10138). All 5 patches
apply cleanly to the tip with only a few lines of offset. Tested with the
same effect as before. :-)
Thanks,
Kevin
>-----Original Message-----
>From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
>[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
>Tian, Kevin
>Sent: May 18, 2006 21:58
>To: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>Subject: [Xen-ia64-devel] [PATCH 0/6] Add full evtchn mechanism for
>xen/ia64
>
>Hi, all,
> This patch set ports the full event channel mechanism to xen/ia64,
>so that all physical IRQs, virtual IRQs and IPIs are now bound to a
>specific event channel port. From now on, a typical /proc/interrupts of
>dom0 looks like the first listing below (a minimal binding sketch
>follows the two example listings):
>              CPU0
> 34:            12  Phys-irq     ide0
> 39:             0  Phys-irq     acpi
> 45:           322  Phys-irq     serial
> 48:        115006  Phys-irq     peth0
> 49:         16269  Phys-irq     ioc0
> 50:            31  Phys-irq     ioc1
> 51:             2  Phys-irq     ehci_hcd:usb1
> 52:             0  Phys-irq     uhci_hcd:usb2
> 53:            55  Phys-irq     uhci_hcd:usb3
>256:             0  Dynamic-irq  RESCHED0
>257:             0  Dynamic-irq  IPI0
>258:         44572  Dynamic-irq  timer0
>259:          2316  Dynamic-irq  xenbus
>260:          8304  Dynamic-irq  blkif-backend
>261:         25947  Dynamic-irq  vif3.0
>ERR:             0
>
>Then for an SMP domU:
>              CPU0          CPU1
>256:          1417             0  Dynamic-irq  RESCHED0
>257:            40             0  Dynamic-irq  IPI0
>258:          4937             0  Dynamic-irq  timer0
>259:             0          1691  Dynamic-irq  RESCHED1
>260:             0           165  Dynamic-irq  IPI1
>261:             0          4953  Dynamic-irq  timer1
>262:           220             0  Dynamic-irq  xenbus
>263:           189             0  Dynamic-irq  xencons
>264:          3493             0  Dynamic-irq  blkif
>265:           128             0  Dynamic-irq  eth0
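>
> For illustration, here is a minimal sketch of how a guest binds a
>virtual IRQ such as the per-vcpu timer to an event channel port, which
>then shows up as one of the "Dynamic-irq" lines above. This is not code
>from the patches; it only uses the public interface in
>xen/include/public/event_channel.h, and the exact name and signature of
>the hypercall wrapper are an assumption:
>
>#include <xen/interface/xen.h>            /* VIRQ_TIMER */
>#include <xen/interface/event_channel.h>  /* EVTCHNOP_bind_virq, struct evtchn_bind_virq */
>
>static int bind_timer_virq(unsigned int cpu)
>{
>    struct evtchn_bind_virq bind = {
>        .virq = VIRQ_TIMER,   /* which virtual IRQ to bind        */
>        .vcpu = cpu,          /* deliver its events to this vcpu  */
>    };
>
>    /* Ask Xen for an event channel port; bind.port is filled in on
>     * success. The two-argument HYPERVISOR_event_channel_op(cmd, arg)
>     * form used here is an assumption. */
>    if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &bind) != 0)
>        return -1;
>
>    /* The guest then maps bind.port to a dynamic IRQ number (the 256+
>     * range in the listings above) and registers its handler there. */
>    return bind.port;
>}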
>
> This patch set has been tested against the current tip (Rev 10021,
>"[IA64] pte_xchg added") in the following configurations:
>Non-vp + UP domU + domVTI
>Non-vp + SMP domU + domVTI
>VP + UP domU + domVTI
>VP + SMP domU + domVTI
>
> I also tested network performance with wget in several combinations
>(each result is the average of three runs):
>
>                            TIP              With patches
>Dom0 before 'xend start'    11.21Mb          11.21Mb
>Dom0 after 'xend start'     11.21Mb          11.21Mb
>Single domU                 5.49Mb           5.63Mb
>Single dom0 (domU is up)    11.21Mb          11.21Mb
>Both domU + dom0            9.62Mb/1.28Mb    9.63Mb/1.45Mb
>
> Based on the results above, dom0's network performance is not
>affected, while domU's improves by a few percent. dom0 should actually
>benefit as well, since I observed the ITC cycle count for handling a
>single eth0 interrupt drop by 35%; but dom0's wget was already on par
>with native, so no difference shows up in its result.
> So this patch set even helps performance a bit, while its main
>goal is to add the new feature needed to support driver domains. :-)
>
>Following are brief descriptions of the patches:
>[PATCH 1/6] 0_pull_header_files.patch
>[PATCH 2/6] 1_add_callback_ops.patch
>[PATCH 3/6] 2_add_evt_handle_path.patch
>[PATCH 4/6] 3_clean_pirq_bind_logic.patch
>The above 4 patches only serve as preparation steps; the logic they add
>is functionally disabled because nothing uses it yet.
>(I've also tested them against the tip and found nothing broken.)
>
>[PATCH 5/6] evtchn_common.patch
>This patch modifies common code; I sent it to xen-devel and it has
>already been checked into xen-unstable.hg. So Alex, could you do me a
>favor and sync with xen-unstable.hg first? :-)
>
>[PATCH 6/6] 4_final_evtchn_support.patch
>This is the patch that actually moves xen/ia64 to the full evtchn
>mechanism; once it is applied, you can observe the example output shown
>above.
>
>Cheers,
>Kevin
>
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel