To: Anthony Liguori <aliguori@xxxxxxxxxx>
Subject: [Xen-devel] Re: Improving hvm IO performance by using self IO emulator (YA io-emu?)
From: tgingold@xxxxxxx
Date: Thu, 22 Feb 2007 21:58:58 +0100
Cc: Tristan Gingold <tgingold@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <45DDBF76.1030805@xxxxxxxxxx>
References: <20070222052309.GA2764@saphi> <45DDBF76.1030805@xxxxxxxxxx>
User-agent: Internet Messaging Program (IMP) 3.2.5
Quoting Anthony Liguori <aliguori@xxxxxxxxxx>:

> Hi Tristan,
>
> Thanks for posting this.
[...]
> I'm not quite sure that I agree this is the bottleneck.  If IO latency
> were the problem, then a major reduction in IO latency ought to
> significantly improve performance right?
Sure.

It is interesting that you don't agree; this seemed so obvious to me.
Maybe I should take measurements first and think only afterwards :-)

> KVM has a pretty much optimal path from the kernel to userspace.  The
> overhead of going to userspace is roughly two syscalls (and we've
> measured this overhead).  Yet it makes almost no difference in IO
> throughput.
The path can be split into two parts: from the trap to ioemu, and from ioemu to
the real hardware (the return trip is the same).  The ioemu-to-hardware part
should be roughly the same with KVM and Xen.  Is the trap-to-ioemu part that
different between Xen and KVM?

Honestly, I don't know.  Does anyone have figures?
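
On x86 a quick guest-side measure could be as simple as timing a port read
that is known to go to the device model.  Untested sketch (port 0x61 and the
iteration count are arbitrary, and of course ia64 has no rdtsc):

#include <stdint.h>
#include <stdio.h>
#include <sys/io.h>             /* iopl(), inb() -- Linux/x86 only */

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    enum { ITERS = 100000 };
    uint64_t start, end;
    int i;

    if (iopl(3) != 0)
        return 1;               /* needs root */

    start = rdtsc();
    for (i = 0; i < ITERS; i++)
        (void)inb(0x61);        /* port emulated by the device model */
    end = rdtsc();

    printf("%llu cycles per emulated port access\n",
           (unsigned long long)((end - start) / ITERS));
    return 0;
}

Run it in an HVM guest on Xen and in a KVM guest on the same box, and the
difference is exactly the trap-to-ioemu gap we are talking about.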

It would be interesting to compare disk (or network) performance between:
* linux
* dom0
* driver domain
* PV-on-HVM drivers
* ioemu

Does such a comparison exist?
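
Even a dumb sequential-read test run identically at each level would already
tell us a lot.  A sketch (the device name is only an example; O_DIRECT keeps
the page cache out of the picture; link with -lrt on older glibc):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    enum { BLK = 64 * 1024 };
    const long long total = 256LL * 1024 * 1024;
    long long done;
    struct timespec t0, t1;
    double secs;
    void *buf;
    int fd;

    if (posix_memalign(&buf, 512, BLK) != 0)
        return 1;
    fd = open("/dev/hda", O_RDONLY | O_DIRECT);   /* example device */
    if (fd < 0) { perror("open"); return 1; }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (done = 0; done < total; done += BLK)
        if (read(fd, buf, BLK) != BLK) { perror("read"); return 1; }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s\n", total / secs / 1e6);
    close(fd);
    return 0;
}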

> The big problem with disk emulation isn't IO latency, but the fact that
> the IDE emulation can only have one outstanding request at a time.  The
> SCSI emulation helps this a lot.
IIRC, a real IDE controller can only have one outstanding request at a time
too (this may have changed with AHCI).  But this is really from memory :-(
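
The programming model more or less forces this: the driver has to wait for
the previous command to finish before touching the taskfile again, so the
emulator never sees more than one request.  Illustrative sketch with faked
port I/O helpers (legacy primary-channel ports):

#include <stdint.h>

#define IDE_STATUS_BSY  0x80

/* stubs standing in for real port I/O and the IRQ wait */
static uint8_t inb(uint16_t port) { (void)port; return 0; }
static void outb(uint8_t val, uint16_t port) { (void)val; (void)port; }
static void wait_for_irq(void) { }

struct ide_request { uint8_t count, lba, cmd; };

static void ide_submit(const struct ide_request *req)
{
    while (inb(0x1f7) & IDE_STATUS_BSY)
        ;                         /* previous command still in flight */
    outb(req->count, 0x1f2);      /* sector count */
    outb(req->lba,   0x1f3);      /* LBA low; high bytes omitted */
    outb(req->cmd,   0x1f7);      /* command register */
    wait_for_irq();               /* one request at a time, always */
}

int main(void)
{
    struct ide_request req = { 1, 0, 0x20 };   /* READ SECTORS */
    ide_submit(&req);
    return 0;
}

SCSI (or NCQ) lets the device model see a whole queue at once, which is why
the SCSI emulation helps so much.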

BTW, on ia64 there is no REP IN/OUT.  When Windows uses IDE in PIO mode
(during install and crash dump), performance is horrible.  There is a patch
which adds special handling for PIO mode and really improves the data rate.
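
IIRC the idea of that patch is simply to complete the whole transfer in one
exit instead of taking one exit per 16-bit data-port access.  Illustrative
sketch, not the actual patch:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTOR 512

/* naive path: one trap per 16-bit data-port read, i.e. 256 round
 * trips between guest and emulator per sector */
static uint16_t pio_read_word(const uint8_t *dev, unsigned off)
{
    return (uint16_t)(dev[off] | (dev[off + 1] << 8));
}

/* patched path: recognize the transfer loop and move the whole
 * sector in a single exit */
static void pio_read_sector(const uint8_t *dev, uint8_t *dst)
{
    memcpy(dst, dev, SECTOR);
}

int main(void)
{
    uint8_t dev[SECTOR] = { 0 }, dst[SECTOR];
    unsigned i;

    for (i = 0; i < SECTOR; i += 2)        /* what the guest loop does */
        memcpy(dst + i, (uint16_t[]){ pio_read_word(dev, i) }, 2);
    pio_read_sector(dev, dst);             /* what the patch does */
    printf("both paths copied %u bytes\n", SECTOR);
    return 0;
}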

> I don't know what the bottleneck is in network emulation, but I suspect
> the number of copies we have in the path has a great deal to do with it.
That seems like the obvious reason.


[...]
> There's a lot to like about this sort of approach.  It's not a silver
> bullet wrt performance but I think the model is elegant in many ways.
> An interesting place to start would be lapic/pit emulation.  Removing
> this code from the hypervisor would be pretty useful and there is no
> need to address PV-on-HVM issues.
Indeed, this is the simplest code to move.  But why would it be useful?

> Can you provide more details on how the reflecting works?  Have you
> measured the cost of reflection?  Do you just setup a page table that
> maps physical memory 1-1 and then reenter the guest?
Yes: clear PG to disable paging, set up flat mode and reenter the guest.
The cost has not been measured yet!
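
In terms of the saved guest state it amounts to something like this (field
names hypothetical; just the idea, not actual code):

#include <stdint.h>
#include <stdio.h>

#define CR0_PG  (1u << 31)

struct guest_regs {                 /* hypothetical layout */
    uint32_t cr0;
    uint32_t eip, esp;
    uint32_t cs_base, ds_base;      /* flat mode: base 0, limit 4G */
};

static void reflect_to_emulator(struct guest_regs *regs,
                                uint32_t fw_entry, uint32_t fw_stack)
{
    /* the interrupted context has been saved already, so it can be
     * restored when the in-guest emulator returns (not shown) */
    regs->cr0 &= ~CR0_PG;           /* disable paging: phys == virt */
    regs->cs_base = 0;              /* flat segments */
    regs->ds_base = 0;
    regs->eip = fw_entry;           /* resume at the firmware entry */
    regs->esp = fw_stack;
}

int main(void)
{
    struct guest_regs regs = { CR0_PG | 1, 0, 0, 0, 0 };
    reflect_to_emulator(&regs, 0x1000, 0x8000);   /* dummy addresses */
    printf("guest reenters at %#x, paging %s\n",
           regs.eip, (regs.cr0 & CR0_PG) ? "on" : "off");
    return 0;
}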

> Does the firmware get loaded as an option ROM or is it a special portion
> of guest memory that isn't normally reachable?
IMHO it should come with hvmloader.  There is no need to make it unreachable.

Tristan.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
