
[Xen-devel] Re: Improving hvm IO performance by using self IO emulator (YA io-emu?)



Selon Anthony Liguori <aliguori@xxxxxxxxxx>:

> Hi Tristan,
>
> Thanks for posting this.
[...]
> I'm not quite sure that I agree this is the bottleneck.  If IO latency
> were the problem, then a major reduction in IO latency ought to
> significantly improve performance right?
Sure.

It is interesting that you don't agree; it seemed obvious to me.
Maybe I should measure first and think only after :-)

> KVM has a pretty much optimal path from the kernel to userspace.  The
> overhead of going to userspace is roughly two syscalls (and we've
> measured this overhead).  Yet it makes almost no difference in IO
> throughput.
The path can be split into two parts: from the trap to ioemu, and from ioemu
to the real hardware (the return path is symmetric).  The ioemu-to-hardware
part should be roughly the same for KVM and Xen.  Is the trap-to-ioemu part
that different between Xen and KVM?

Honestly I don't know.  Does anyone have figures ?

It would be interesting to compare disk (or net) performances between:
* linux
* dom0
* driver domain
* PV-on-HVM drivers
* ioemu

Does such a comparison exist ?

> The big problem with disk emulation isn't IO latency, but the fact that
> the IDE emulation can only have one outstanding request at a time.  The
> SCSI emulation helps this a lot.
IIRC, a real IDE controller can only have one outstanding request at a time
too (this may have changed with AHCI).  But this is really from memory :-(

BTW, on ia64 there is no REP IN/OUT.  When Windows uses IDE in PIO mode
(during install and crash dump), performance is horrible.  There is a patch
which adds special handling for PIO mode and really improves the data rate.
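To illustrate why that matters (a conceptual sketch only, not the actual ia64
patch; the function names here are hypothetical): without REP IN/OUT every
16-bit word costs a separate trap, so the win comes from recognising the PIO
loop once and moving a whole sector per exit.

```c
#include <stdint.h>
#include <stddef.h>

/* Naive emulation: one trap/emulation round trip per 16-bit word,
 * so a 512-byte sector costs 256 VM exits. */
static void ide_pio_slow(uint16_t (*pio_read_word)(void), uint16_t *dst)
{
    for (size_t i = 0; i < 256; i++)
        dst[i] = pio_read_word();      /* each call stands for one exit */
}

/* Batched fast path: once the emulator knows a full sector is being
 * transferred, it copies the whole thing in a single pass - one exit
 * instead of 256. */
static void ide_pio_fast(const uint16_t *sector_buf, uint16_t *dst)
{
    for (size_t i = 0; i < 256; i++)
        dst[i] = sector_buf[i];        /* plain memory copy, no exits */
}
```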

> I don't know what the bottle neck is in network emulation, but I suspect
> the number of copies we have in the path has a great deal to do with it.
This reason seems obvious.


[...]
> There's a lot to like about this sort of approach.  It's not a silver
> bullet wrt performance but I think the model is elegant in many ways.
> An interesting place to start would be lapic/pit emulation.  Removing
> this code from the hypervisor would be pretty useful and there is no
> need to address PV-on-HVM issues.
Indeed, that is the simplest code to move.  But why would it be useful ?

> Can you provide more details on how the reflecting works?  Have you
> measured the cost of reflection?  Do you just setup a page table that
> maps physical memory 1-1 and then reenter the guest?
Yes: disable paging (clear PG), set up flat mode and reenter the guest.
The cost has not been measured yet!
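For concreteness, on 32-bit x86 with 4 MB pages the "map physical memory 1-1"
step boils down to something like the sketch below (a conceptual illustration
only; the real reflection code is architecture-specific, and on ia64 the
mechanism differs, but the page-table bits shown are the standard x86 ones):

```c
#include <stdint.h>

#define PTE_PRESENT  0x001u
#define PTE_RW       0x002u
#define PTE_PS       0x080u   /* PSE: entry maps a 4 MB page */

/* Fill a page directory so that virtual address == physical address
 * for the whole 4 GB space, using 1024 4-MB entries. */
static void build_identity_pd(uint32_t pd[1024])
{
    for (uint32_t i = 0; i < 1024; i++)
        pd[i] = (i << 22) | PTE_PRESENT | PTE_RW | PTE_PS;
}
```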

> Does the firmware get loaded as an option ROM or is it a special portion
> of guest memory that isn't normally reachable?
IMHO it should come with hvmloader.  There is no need to make it unreachable.

Tristan.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

