
[Xen-devel] Introduction to VirtIO on Xen project


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Wei Liu <liuw@xxxxxxxxx>
  • Date: Wed, 27 Apr 2011 10:53:31 +0800
  • Delivery-date: Tue, 26 Apr 2011 19:54:24 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi, all.

I'm Wei Liu, a graduate student from Wuhan University, Hubei, China.
I've been accepted to GSoC 2011 for Xen and am responsible for the
"VirtIO on Xen" project. It's an honor to be accepted and to get
involved in this wonderful community. I've been doing Xen development
for my lab since late 2009.

As you all know, VirtIO is a generic paravirtualized I/O framework,
currently used mainly by KVM. But it should not be too hard to port
VirtIO to Xen. When done, Xen will have access to the Linux kernel's
VirtIO interfaces, and developers will have an alternative way to
deliver PV drivers besides the original ring-buffer flavor. This
project requires:

1. Modifying upstream QEMU, replacing the KVM-specific interfaces with
   generic QEMU functions;
2. Modifying Xen / the Xen tools to support VirtIO;
3. Modifying the Linux kernel's VirtIO interfaces.

We must take two usage scenarios into consideration:

1. PV-on-HVM;
2. Normal PV.

These two scenarios require working on different sets of functions:

1. XenBus vs. virtual PCI: how the channel between the two domains is created;
2. PV vs. HVM: how events / notifications are handled.

Most of the code in VirtIO will be left as it is, but the notification
mechanism should be replaced with Xen's event channel. This applies to
the QEMU port as well.
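
To make this concrete, here is a rough sketch of how the guest-side
"kick" could look. This is only my assumption of the shape of the
code, not final code: the xen_virtio_vq_priv structure and its evtchn
field are made up for illustration, and the hook would be wired in
when the transport creates the virtqueue.

/*
 * Sketch only: a Xen-flavoured virtqueue kick for the guest side.
 * Instead of writing the virtio-pci notify register (which KVM traps
 * and emulates), we signal the backend over an event channel that was
 * bound at setup time.  xen_virtio_vq_priv is hypothetical.
 */
#include <linux/virtio.h>
#include <xen/events.h>

struct xen_virtio_vq_priv {
        int evtchn;     /* event channel port bound to the backend */
};

static void xen_virtio_notify(struct virtqueue *vq)
{
        struct xen_virtio_vq_priv *priv = vq->priv;

        /* One hypercall replaces the trapped register write. */
        notify_remote_via_evtchn(priv->evtchn);
}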

In the PV-on-HVM case, QEMU needs to use event channels to receive /
send notifications, and the foreign mapping / grant table functions in
libxc / libxl to map memory pages. A virtual PCI bus will be used to
establish the channel between Dom0 and DomU. In some sense, this makes
no difference on the Linux kernel side.
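
Roughly, the QEMU backend side would need something like the sketch
below. I'm assuming the Xen 4.1-era libxc handle API here, and the
constant values are placeholders for what the toolstack / xenstore
would really supply:

/*
 * Sketch only, assuming the Xen 4.1 libxc handle API.  Error handling
 * is omitted for brevity.
 */
#include <sys/mman.h>
#include <xenctrl.h>

#define GUEST_DOMID   1   /* placeholder */
#define GUEST_PORT    5   /* placeholder: port advertised by the guest */
#define RING_GNT_REF  8   /* placeholder: grant ref of the vring page */

int main(void)
{
    xc_evtchn *xce = xc_evtchn_open(NULL, 0);
    xc_gnttab *xcg = xc_gnttab_open(NULL, 0);

    /* Notification path: bind to the guest's event channel. */
    int port = xc_evtchn_bind_interdomain(xce, GUEST_DOMID, GUEST_PORT);

    /* Shared-memory path: map the vring page the guest granted us.
     * (xc_map_foreign_range() is the alternative when mapping by pfn.) */
    void *vring = xc_gnttab_map_grant_ref(xcg, GUEST_DOMID, RING_GNT_REF,
                                          PROT_READ | PROT_WRITE);

    /* In QEMU proper, xc_evtchn_fd(xce) would be added to the main
     * loop; xc_evtchn_pending() / xc_evtchn_unmask() consume guest
     * kicks, and xc_evtchn_notify() kicks the guest back. */
    xc_evtchn_notify(xce, port);

    (void)vring;
    return 0;
}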

In the normal PV case, QEMU also needs event channels to receive /
send notifications and the foreign mapping functions in libxc / libxl
to map memory pages, but XenBus / Xenstore will be used to establish
the channel between Dom0 and DomU. The Linux VirtIO driver should use
Xen's event channel as its kick / notify function.
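
The frontend setup could follow the pattern of the existing PV
frontends (netfront / blkfront). The sketch below is only an
illustration under that assumption; the xenstore key names and the
interrupt handler are made up:

/*
 * Sketch only: how a Xen PV virtio transport's probe could publish
 * its ring and event channel through XenBus / Xenstore.
 */
#include <linux/interrupt.h>
#include <xen/xenbus.h>
#include <xen/grant_table.h>
#include <xen/events.h>
#include <xen/page.h>

static irqreturn_t xen_virtio_interrupt(int irq, void *dev_id)
{
        /* Would dispatch to vring_interrupt() for the affected queue. */
        return IRQ_HANDLED;
}

static int xen_virtio_setup(struct xenbus_device *dev, void *ring_page)
{
        int evtchn, ring_ref, err;

        /* Grant the backend access to the page holding the vring. */
        ring_ref = gnttab_grant_foreign_access(dev->otherend_id,
                                               virt_to_mfn(ring_page), 0);
        if (ring_ref < 0)
                return ring_ref;

        /* Allocate an unbound event channel for notifications. */
        err = xenbus_alloc_evtchn(dev, &evtchn);
        if (err)
                return err;

        /* Deliver backend kicks as an interrupt in the guest. */
        err = bind_evtchn_to_irqhandler(evtchn, xen_virtio_interrupt,
                                        0, dev->nodename, dev);
        if (err < 0)
                return err;

        /* Publish both ends so the QEMU backend can pick them up
         * from xenstore (key names are made up). */
        xenbus_printf(XBT_NIL, dev->nodename, "ring-ref", "%d", ring_ref);
        xenbus_printf(XBT_NIL, dev->nodename, "event-channel", "%d", evtchn);
        return 0;
}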

When the porting is finished, I will carry out performance tests with
standard tools such as ioperf, netperf and kernbench. The test suites
will be run on five different configurations:

1. Native Linux
2. Xen with PV-on-HVM VirtIO support
3. Xen with normal PV VirtIO support
4. Xen with original PV driver support
5. KVM with VirtIO support

A short report will be written based on the results.

This is a brief introduction to the project. Any comments are welcome.


-- 
Best regards
Wei Liu
Twitter: @iliuw
Site: http://liuw.name



 

