Re: [Xen-devel] netfront module
> Thanks for correction. I was confused by the module APIs used in netfront.
It's a neat feature of Linux that you can use the module APIs for drivers and
then have the option of compiling them either monolithically or as modules.
That said, we don't support modular builds of the netfront driver at the
moment ;-)
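For example, a driver written against the module API builds the same way
whether it ends up compiled in or loaded as a .ko. A minimal sketch (the
names are illustrative, not the actual netfront code):

#include <linux/init.h>
#include <linux/module.h>

/* The same init/exit pair works for both obj-y (built-in) and obj-m
 * (modular) builds; module_init() just becomes an initcall when the
 * driver is compiled into the kernel. */
static int __init example_netfront_init(void)
{
        /* probe for / register the virtual interface here */
        return 0;
}

static void __exit example_netfront_exit(void)
{
        /* only reachable when built as a module */
}

module_init(example_netfront_init);
module_exit(example_netfront_exit);
MODULE_LICENSE("GPL");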
> By the way, could you please explain a little how netfront and netback work
> together to achieve packet delivery and transmission?
Sure.
There are some details about our device virtualisation architecture in the
OASIS 2004 paper: http://www.cl.cam.ac.uk/netos/papers/2004-oasis-ngio.pdf
I'll give a lightning overview of the network virtualisation here, but to get
a really deep understanding you'd need to read the paper and UTSL (Use The
Source, Luke :-)
High level:
* Device drivers are split into a "frontend" and a "backend". The frontend
looks like a network device to the guest kernel. The backend driver looks
like a network device to dom0's kernel. The two are connected by a "virtual
crossover cable", implemented using shared memory and event channels.
In the block and USB drivers there is a single descriptor ring (a circular
buffer) in the shared memory page, used to transfer IO requests and
responses between the frontend and backend.
In the network driver there are two rings, for performance reasons: one
carries descriptors for packets to transmit, the other carries descriptors
for received packets.
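To make the shared-page layout more concrete, here is a rough sketch of the
kind of structures involved. The names, fields and sizes are illustrative
only, not the real netif.h definitions:

#include <stdint.h>

#define RING_SIZE 256   /* illustrative; really derived from the page size */

struct tx_descriptor {
    uint32_t page_ref;  /* reference to the machine page holding the packet */
    uint16_t offset;    /* offset of the packet data within that page       */
    uint16_t size;      /* length of the packet in bytes                    */
    uint16_t id;        /* frontend-chosen id, echoed back in the response  */
};

struct rx_descriptor {
    uint32_t page_ref;  /* a donated page / the page flipped to the guest   */
    uint16_t id;
    uint16_t status;    /* filled in by the backend: packet length or error */
};

/* The shared memory region: producer indices for requests (written by the
 * frontend) and responses (written by the backend), plus the two rings. */
struct netif_shared_page {
    uint32_t tx_req_prod, tx_resp_prod;
    uint32_t rx_req_prod, rx_resp_prod;
    struct tx_descriptor tx_ring[RING_SIZE];
    struct rx_descriptor rx_ring[RING_SIZE];
};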
Domain startup:
* The guest's frontend driver participates in a small communications protocol
with Xend and the backend driver. This process creates a region of memory
that is shared between the frontend and backend drivers, plus an associated
event channel between the two domains. The shared memory region contains
the comms rings.
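Very roughly, the frontend side of that setup looks something like the sketch
below. The helpers (alloc_shared_page(), grant_page_to_domain(),
alloc_unbound_evtchn(), send_interface_connect_msg()) are made-up stand-ins
for the real grant-table, event-channel and control-interface calls:

struct netfront_info {
    struct netif_shared_page *shared;  /* the rings from the sketch above   */
    int      ring_ref;                 /* grant reference for that page     */
    int      evtchn;                   /* event-channel port to the backend */
    uint32_t rx_resp_cons;             /* private receive consumer index    */
};

static int netfront_connect(struct netfront_info *np, int backend_domid)
{
    /* 1. Allocate the page that will hold the rings and make it
     *    accessible to the backend domain. */
    np->shared   = alloc_shared_page();
    np->ring_ref = grant_page_to_domain(np->shared, backend_domid);

    /* 2. Allocate an unbound event channel for the backend to bind to. */
    np->evtchn = alloc_unbound_evtchn(backend_domid);

    /* 3. Tell xend / the backend where to find the ring and the event
     *    channel; the backend maps the page, binds its end of the
     *    channel, and acknowledges. */
    return send_interface_connect_msg(backend_domid, np->ring_ref,
                                      np->evtchn);
}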
Runtime:
* Packet transmission from the guest:
- the frontend queues a descriptor in the transmit ring. It then sends an
event on the event channel to let the backend know something has been
queued.
- the backend checks the shared memory region and sees the descriptor. The
descriptor tells it where the packet is in machine memory. The backend maps
the packet into its memory and copies the headers out of the packet. This
small copy is necessary so that the guest can't change the headers during
processing.
- the backend builds a packet using the copied headers but directly
referencing the data, so that the data doesn't have to be copied
- the backend stuffs this packet into dom0's networking code, where it gets
routed / bridged appropriately
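In code, the frontend half of that transmit path boils down to something like
the following. Again this is a simplified sketch reusing the made-up
structures above; notify_backend() stands in for the real event-channel
notification:

static void netfront_start_xmit(struct netfront_info *np,
                                uint32_t packet_page_ref,
                                uint16_t offset, uint16_t len)
{
    struct netif_shared_page *s = np->shared;
    struct tx_descriptor *txd = &s->tx_ring[s->tx_req_prod % RING_SIZE];

    /* Describe where the packet lives; the backend maps this page,
     * copies only the headers, and references the data directly. */
    txd->page_ref = packet_page_ref;
    txd->offset   = offset;
    txd->size     = len;
    txd->id       = (uint16_t)s->tx_req_prod;

    /* Publish the request, then kick the backend.  A real driver needs
     * a memory barrier between these two steps, and must check that the
     * ring isn't full before queueing. */
    s->tx_req_prod++;
    notify_backend(np->evtchn);
}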
* Packet reception by a guest:
- in advance of packets arriving for it, the guest must give some pages back
to the hypervisor. It queues a descriptor in the receive ring for each of
these pages.
- packets are received first into dom0's memory, because it is not known
which domain they are destined for
- instead of copying the packet into the guest's memory, the page containing
it is "flipped" to the guest by transferring ownership of the page. Details
of this page are queued in the receive ring for the guest and the guest is
notified using the event channel.
- the frontend in the guest passes the packet up to the Linux network stack.
Every so often, it must return more pages to Xen to compensate for the pages
it receives from dom0.
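Sketching the frontend side of the receive path with the same made-up
helpers (give_page_to_hypervisor(), map_flipped_page() and
deliver_to_network_stack() are illustrative, not real functions):

/* Donate 'count' pages in advance so dom0 has something to flip
 * received packets into. */
static void netfront_post_rx_buffers(struct netfront_info *np, int count)
{
    struct netif_shared_page *s = np->shared;

    while (count--) {
        struct rx_descriptor *rxd = &s->rx_ring[s->rx_req_prod % RING_SIZE];
        rxd->page_ref = give_page_to_hypervisor();
        rxd->id       = (uint16_t)s->rx_req_prod;
        s->rx_req_prod++;
    }
    notify_backend(np->evtchn);
}

/* Called when the backend signals the event channel: consume any
 * responses, i.e. pages that have been flipped into this domain. */
static void netfront_rx_interrupt(struct netfront_info *np)
{
    struct netif_shared_page *s = np->shared;
    int consumed = 0;

    while (np->rx_resp_cons != s->rx_resp_prod) {
        struct rx_descriptor *rxd =
            &s->rx_ring[np->rx_resp_cons % RING_SIZE];

        /* The page now belongs to this guest: map it and hand the
         * packet up to the Linux network stack. */
        void *pkt = map_flipped_page(rxd->page_ref);
        deliver_to_network_stack(pkt, rxd->status /* packet length */);

        np->rx_resp_cons++;
        consumed++;
    }

    /* Replace the pages we just consumed so future packets have
     * somewhere to land. */
    netfront_post_rx_buffers(np, consumed);
}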
This might not make too much sense on its own - you'll have to read it in
conjunction with the source code and the paper referenced above. You might
also like to read docs/misc/blkif-drivers-explained.txt, which is a detailed
description by Andy Warfield of how the block interfaces work.
HTH,
Mark