
[PATCH v7 02/19] Document ioemu Linux stubdomain protocol

From: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>

Add documentation for upcoming Linux stubdomain for qemu-upstream.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Jason Andryuk <jandryuk@xxxxxxxxx>
Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>

Changes in v6:
 - Add Acked-by: Ian Jackson
 - Replace dmargs with dm-argv for xenstore directory
 - Explain $STUBDOM_RESTORE_INCOMING_ARG for -incoming restore argument
 docs/misc/stubdom.txt | 52 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/docs/misc/stubdom.txt b/docs/misc/stubdom.txt
index 64c77d9b64..c717a95d17 100644
--- a/docs/misc/stubdom.txt
+++ b/docs/misc/stubdom.txt
@@ -75,6 +75,58 @@ Defined commands:
    - "running" - success
+Toolstack to Linux ioemu stubdomain protocol
+--------------------------------------------
+
+This section describes the communication protocol between the toolstack and
+qemu-upstream running in a Linux stubdomain. The protocol includes
+expectations of both the stubdomain and qemu.
+
+Setup (done by toolstack, expected by stubdomain):
+ - Block devices for target domain are connected as PV disks to stubdomain,
+   according to configuration order, starting with xvda
+ - Network devices for target domain are connected as PV nics to stubdomain,
+   according to configuration order, starting with 0
+ - [not implemented] if graphics output is expected, VFB and VKB devices are
+   set for stubdomain (its backend is responsible for exposing them using
+   appropriate protocol like VNC or Spice)
+ - other target domain's devices are not connected at this point to stubdomain
+   (may be hot-plugged later)
+ - QEMU command line is stored in
+   /vm/<target-uuid>/image/dm-argv xenstore dir, each argument as separate key
+   in form /vm/<target-uuid>/image/dm-argv/NNN, where NNN is 0-padded argument
+   number
+ - target domain id is stored in /local/domain/<stubdom-id>/target xenstore
+   path
+ - bios type is stored in /local/domain/<target-id>/hvmloader/bios
+ - stubdomain's console 0 is connected to qemu log file
+ - stubdomain's console 1 is connected to qemu save file (for saving state)
+ - stubdomain's console 2 is connected to qemu save file (for restoring state)
+ - next consoles are connected according to target guest's serial console
+   configuration
+
+Environment exposed by stubdomain to qemu (needed to construct appropriate
+qemu command line and later interact with qmp):
+ - target domain's disks are available as /dev/xvd[a-z]
+ - console 2 (incoming domain state) must be connected to an FD and the command
+   line argument $STUBDOM_RESTORE_INCOMING_ARG must be replaced with fd:$FD to
+   form "-incoming fd:$FD"
+ - console 1 (saving domain state) is added over QMP to qemu as "fdset-id 1"
+   (done by stubdomain, toolstack doesn't need to care about it)
+ - nics are connected to relevant stubdomain PV vifs when available (qemu
+   -netdev should specify ifname= explicitly)
+
+Startup:
+1. toolstack starts PV stubdomain with stubdom-linux-kernel kernel and
+   stubdom-linux-initrd initrd
+2. stubdomain initializes relevant devices
+3. stubdomain starts qemu with the requested command line, plus a few
+   stubdomain-specific ones - including local qmp access options
+4. stubdomain starts vchan server on
+   /local/domain/<stubdom-id>/device-model/<target-id>/qmp-vchan, exposing the
+   qmp socket to the toolstack
+5. qemu signals readiness by writing "running" to the
+   /local/domain/<stubdom-id>/device-model/<target-id>/state xenstore path
+6. now the device model is considered running
+
+QEMU can be controlled using QMP over vchan at
+/local/domain/<stubdom-id>/device-model/<target-id>/qmp-vchan. Only one
+simultaneous connection is supported and the toolstack needs to ensure that.
+
+Limitations:
+ - PCI passthrough requires permissive mode
+ - only one nic is supported
+ - at most 26 emulated disks are supported (more are still available as PV
+   disks)
+ - graphics output (VNC/SDL/Spice) not supported
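The disk-naming convention the document describes (target disks attached in
configuration order, visible inside the stubdomain as /dev/xvd[a-z], at most
26 of them) can be sketched as a small helper. This is an illustrative
sketch, not code from the patch; the function name is hypothetical:

```python
import string

def stub_disk_node(index):
    """Return the PV disk node inside the stubdomain for the target
    domain's index-th disk (configuration order, starting with xvda).

    Per the documented limitation, only single-letter names in the
    xvd[a-z] range exist, so at most 26 disks are addressable here.
    """
    if not 0 <= index < 26:
        raise ValueError("only 26 PV disks (xvda..xvdz) are addressable")
    return "/dev/xvd" + string.ascii_lowercase[index]
```

So the first configured disk maps to /dev/xvda and the 26th to /dev/xvdz,
matching the "starting with xvda" ordering rule above.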

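The dm-argv layout above (one xenstore key per QEMU argument, named NNN with
zero padding so a plain directory listing preserves argument order) can be
sketched as follows. The 3-digit width is an assumption for illustration;
the document only says the argument number is 0-padded:

```python
def dm_argv_entries(argv, width=3):
    """Map a QEMU argv list to (key, value) pairs to be written under
    /vm/<target-uuid>/image/dm-argv/ in xenstore.

    Zero-padded keys sort lexicographically in argument order, so the
    stubdomain can reconstruct the command line from a directory listing.
    The 3-digit width is an assumption, not specified by the document.
    """
    return [("%0*d" % (width, i), arg) for i, arg in enumerate(argv)]
```

For example, a two-argument command line yields keys 000 and 001 under the
dm-argv directory.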

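The $STUBDOM_RESTORE_INCOMING_ARG convention described above (the stubdomain
substitutes the file descriptor of console 2 so that the restore command line
becomes "-incoming fd:$FD") can be sketched as a pure string substitution.
A minimal sketch, with a hypothetical function name:

```python
def fill_restore_incoming(argv, fd):
    """Replace the $STUBDOM_RESTORE_INCOMING_ARG placeholder with fd:<FD>,
    where <FD> is the descriptor the stubdomain opened for console 2
    (the incoming domain state), producing e.g. '-incoming fd:3'.
    """
    return ["fd:%d" % fd if a == "$STUBDOM_RESTORE_INCOMING_ARG" else a
            for a in argv]
```

The toolstack writes the placeholder into dm-argv; only the stubdomain knows
the actual descriptor number, which is why the substitution happens there.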
