
Re: [Xen-devel] XSA-180 follow-up: repurpose xenconsoled for logging



Wei Liu writes ("XSA-180 follow-up: repurpose xenconsoled for logging"):
> XXX DRAFT DRAFT DRAFT XXX

Thanks!

> Per domain logging via xenconsoled
> ==================================
> 
> As of Xen release XXX, xenconsoled is repurposed to handle logging for
> QEMU. Libxenlight will arrange for xenconsoled to create and handle the
> log file. It would be possible to expose API(s) so that users of
> libxenlight can leverage this ability, but that is not currently done.
> 
> Xenstore path and nodes
> -----------------------
> 
> Libxenlight communicates with xenconsoled via a simple xenstore based
> protocol.  All information for a specific domain is stored under
> /libxl/$DOMID/logging. Each log file has its own unique id ($LOGFILEID).

Are these IDs short strings or what ?

> Several xenstore nodes are needed (placed under logging/$LOGFILEID).
> 
>   pipe: the absolute path of the logging pipe
>   file: the absolute path of the file to write to
>   limit: the maximum length of the file before it gets rotated
>   copies: the number of copies to keep
>   state: xenconsoled writes "ready" to this node to indicate readiness

The use of named pipes for rendezvous is a little unusual, but I think
given that xenconsoled is primarily driven by xenstore it is probably
appropriate.
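
For concreteness, I imagine a single entry would end up looking
something like this (the domid, the $LOGFILEID and all the values
below are made up):

    /libxl/42/logging/qemu-dm-0/pipe   = "/var/run/xen/42-qemu-dm-0.fifo"
    /libxl/42/logging/qemu-dm-0/file   = "/var/log/xen/qemu-dm-guest.log"
    /libxl/42/logging/qemu-dm-0/limit  = "1048576"
    /libxl/42/logging/qemu-dm-0/copies = "5"
    /libxl/42/logging/qemu-dm-0/state  = "ready"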

> Xenconsoled will sanitise both the pipe and file fields. The pipe has
> to be placed under XEN_RUN_DIR and the file under /var/log/xen (XXX:
> this doesn't seem to be configurable at the moment; should we
> introduce XEN_LOG_DIR?).

Yes.
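
The sanity check presumably wants to be a strict prefix test plus a
refusal of ".." components.  A minimal sketch, with invented names and
deliberately over-strict about "..":

    #include <string.h>

    /* Return 1 iff path lives strictly under required_prefix and
     * contains no ".." anywhere. */
    static int path_ok(const char *path, const char *required_prefix)
    {
        size_t pl = strlen(required_prefix);

        if (strncmp(path, required_prefix, pl) != 0) return 0;
        if (path[pl] != '/') return 0;
        if (strstr(path, "..")) return 0;
        return 1;
    }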

NB the pipe has to be mode 600.  Also, opening a fifo has to be done
with care.  For example, the following dance is needed to get a
write-only fd without risking blocking:

   /* Opening the fifo O_RDWR first guarantees there is a reader, so
      the O_WRONLY open below cannot block. */
   fdrw = open("/var/run/xen/DOMAIN.qemu.fifo", O_RDWR);
   fdwo = open("/var/run/xen/DOMAIN.qemu.fifo", O_WRONLY);
   close(fdrw);

But I think qemu should be provided with an O_RDWR fd.  This is because
otherwise, if xenconsoled crashes or is restarted or something, qemu
would get a SIGPIPE (or EPIPE) from writes to its stderr.

It is better for a qemu in this situation to block waiting for a new
xenconsoled to appear.
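
On the libxl side that would be roughly the following, just before
exec'ing qemu (a sketch only: the path is as in the example above and
error handling is omitted):

    #include <fcntl.h>
    #include <unistd.h>

    static void qemu_stderr_to_fifo(void)
    {
        int fd = open("/var/run/xen/DOMAIN.qemu.fifo", O_RDWR);
        if (fd < 0) return;              /* real code would log and bail */
        dup2(fd, STDERR_FILENO);
        if (fd != STDERR_FILENO) close(fd);
        /* Because the fd is O_RDWR it always counts as a reader, so
         * writes never raise SIGPIPE/EPIPE; if xenconsoled goes away
         * they simply block once the fifo buffer fills, until a new
         * reader drains it. */
    }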

> 1. Libxenlight:
>   1. Generates a unique log file id $LOGFILEID
>   2. Creates a pipe $PIPE
>   3. Writes the parameters to xenstore
>   4. Waits for the readiness indication
> 2. Xenconsoled:
>   1. Watches the global logging and per-domain logging xenstore paths
>   2. Gets notified, reads the parameters from xenstore
>   3. Sanitises the parameters
>   4. Creates the log files
>   5. Connects to the pipe provided
>   6. Writes "ready" to the xenstore state node

Is it actually necessary to synchronise ?  If xenconsoled is slow, the
approach above will simply block qemu's writes once the pipe buffer is
full, until xenconsoled catches up.

If xenconsoled is missing, the lack of synchronisation will result in
qemu blocking, perhaps tripping the domain startup qemu readiness
timeout.
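
For what it's worth, I imagine the xenconsoled side ends up roughly
like this (a sketch only: names are invented, error handling and the
actual copy loop are omitted, and the watch on /libxl is assumed to
have fired and identified the $LOGFILEID directory already):

    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    /* Handle one logging/$LOGFILEID directory once libxl has written
     * its nodes. */
    static void handle_one_logfile(struct xs_handle *xs, const char *dir)
    {
        unsigned int len;
        char node[256];

        snprintf(node, sizeof(node), "%s/pipe", dir);
        char *pipe_path = xs_read(xs, XBT_NULL, node, &len);
        snprintf(node, sizeof(node), "%s/file", dir);
        char *file_path = xs_read(xs, XBT_NULL, node, &len);

        if (pipe_path && file_path) {
            /* Sanitise both paths (e.g. with a prefix check like
             * path_ok() above), create the log file, open the fifo,
             * start copying, and only then signal readiness: */
            snprintf(node, sizeof(node), "%s/state", dir);
            xs_write(xs, XBT_NULL, node, "ready", 5);
        }
        free(pipe_path);
        free(file_path);
    }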

> 3. Libxenlight:
>   1. Detects the ready state from xenconsoled
>   2. Opens the pipe and returns relevant handles to the user
> 
> In case of xenconsoled failure, libxenlight will time out and bail.
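
Presumably that wait-with-timeout is something like the following (a
sketch only; the token and the 10s figure are made up):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/select.h>
    #include <xenstore.h>

    static bool wait_for_ready(struct xs_handle *xs, const char *state_path)
    {
        unsigned int len;
        bool ok = false;

        xs_watch(xs, state_path, "logready");
        for (;;) {
            char *val = xs_read(xs, XBT_NULL, state_path, &len);
            if (val && !strcmp(val, "ready")) { free(val); ok = true; break; }
            free(val);

            fd_set fds;
            struct timeval tv = { .tv_sec = 10, .tv_usec = 0 };
            FD_ZERO(&fds);
            FD_SET(xs_fileno(xs), &fds);
            if (select(xs_fileno(xs) + 1, &fds, NULL, NULL, &tv) <= 0)
                break;                          /* timed out: bail */
            free(xs_read_watch(xs, &len));      /* consume the event */
        }
        xs_unwatch(xs, state_path, "logready");
        return ok;
    }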

Would it be possible for a user to connect to xenconsoled and get a
copy of the qemu output with a suitable `xl console' rune ?

> Clean up
> --------
> 
> When doing per-domain logging, libxenlight will remove all the
> domain-specific xenstore paths when a guest is gone. Xenconsoled uses
> that as its cue to clean up.
> 
> Libxenlight is responsible for deleting the pipe.
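
(I take it the way xenconsoled notices is that its watch fires and a
subsequent read of the entry fails -- roughly, with invented names:)

    #include <stdbool.h>
    #include <stdlib.h>
    #include <xenstore.h>

    /* Called from the watch handler for a tracked $LOGFILEID entry;
     * a failed read means libxl has torn the nodes down. */
    static bool entry_still_present(struct xs_handle *xs, const char *dir)
    {
        unsigned int len;
        void *v = xs_read(xs, XBT_NULL, dir, &len);
        if (!v)
            return false;   /* gone: close the log file and fifo fds */
        free(v);
        return true;
    }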

The logfiles will presumably remain.

> Global logging
> --------------
> 
> Since we don't plan to provide new APIs now, we don't support global
> logging because that would require us to provide a cleanup API that
> libxenlight users can call.

I don't understand what you mean by `global logging'.

Ian.


 

