
Re: [Xen-devel] [XTF PATCH] xtf-runner: fix two synchronisation issues



On 29/07/16 17:18, Ian Jackson wrote:
> The three of us had an IRL conversation.  Here is what I think we
> agreed:
>
> * We intend to make the XTF runner capable of reading
>   xenconsoled-created logfiles.  Both XenRT and osstest configure
>   xenconsoled appropriately.
>
>   The xtf runner will need to
>
>      - on each test, wait for the test domain to shut down, and then
>
>      - look backwards through the xenconsoled logfile for the
>        indication that the domain started (e.g. a banner printed by
>        the domain, or a message from xenconsoled), and parse the
>        relevant output there.  (This is needed because starting the
>        same-named domain multiple times results in the console
>        output being concatenated in xenconsoled's guest logfile.)
>
>      - arrange for the guest to be preserved on crash (at least),
>        so that if the domain crashed we don't risk parsing the
>        output from a previous run.  Instead we can see that it
>        crashed and report that as a failure.

I suppose it is worth noting that this is far and away the easiest
solution (that we can think of) which also works with older versions of
Xen.  A rough sketch of how the runner might do both halves of this is
below.
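
This is only an illustration in Python: the banner string, the logfile
path and the function names are assumptions of mine, not what the
runner actually uses, and the crash check assumes the test's domain
config sets on_crash="preserve" so a crashed domain stays visible in
`xl list'.

    import os
    import subprocess

    # Hypothetical start-of-run marker; the real runner would key off
    # whatever banner the XTF framework prints first.
    BANNER = "--- Xen Test Framework ---"

    def last_run_output(name, logdir="/var/log/xen/console"):
        """Return only the console lines from the most recent run.

        Repeated runs of a same-named domain are concatenated into a
        single xenconsoled logfile, so find the *last* banner and
        discard everything before it.
        """
        path = os.path.join(logdir, "guest-%s.log" % name)
        with open(path) as f:
            lines = f.readlines()
        for i in range(len(lines) - 1, -1, -1):
            if BANNER in lines[i]:
                return lines[i:]
        raise RuntimeError("no banner: domain probably never started")

    def domain_crashed(name):
        """True if the domain sits preserved in the crashed state.

        `xl list' prints a state-flag column ("rbpscd"); 'c' means
        crashed.  Relies on on_crash="preserve" in the domain config.
        """
        out = subprocess.check_output(["xl", "list"]).decode()
        for line in out.splitlines()[1:]:
            cols = line.split()
            if cols and cols[0] == name:
                return "c" in cols[4]
        return False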

>
> * We intend to make `xl create -c' reliably capture all of the
>   output, by (i) starting the domain paused, (ii) spawning
>   xenconsole, (iii) awaiting the startup indication from xenconsole,
>   and (iv) unpausing the domain.  This will be done in xen-unstable.
>
>   I propose the following startup protocol: xl runs
>      xenconsole --startup-notify-fd=FD
>   where FD is the writing end of a pipe.  xl waits for xenconsole to
>   write the byte 0x00 to the FD.  If xenconsole crashes, xl can tell
>   by the EOF on the pipe.  (This approach saves xl from having to
>   wait for SIGCHLD and fd readability at the same time.)
>
>   (The systemd startup notification protocol is too complex to
>   reimplement and would therefore introduce a dependency on
>   libsystemd's sd_notify, which would be awkward.  There is also the
>   upstart SIGSTOP protocol, but it could interact badly with an
>   interactive user who uses ^Z.)
>
>   The xtf runner would also be able to use `xl create -c' and simply
>   expect to see all the console output.
>
>   (We also discussed making xenconsole print something to its stdout
>   or stderr when it has successfully connected, and when it
>   disconnects.  While we're editing xenconsole it would probably be
>   nice to do this, but with our plans it's not needed for XTF.)
>
> As a result, the xtf runner can be used with a default install of
> xen-unstable.  For older versions of Xen it will be necessary to
> reconfigure xenconsoled, if the user wants to get reliable pass/fail
> reports from the xtf runner.

I expect the common use case will be people developing tests against
xen-unstable, with only automated test systems running tests against
older versions.

I don't expect a human will often need to develop tests against older
versions, but if someone does, there is at least a way of doing so.
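
For concreteness, a rough sketch of the xl side of that handshake.  xl
itself is C; this is Python 3 for brevity, and --startup-notify-fd is
the option proposed above, not something xenconsole understands today:

    import os
    import subprocess

    def spawn_console(domid):
        rfd, wfd = os.pipe()

        # Hand xenconsole the write end of the pipe; keep the read
        # end.  Once we close our copy of wfd, an EOF on rfd can only
        # mean xenconsole has exited.
        proc = subprocess.Popen(
            ["xenconsole", "--startup-notify-fd=%d" % wfd, str(domid)],
            pass_fds=(wfd,))
        os.close(wfd)

        # A single 0x00 byte means "connected"; EOF means xenconsole
        # died before connecting.  Either way one blocking read
        # suffices - no juggling SIGCHLD against fd readability.
        ok = os.read(rfd, 1) == b"\0"
        os.close(rfd)
        if not ok:
            proc.wait()
            raise RuntimeError("xenconsole failed before connecting")
        return proc

    # `xl create -c' would create the domain paused, call
    # spawn_console(domid), and unpause only once it returns.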

>
> * We discussed changing xenconsoled to not tear down the guest console
>   until the guest is destroyed, rather than already tearing it down
>   when the guest is shut down.  This is not now needed for the above,
>   but I still think it would be nice.  However it is done, it should
>   arrange that `xl console' doesn't hang waiting for further output
>   from a crashed or shutdown domain (but perhaps should wait for
>   output from a rebooting domain?).  It would probably be a good idea
>   to put this work item in a bucket with `overhaul the console stuff'.

+1.

As for the rest of the email, this matches my understanding from the
conversation.

~Andrew
