
Re: [Xen-devel] [XTF PATCH v2] xtf-runner: use xl create -Fc directly



On 08/08/16 14:13, Wei Liu wrote:
On Mon, Aug 08, 2016 at 02:06:37PM +0100, Andrew Cooper wrote:
On 08/08/16 12:24, Wei Liu wrote:
Now that xl create -c is fixed in xen-unstable, there is no need to keep
the hack to get guest console output anymore.

Use xl create -Fc directly, then wait for the xl process to exit.  Print
any error as it occurs.

Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Sadly, now that I think about this further, it does re-introduce the
serialisation problem I was specifically trying to avoid.

Can you give an example of the race you wanted to avoid?

I thought with the xenconsole work in place I had solved all races I was
aware of, but maybe I missed something obvious.

It isn't a race.  I do not want to synchronously wait for the teardown of domains to complete, as that adds unnecessary latency when running more than a single test.


You need to run `xl create -F` so you can sensibly wait on the create
list to avoid tripping up the leak detection.

However, the guest.communicate() call will wait for the guest process to
terminate, which includes all output.

Is there a problem with that?

It makes your "for child in child_list" loop useless, as all processes are guaranteed to have exited already.
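A minimal illustration of this point (my own sketch, not actual xtf-runner code): once `communicate()` returns, the child has already exited, so a subsequent wait loop over the process list finds nothing left to reap.

```python
import subprocess
import sys

# Spawn a trivial child process (a Python one-liner, standing in for xl).
child = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                         stdout=subprocess.PIPE)

out, _ = child.communicate()       # blocks until the child terminates
print(out.decode().strip())        # all of its output has been collected
print(child.poll())                # returncode is already set; nothing to wait for
```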


Therefore, I think we still need the `xl create -Fp`, `xl console`, `xl
unpause` dance, where the create process gets put on the create_list,
and it is the console process which gets communicated with.
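For concreteness, a rough Python sketch of that dance (the function names and the `create_list` handling here are my own illustration based on this thread, not the actual xtf-runner code):

```python
import subprocess

def dance_cmds(cfg_path, dom_name):
    """Build the three xl invocations: create paused, attach console, unpause."""
    return [
        ["xl", "create", "-Fp", cfg_path],  # foreground, guest starts paused
        ["xl", "console", dom_name],        # attach before the guest can run
        ["xl", "unpause", dom_name],        # now let the guest produce output
    ]

def run_dance(cfg_path, dom_name, create_list):
    create, console, unpause = dance_cmds(cfg_path, dom_name)
    create_proc = subprocess.Popen(create)            # goes on the create_list
    create_list.append(create_proc)
    console_proc = subprocess.Popen(console, stdout=subprocess.PIPE)
    subprocess.check_call(unpause)
    # Communicate with the console process, not the create process, so the
    # create processes can be reaped later without serialising teardown.
    output, _ = console_proc.communicate()
    return output
```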

This also has the advantage that it doesn't cause ./xtf-runner to break
against all non-staging trees.

I thought we decided to grep the log file for that?

Right, but until that happens, this patch constitutes a functional regression.

~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

