
[Xen-devel] Use of watch_pipe in xs_handle structure


  • To: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • From: Chris Takemura <ctakemura@xxxxxxxxxxx>
  • Date: Thu, 13 Feb 2014 18:09:33 -0800
  • Accept-language: en-US
  • Delivery-date: Fri, 14 Feb 2014 02:09:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac8pKc3FkPbfL7hwTeS1yLb6lboPTw==
  • Thread-topic: Use of watch_pipe in xs_handle structure

Hi,

This message was also posted to the qemu-devel list, but I didn't get any
reply, and it occurred to me that it might make more sense here.  Sorry if
you're reading it twice.

Anyway, I'm trying to debug a problem that causes qemu-dm to lock up with
Xen HVM domains.  We're using the qemu version that came with Xen 3.4.2.
I know it's old, but we're stuck with it for a little while yet.

I think the hang is related to thread synchronization and the xenstore,
but I'm not sure how it all fits together. In particular, I don't
understand the lines in xs.c that handle the watch_pipe, e.g.:

        /* Kick users out of their select() loop. */
        if (list_empty(&h->watch_list) &&
            (h->watch_pipe[1] != -1))
            while (write(h->watch_pipe[1], body, 1) != 1)
                continue;


It looks to me like the other thread blocks while reading from the pipe,
and this write lets it continue.  But the write seems to do the same thing
as the condvar_signal() call that comes slightly later, so it seems like I
could safely wrap it in #ifndef USE_PTHREAD and compile it out.  Is that
the case?
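
In case it helps to clarify what I mean, here's a rough sketch of the
select()-based usage I have in mind: register a watch, get a descriptor
from xs_fileno() (which, if I'm reading xs.c right, is the read end of
watch_pipe), block in select() on it, and call xs_read_watch() once it
becomes readable.  The watch path and token below are made up, and this
is just my guess at the intended pattern, not code from our tree.

        /*
         * Hypothetical client loop, only to illustrate what I think the
         * "kick users out of their select() loop" comment is about.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/select.h>
        #include <xs.h>        /* libxenstore; xenstore.h on newer trees */

        int main(void)
        {
            struct xs_handle *xsh = xs_daemon_open();
            unsigned int num;
            char **vec;
            fd_set rfds;
            int fd;

            if (!xsh)
                return 1;

            /* Register a watch; the token is just echoed back to us. */
            if (!xs_watch(xsh, "/local/domain/0/example", "mytoken")) {
                xs_daemon_close(xsh);
                return 1;
            }

            /*
             * xs_fileno() returns a descriptor that becomes readable when
             * a watch event has been queued -- the read end of watch_pipe,
             * as far as I can tell, which is the fd the write() in the
             * snippet above is poking.
             */
            fd = xs_fileno(xsh);

            for (;;) {
                FD_ZERO(&rfds);
                FD_SET(fd, &rfds);

                /* Block until the library writes a byte into watch_pipe. */
                if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
                    break;

                if (FD_ISSET(fd, &rfds)) {
                    vec = xs_read_watch(xsh, &num);
                    if (vec) {
                        printf("watch fired: path=%s token=%s\n",
                               vec[XS_WATCH_PATH], vec[XS_WATCH_TOKEN]);
                        free(vec);
                    }
                }
            }

            xs_unwatch(xsh, "/local/domain/0/example", "mytoken");
            xs_daemon_close(xsh);
            return 0;
        }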


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
