
Re: [Xen-devel] [PATCH] xenbus: Fix loopback event channel assuming domain 0



On Thu, 2011-10-06 at 19:32 +0100, Daniel De Graaf wrote:
> On 10/06/2011 01:53 PM, Ian Jackson wrote:
> > Daniel De Graaf writes ("[Xen-devel] [PATCH] xenbus: Fix loopback event 
> > channel assuming domain 0"):
> >> The xenbus event channel established in xenbus_init is intended to be a
> >> loopback channel, but the remote domain was hardcoded to 0; this will
> >> cause the channel to be unusable when xenstore is not being run in
> >> domain 0.
> > 
> > I'm not sure I understand this.
> > 
> > ...
> >>            /* Next allocate a local port which xenstored can bind to */
> >>            alloc_unbound.dom        = DOMID_SELF;
> >> -          alloc_unbound.remote_dom = 0;
> >> +          alloc_unbound.remote_dom = DOMID_SELF;
> > 
> > The comment doesn't suggest that this is supposedly a loopback channel
> > (ie one for use by the xenbus client for signalling to itself,
> > somehow).
> 
> The event channel being changed here is a loopback event channel exposed in
> /proc/xen/xsd_port, which xenstored binds to. This code is only used for the
> initial domain; otherwise, the shared info page is used.

How does this change impact the regular dom0 case? It will be expecting
xenstored to start up locally, when in reality it needs to wait for
another domain and then connect to that.

Diego Ongaro did some work several years ago on this issue, it was most
recently re-posted by Alex Zeffert, patches against xen-unstable and the
linux-2.6.18 tree:
http://lists.xensource.com/archives/html/xen-devel/2009-03/msg01484.html
http://lists.xensource.com/archives/html/xen-devel/2009-03/msg01488.html

Part of the trick was to fix up the kernel side in a way that was
compatible with existing Xen releases while also supporting new
releases capable of both stub and non-stub xenstore. To do this Diego
set up lazy xenbus initialisation, with a state machine to track which
case was active. Transitions were triggered either by a local mmap of
/proc/xen/xenbus_xsd (so it remained backwards compatible) or by an
ioctl, called by the tool which builds the stub domain, telling the
dom0 xenbus code which domain/mfn/evtchn contains xenstored and dom0's
connection to it (the patchset includes a cut-down domain builder which
works without xenstore).

http://lists.xensource.com/archives/html/xen-devel/2009-03/msg01487.html
is the key kernel side patch in that regard.

Diego did some initial work with xenstored in a Linux domU, but I think
the final patchset was stub-xenstored (i.e. ocaml xenstored on minios,
possibly C xenstored on minios too), so I'm not sure about the xenstored
in Linux domU use case.

Some of the more trivial bits of this series were committed but the real
meat wasn't really pushed through.

> 
> > Rather it's supposed to be a channel to xenstore.  So the remote
> > domain should be the xenstore domain, which should come from the
> > shared info page.
> > 
> > Have you actually tested this with a separate xenstored domain ?
> > 
> > Ian.
> > 
> 
> The test setup that exposed this issue is having a non-dom0 Linux domain
> running xenstored (in addition to other services); this domain is started
> with the SIF_INITDOMAIN flag set. It has been tested and can start other
> domains with references back to the xenstored running there; the local
> kernel is able to communicate with the locally running xenstore to provide
> backend services.
> 
> The test for xen_initial_domain() here might be better replaced with a
> check on xen_start_info->store_evtchn, which should be a valid event
> channel in all domains except the one running xenstored. That seems a
> more elegant solution than relying on the SIF_INITDOMAIN flag to
> determine the location of xenstore.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
