
Re: [Xen-devel] Upstream QEMU based stubdom and rump kernel



On 19 Mar 2015, at 09:35, Antti Kantee <pooka@xxxxxx> wrote:
> 
> On 19/03/15 08:48, Martin Lucina wrote:
>> By "faking out" Anil means a shim to get existing applications
>> which currently use PF_UNIX (and possibly PF_INET, though that will be
>> harder to fake) to use the hypervisor bus to talk to another colocated
>> unikernel instead.
>> 
>> The motivations for this are:
>> 
>> - Taking the TCP stack out of the picture entirely for intra-unikernel
>>   comms (e.g. PHP unikernel <-> MySQL unikernel). Both of those could
>>   thus be linked without the PF_INET component.
>> - This means that you do not need to set up and manage a TCP network in
>>   your infrastructure for intra-unikernel comms, which is a huge advantage
>>   from an operations point of view.
>> - It also means that unikernels which should not be talking TCP to
>>   anywhere, ever, can't do that.
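
As a strawman for what "faking out" could mean for an existing PF_UNIX
application: an LD_PRELOAD-style interposer that diverts connect() on
registered socket paths onto the hypervisor bus.  hv_bus_connect() below is a
made-up placeholder for whatever vchan/shared-memory transport ends up
underneath, and is assumed to hand back an ordinary file descriptor the
application can keep using -- which is of course where most of the real work
is.

/* Strawman interposer; build with -shared -fPIC, link against -ldl. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Placeholder: resolve sun_path to a colocated domain and open a channel
 * to it, returning an ordinary fd, or -1 if the path is not diverted. */
extern int hv_bus_connect(const char *sun_path);

int connect(int fd, const struct sockaddr *addr, socklen_t len)
{
    static int (*real_connect)(int, const struct sockaddr *, socklen_t);
    if (!real_connect)
        real_connect = (int (*)(int, const struct sockaddr *, socklen_t))
            dlsym(RTLD_NEXT, "connect");

    if (addr && addr->sa_family == AF_UNIX) {
        const struct sockaddr_un *sun = (const struct sockaddr_un *)addr;
        int hvfd = hv_bus_connect(sun->sun_path);
        if (hvfd >= 0) {
            /* Swap the caller's socket for the diverted channel. */
            dup2(hvfd, fd);
            close(hvfd);
            return 0;
        }
    }
    return real_connect(fd, addr, len);
}

The PF_INET case would presumably match on well-known addresses/ports instead
of socket paths, which is part of why it's harder to fake.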
> 
> Aah, ic, you want to do what rumpnet_sockin does, except use the hypervisor 
> bus instead of an external sockets-like networking facility like sockin does.
> 
> rumpnet_sockin was indeed originally developed so that you wouldn't need to 
> include the full TCP/IP stack in a rump kernel, which is nice for scenarios 
> where you want to do networking without configuring anything for each guest 
> instance; running the kernel NFS client in userspace and using the host's 
> network was the original use case.
> 
> Yea, that'll just work on the rump kernel side for PF_INET/PF_INET6 (though 
> you might have to do a bit more handling in your "fake" driver).  Not sure 
> what doing the same for PF_UNIX would entail, if anything special, but only 
> one way to find out.
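
Roughly speaking, the "fake" driver would sit where rumpnet_sockin sits today:
claim PF_INET/PF_INET6 at the socket layer, but forward each user request to
the hypervisor bus instead of to host sockets.  The ops table below is only an
illustration of what such a driver has to fill in -- the real attachment point
in a rump kernel is the same NetBSD domain/protosw machinery that sockin uses,
not these invented names.

#include <stddef.h>
#include <sys/socket.h>

/* Per-socket state: peer domid, vchan/ring handle, pending data, ... */
struct hvbus_sock;

/* Invented names; each operation maps a sockets-layer request onto the
 * hypervisor bus rather than onto a host socket (as sockin would). */
struct hvbus_socket_ops {
    int (*attach)(struct hvbus_sock **sp, int proto);
    int (*bind)(struct hvbus_sock *s, const struct sockaddr *sa);
    int (*listen)(struct hvbus_sock *s, int backlog);
    int (*accept)(struct hvbus_sock *s, struct hvbus_sock **newsp);
    int (*connect)(struct hvbus_sock *s, const struct sockaddr *sa);
    int (*send)(struct hvbus_sock *s, const void *buf, size_t len);
    int (*recv)(struct hvbus_sock *s, void *buf, size_t len);
    int (*detach)(struct hvbus_sock *s);
};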

That's right -- the primary motivation from my end is to short-circuit all the 
unnecessary network stack serialisation and configuration, and end up with a 
very simple data path such as shared memory rings and/or vchan.  The challenge 
is figuring out where to hook in the dynamic lookups required, and what form 
they would take on the coordination bus (XenStore).
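
To make the data-path end concrete, the connecting side could end up looking
something like the sketch below: resolve the service via XenStore, then open a
vchan to the domain that published it.  Only the libxenvchan/libxenstore calls
are real; the /unikernel/... layout is invented purely for illustration, and
where that lookup actually lives is exactly the open question above.

#include <libxenvchan.h>
#include <xenstore.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: find the colocated domain offering "service" and connect to it
 * over vchan.  The XenStore layout here is hypothetical. */
static struct libxenvchan *connect_service(const char *service)
{
    struct xs_handle *xsh = xs_open(0);
    if (!xsh)
        return NULL;

    char path[256];
    unsigned int len;

    /* Hypothetical key written by the listening unikernel. */
    snprintf(path, sizeof(path), "/unikernel/services/%s/server-domid", service);
    char *domid_str = xs_read(xsh, XBT_NULL, path, &len);
    xs_close(xsh);
    if (!domid_str)
        return NULL;

    int server_domid = atoi(domid_str);
    free(domid_str);

    /* libxenvchan picks up the ring refs/event channel the server side
     * published under this same path. */
    snprintf(path, sizeof(path), "/unikernel/services/%s", service);
    return libxenvchan_client_init(NULL, server_domid, path);
}

A libxenvchan_server_init() on the listening side plus libxenvchan_read()/
libxenvchan_write() for the actual data then replaces the whole TCP path for
colocated peers.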

One slight hitch with using XenStore for this is that its permissions model 
isn't quite good enough to build a full Plan9-like interface (where every 
listen is published in a per-VM path and can be written to by a connecting VM).
Dave Scott had some thoughts on how to extend XS with this, but it wouldn't be
a short-term solution for working with existing toolstacks.  One workaround is 
to have a trusted arbiter VM running that would coordinate the establishment of 
connections and hand them off.
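
For reference, publishing a listen with today's interfaces would look roughly
like the sketch below (the /unikernel/... layout is invented; the xs_* calls
are plain libxenstore).  The xs_set_permissions() step is where the model runs
out: you can grant access to a specific, already-known domid, or open the node
to everyone, but not the "any connecting VM, privately" semantics a Plan9-like
interface wants -- hence the arbiter idea.

#include <xenstore.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Sketch: a listening unikernel advertises a service under an invented
 * per-VM path and grants one known peer access to it. */
static bool publish_listen(unsigned int my_domid, unsigned int peer_domid,
                           const char *service)
{
    struct xs_handle *xsh = xs_open(0);
    if (!xsh)
        return false;

    char path[256], val[16];
    snprintf(path, sizeof(path), "/unikernel/%u/listen/%s", my_domid, service);
    snprintf(val, sizeof(val), "%u", my_domid);

    bool ok = xs_write(xsh, XBT_NULL, path, val, strlen(val));

    /* First entry: owner, plus the default (nothing) for unlisted domains;
     * second entry: one specific peer gets read/write.  There is no way to
     * name an as-yet-unknown connecting domain here. */
    struct xs_permissions perms[2] = {
        { .id = my_domid,   .perms = XS_PERM_NONE },
        { .id = peer_domid, .perms = XS_PERM_READ | XS_PERM_WRITE },
    };
    ok = ok && xs_set_permissions(xsh, XBT_NULL, path, perms, 2);

    xs_close(xsh);
    return ok;
}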

-anil



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

