
Re: [Xen-devel] On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)



On Tue, 2015-09-08 at 18:38 +0000, Antti Kantee wrote:
> On 08/09/15 16:15, Ian Campbell wrote:
> > On Tue, 2015-09-08 at 15:03 +0000, Antti Kantee wrote:
> > 
> > > For unikernels, the rump kernel project provides Rumprun, which can
> > > provide you with a near-full POSIX'y interface.
> > 
> > I'm not 100% clear: Does rumprun _build_ or _run_ the application? It
> > sounds like it builds but the name suggests otherwise.
> 
> For all practical purposes, Rumprun is an OS, except that you always 
> cross-compile for it.  So, I'd say "yes", but it depends on how you want 
> to interpret the situation.  We could spend days writing emails back and 
> forth, but there's really no substitute for an hour of hands-on 
> experimentation.
> 
> (nb. the launch tool for launching Rumprun instances is currently called 
> rumprun.  It's on my todo list to propose changing the name of the tool 
> to e.g. rumprunner or runrump or something which is distinct from the OS 
> name, since similarity causes some confusion)

Thanks, I think I get it...


> > Do these wrappers make a rump kernel build target look just like any
> > other cross build target? (I've just got to the end and found my
> > answer, which was yes. I've left this next section in since I think
> > it's a nice summary of why it matters that the answer is yes)
> > 
> > e.g. I have aarch64-linux-gnu-{gcc,as,ld,ar,etc} which I can use to
> > build aarch64 binaries on my x86_64 host, including picking up aarch64
> > libraries and headers from the correct arch specific path.
> > 
> > Do these rumprun-provided wrappers provide
> > x86_64-rumpkernel-{gcc,as,ld,ar,etc} ?
> 
> No, like I said and which you discovered later, 
> x86_64-rumprun-netbsd-{gcc,as,ld,ar,etc}.  aarch64 would be 
> aarch64-rumprun-netbsd-{...}.

Sorry, I used an explicit example when really I just meant "some triplet"
without saying "such as" or "e.g.".

So the answer to the question I wanted to ask (rather than the one I did)
is "yes", which is good!

> > > If the above didn't explain the grand scheme of things clearly, have a
> > > look at http://wiki.rumpkernel.org/Repo and especially the picture.  If
> > > things are still not clear after that, please point out matters of
> > > confusion and I will try to improve the explanations.
> > 
> > I think that wiki page is clear, but I think it's orthogonal to the issue
> > with distro packaging of rump kernels.
> 
> Sure, but I wanted to get the concepts right.  And they're still not 
> right.  We're talking about packaging for *Rumprun*, not rump kernels in 
> general.

Right.

> > >    However, since a) nobody (else) ships applications as relocatable
> > > static objects b) Rumprun does not support shared libraries, I don't
> > > know how helpful the fact of ABI compatibility is.  IMO, adding
> > > shared library support would be a backwards way to go: increasing
> > > runtime processing and memory requirements to solve a build problem
> > > sounds plain weird.  So, I don't think you can leverage anything
> > > existing.
> > 
> > This is an interesting point, since not building a shared library is
> > already therefore requiring packaging changes which are going to be at
> > least a little bit rumpkernel specific.
> > 
> > Is it at all possible (even theoretically) to take a shared library
> > (which is relocatable as required) and to do a compile time static
> > linking pass on it? i.e. use libfoo.so but still do static linking?
> 
> But shared libraries aren't "relocatable", that's the whole point of 
> shared libraries! ;) ;)

Hrm, perhaps I'm confusing PIC with relocatable but AIUI a shared library
can be loaded at any address (subject to some constraints) in a process and
may be loaded at different addresses in different processes, which is what
you actually need to do...
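
(As a terminology sanity check, on any ELF system something like the
following shows the distinction; the file names here are purely
illustrative:

    readelf -h main.o    | grep Type   # REL (Relocatable file)
    readelf -h libfoo.so | grep Type   # DYN (Shared object file)
    ld -r -o app.o main.o foo.o        # a relocatable link, merging objects into one

which is the sense in which I understand Rumprun's "relocatable static
objects".)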

> I guess you could theoretically link shared libs with a different ld, 

...this. (assuming you meant "link an app against shared libs")

> and I don't think it would be very different from prelinking shared 
> libs,

Indeed.

>  but as Samuel demonstrated, it won't work at least with an 
> out-of-the-box ld.

Right, I thought it probably wasn't, which is why I said "even
theoretically".

> I think it's easier to blame Solaris for the world going bonkers with 
> shared libs, bite the bullet, and start adding static linking back where 
> it's been ripped out from.  Shared libs make zero sense for unikernels 
> since you don't have anyone to share them with, so you're just paying 
> extra for PIC for absolutely no return.  (dynamically loadable code is a 
> separate issue, if you even want to go there ... I wouldn't)

The issue, and the reason I mentioned it, is that distros (at least Linux
distros) have, for better or worse, gone in heavily for the use of shared
libraries in their application packaging norms.

Actually distros might be (e.g. Debian is) quite good at always providing a
.a as well as the .so when packaging libraries; the issue is in the
application packaging, which would need modifying to provide a mode where
it is linked against those .a files instead of the .so files.

Since that only affects the final application, maybe it's a more tractable
problem in terms of having to modify distro packaging, since you don't need
to follow the build-dep chain at all.
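
(By way of illustration only: the usual way to statically link one
particular library into an application, while leaving everything else
dynamic, is a per-application link flag along the lines of

    gcc -o myapp main.o -Wl,-Bstatic -lfoo -Wl,-Bdynamic

so it is the application package's link step which needs the change, not
the library packages.)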

> > Debian are (most likely) not going to accept a second copy of the QEMU
> > source in the archive and likewise they wouldn't want a big source
> > package which was "qemu + all its build dependencies" or anything like
> > that, especially when "all its build dependencies" is duplicating the
> > source of dozens of libraries already in Debian.
> 
> Why do you need a second copy of the sources?  Or are sources always 
> strictly associated with one package without any chances of pulling from 
> a master package?

A given source package (e.g. "qemu.dsc", which incorporates some upstream
qemu.orig.tar.gz and the Debian packaging changes) might build many
different binary packages (.deb files), perhaps including building multiple
times in different configurations, but the upstream source should only
exist once in the archive as part of that source package.
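
(For illustration, the shape of a source package's debian/control is
roughly as below; the stanza contents are invented rather than Debian's
actual qemu packaging:

    Source: qemu
    Build-Depends: libglib2.0-dev, zlib1g-dev

    Package: qemu-system-x86
    Architecture: amd64
    Description: QEMU full system emulation (x86)

    Package: qemu-utils
    Architecture: any
    Description: QEMU utilities

i.e. one Source stanza and one upstream tarball, however many Package
stanzas there are.)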

IOW it would not be allowed to create qemu-rumpkernel.dsc which
incorporates another copy of the upstream qemu source (even a different
version). (There are some situations, e.g. multiple incompatible versions
of libraries, where this might be tolerated, but they do not apply to the
situation here).

When a package is being built it can depend on the binary outputs of other
builds, but it cannot depend on the _source_ of another package.

There are a small number of packages where foo.dsc builds a foo-source.deb
file which contains a tarball of the source, but those are quite
specialised uses, e.g. the Linux package does so to facilitate people
building their own kernel and the binutils package does so to facilitate
people building cross toolchains for architectures not in Debian. It would
not be possible, in general, to apply this to e.g. the library dependency
chain of QEMU.

Also note that the artifact built from such a source package is generally a
local build of something, not another package contained in the Debian
archive. I'm not sure what the formal policy status of uploading such a
thing is, but it would certainly be considered exceptional.

If another package ("bar") were to build-depend on foo-source.deb and use
the source in there, then whenever foo-source was updated "bar" would need
rebuilding and re-uploading against it.

>   You are going to need two copies of the binaries 
> anyway, so it doesn't seem like a particularly big deal to me, not that 
> I'm questioning your statement.

Hopefully the above has clarified a bit why it is a big deal to (binary)
distros?

> 
> > > If I were you folks, I'd start by getting qemu out of the door
> > 
> > That's certainly the highest priority, at the moment I don't think we
> > actually have a QEMU Xen dm based on rumpkernels which anyone could
> > package anyway, irrespective of how hard that might be.
> > 
> > > , and
> > > worry about generalized solutions when someone wants to ship the
> > > second unikernel (or third or any small N>1).
> > 
> > Unfortunately I think the N==1 case is tricky already from a distro
> > acceptance PoV. (At least for Binary distros, it's probably trivial in
> > a Source based distro like Gentoo)
> 
> Ok.  I'll help where I can, but I don't think I can be the primus motor 
> for solving the distro acceptance problem for Xen stubdomains.

Sure, I wasn't expecting you to be (sorry if that wasn't clear!)

> If you can say to the packaging system "build with this cross toolchain 
> but disable shared" you're already quite far along,

Sadly I think "but disable shared" is one of those things which doesn't
exist today for most binary distros and which would therefore be a
potential problem.

Considering libraries first, as I mention above at least in Debian the norm
is to build both .a and .so versions of the library. Is the
x86_64-rumprun-netbsd toolchain capable of building a (perhaps pointless)
.so file? If so then we can just ignore the "but disable shared" and do the
normal package build to get a .a (which we want) and a .so (which is
useless).

Then, considering applications, is it necessary to explicitly disable
shared when using the x86_64-rumprun-netbsd toolchain, or will
    x86_64-rumprun-netbsd-gcc -o myapp main.o -lfoo
do the right thing and use libfoo.a instead of libfoo.so without needing
e.g. -static?
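
(If it isn't obvious from the toolchain docs, something like the following
would answer both questions empirically; the library names are
illustrative, and ld's --trace option just prints the archives/shared
objects it actually pulls in:

    x86_64-rumprun-netbsd-gcc -shared -fPIC -o libfoo.so foo.c        # does a .so build at all?
    x86_64-rumprun-netbsd-gcc -o myapp main.o -L. -lfoo -Wl,--trace   # was libfoo.a or libfoo.so used?

)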

If both of those behave in the way I hope then I think we are actually
pretty darn close to being able to try something.

>  and it seems like 
> something that shouldn't be too difficult to get reasonable packaging 
> systems to support.  But, details, details.  One major detail is that 
> your target is quite wide, and not everyone along that target can be 
> assumed to be reasonable :/

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

