
Re: [Xen-devel] qemu-upstream stubdom - status?



On 08/04/14 17:47, Ian Jackson wrote:
> Justin Cormack writes ("Re: qemu-upstream stubdom - status?"):
>> On Tue, Apr 8, 2014 at 6:03 PM, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> wrote:
>>> The big one there is pthreads, but I don't think we actually need
>>> that for qemu-dm, provided we have aio, which it looks like we do?
>>
>> The aio syscalls are not in the rump kernel at present.  They could
>> easily be added, but I'm not sure that signal notification will work
>> in rump, although polling should.  Unfortunately, NetBSD does not
>> have kqueue aio support yet, which would be ideal.
>
> Hmm.  I was going to say that qemu doesn't want signal notification.
> But actually it is doing something with threads.  I think this can
> probably be fixed in qemu.

Yeah, it probably can be fixed in qemu, but I'd still like to have ~generic(*) pthreads available for the rump kernel driver stack. I've started playing with pthreads support. It shouldn't be too difficult in principle, but as usual, the details are everything, and saying that something isn't difficult tends to jinx it... Meanwhile, don't let that stop you from creating a qemu-specific solution; insight from a "meet-in-the-middle" type of implementation attack never hurts.
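
To make "~generic pthreads" a bit more concrete, below is the sort of minimal create/synchronize/join exercise I'd expect the port to have to pass first. It's just a smoke test I cooked up, not anything lifted from qemu:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int done;

static void *
worker(void *arg)
{

    /* flag completion and wake up main */
    pthread_mutex_lock(&mtx);
    done = 1;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&mtx);
    return NULL;
}

int
main(void)
{
    pthread_t pt;

    pthread_create(&pt, NULL, worker, NULL);

    /* explicit blocking point: schedules fine even without preemption */
    pthread_mutex_lock(&mtx);
    while (!done)
        pthread_cond_wait(&cv, &mtx);
    pthread_mutex_unlock(&mtx);

    pthread_join(pt, NULL);
    printf("joined\n");
    return 0;
}

Note that the condvar wait is an explicit blocking point, so it schedules fine even without preemption; the CPU-bound case is the one the footnote below worries about.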

I'll also include the aio syscall driver for the sake of completeness, even if it's not, strictly speaking, required here.
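
For reference, the usage mode I'd expect to work is POSIX aio with polled completion instead of signal delivery, roughly like the sketch below (the file name is a placeholder, and this is not the actual qemu code path):

#include <aio.h>
#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct aiocb cb;
    char buf[512];
    ssize_t n;
    int fd;

    /* "disk.img" is a placeholder, not anything qemu-specific */
    if ((fd = open("disk.img", O_RDONLY)) == -1)
        err(1, "open");

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;
    /* no SIGEV_SIGNAL: completion is polled, not signalled */
    cb.aio_sigevent.sigev_notify = SIGEV_NONE;

    if (aio_read(&cb) == -1)
        err(1, "aio_read");

    /* poll until the request leaves EINPROGRESS */
    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);

    if ((n = aio_return(&cb)) == -1)
        err(1, "aio_return");
    printf("read %zd bytes\n", n);

    close(fd);
    return 0;
}

aio_suspend(3) would avoid the busy-wait, and it doesn't involve signals either; the explicit aio_error() loop above just keeps the polling visible.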

If there are build problems in the future, check http://builds.rumpkernel.org. We usually fix problems flagged there within an hour or two (does not apply to the NetBSD HEAD build, which also depends on the state of NetBSD HEAD itself).

*) In a CPU-bound thread without any I/O blocking points, the lack of a clock-interrupt-like scheduling facility (i.e. preemption) is a bit of a bummer. I'm assuming that's not an issue with qemu's use of threads.
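
To illustrate what I mean: without a clock interrupt, nothing preempts a compute loop, so other threads run only if the loop yields voluntarily. A hypothetical workaround might look like this (the 64k granularity is arbitrary):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static volatile int stop;   /* set by main; volatile is enough for a sketch */

static void *
cruncher(void *arg)
{
    unsigned long iters = 0;

    /* pure computation, no I/O blocking points */
    while (!stop) {
        iters++;
        /* voluntary scheduling point every 64k iterations (arbitrary) */
        if ((iters & 0xffff) == 0)
            sched_yield();
    }
    return NULL;
}

int
main(void)
{
    pthread_t pt;

    pthread_create(&pt, NULL, cruncher, NULL);
    sleep(1);           /* the yields let other threads, like us, run */
    stop = 1;
    pthread_join(pt, NULL);
    printf("cruncher stopped\n");
    return 0;
}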



 

