
Re: [Xen-devel] Re: Linux Stubdom Problem



On Fri, 2 Sep 2011, Tim Deegan wrote:
> At 10:32 +0800 on 02 Sep (1314959538), Jiageng Yu wrote:
> > 2011/9/2 Tim Deegan <tim@xxxxxxx>:
> > > I would really rather not have this interface; I don't see why we can't
> > > use grant tables for this.
> > 
> >     In the linux-based stubdom case, we want to keep the hvm guest and its
> > hvmloader unaware of running on a stubdom.
> 
> Why?  HVMloader is already tightly coupled to the hypervisor and the
> toolstack - special cases for stubdoms should be fine.

I think that leaking the implementation details of the device
model into hvmloader should be avoided, but obviously if there are no
alternatives, it can be done.


> > Therefore, we do need a way
> > to map the stubdom's vram pages into the hvm guest transparently.
> 
> I've suggested two so far: have grant mappings done from inside the
> guest, or add a XENMAPSPACE that takes grant IDs.  I think the
> XENMAPSPACE is better; I suspect that save/restore will be easier to get
> right that way.

OK. I think we'll try the other approach first to see if it is easier:
modify the Linux xen-fbfront driver to take a list of pages from the
guest for the vram.


> >    Another idea is to allocate the vram in the hvm guest and have the
> > stubdom map the vram pages into its memory space.
> 
> Sure.  The minios-based stubdoms seem to manage that just fine.  If this
> is really difficult for a linux-based stub domain, then maybe that's a
> reason not to use them.

We could fully re-implement xen-fbfront in userspace inside qemu; at
that point the problem would go away completely.
Rather than duplicating all that code, we'll try to reuse the Linux
xen-fbfront implementation, making sure that xen-fbfront is loaded after
qemu is started and initialized.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

