
[Xen-devel] Re: Re-using the x86_emulate_memop() to perform MMIO for HVM.



Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> wrote on 05/04/2006 04:10:06 AM:

>
> On 3 May 2006, at 15:50, Petersson, Mats wrote:
>
> > A third, easier, but less pleasing way to solve it would be to retain
> > the current two decode/emulate code-paths, and just add everything
> > twice when new scenarios need to be decoded - I don't quite like this
> > idea, but it certainly is the least amount of effort to implement!
> >
> > Thoughts and comments (aside from the obvious "You should have thought
> > about this earlier!" ;-) would be welcome...
>
> We need an emulator both in Xen and in the device model. The current
> split decode-emulate is pretty barking. My plan for now would be to
> copy x86_emulate.c and plumb it into qemu-dm: so we do duplicate the
> code but it's actually only one source file to maintain.

Would this be sufficient to support real mode in qemu-dm with
x86_emulate plumbed into it (at least for Intel)?
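
Just so I understand the plumbing: I imagine qemu-dm would supply its
own memory callbacks to the copied emulator, roughly along these lines
(names and signatures are approximate, not the exact x86_emulate.h
interface, and the qemu wrappers are only a guess at where it would
hook in):

/* Assumes the copied x86_emulate.[ch] plus qemu's physical-memory helpers. */

static int qemu_read_emulated(unsigned long addr, unsigned long *val,
                              unsigned int bytes,
                              struct x86_emulate_ctxt *ctxt)
{
    *val = 0;
    /* route the access through qemu-dm's existing MMIO dispatch */
    cpu_physical_memory_read(addr, (uint8_t *)val, bytes);
    return 0;                                 /* success */
}

static int qemu_write_emulated(unsigned long addr, unsigned long val,
                               unsigned int bytes,
                               struct x86_emulate_ctxt *ctxt)
{
    cpu_physical_memory_write(addr, (uint8_t *)&val, bytes);
    return 0;
}

/* Called when Xen hands us a faulting MMIO access plus register state. */
void qemu_handle_mmio(struct cpu_user_regs *regs)
{
    struct x86_emulate_ctxt ctxt = { .regs = regs };
    struct x86_emulate_ops  ops  = {
        .read_emulated  = qemu_read_emulated,
        .write_emulated = qemu_write_emulated,
        /* .insn_fetch, .cmpxchg_emulated, etc. as needed */
    };

    x86_emulate_memop(&ctxt, &ops);    /* the copied emulator's entry point */
}

If that is the shape of it, real mode looks like mostly a question of
how much of the decoder the copy brings along with it.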

>
> So, Xen would take the page fault and look up the corresponding mmio
> area. If it's an area corresponding to a device emulated by qemu-dm
> then Xen hands off the entire problem. It does not bother to decode the
> instruction at all. Instead it stuffs register state into the shared
> memory area and hands off to qemu-dm in dom0.

Sounds like a good thing.  The path length for MMIO would be more or
less the same as it is now, since most of the cost is the upcall into
qemu-dm, which is there anyway.

This would place an additional burden on dom0 until all of this
is moved into a mini-guest.
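
To make the handoff concrete, I picture the shared page carrying
something like the record below (illustrative only -- this is not the
real ioreq layout from the public headers, and the helpers are made
up):

/* What crosses the shared page under the "no decode in Xen" scheme:
 * the faulting address plus the full, undecoded register state, and a
 * state field used to pass ownership back and forth. */
struct mmio_handoff {
    uint64_t              gpa;      /* guest-physical address that faulted */
    struct cpu_user_regs  regs;     /* full register state, not decoded */
    uint8_t               state;    /* e.g. REQ_READY / RESP_READY */
};

/* In Xen, on an MMIO fault destined for qemu-dm: */
static void send_mmio_to_dm(struct vcpu *v, uint64_t gpa)
{
    struct mmio_handoff *req = get_shared_iopage(v);   /* made-up helper */

    req->gpa  = gpa;
    req->regs = *guest_cpu_user_regs();
    wmb();                          /* request visible before the flag flips */
    req->state = REQ_READY;
    notify_dm_via_evtchn(v);        /* made-up: kick qemu-dm in dom0 */
    /* the vcpu then blocks until qemu-dm sets state to RESP_READY and
     * writes back any register changes */
}

So the only per-access work added in Xen is copying the register frame;
the decode itself moves entirely to dom0.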

I guess the *PIC MMIO and Xen-emulated device I/O would remain in the
hypervisor and use the x86_emulate copy there, so there's no change
for those...
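
In other words, I'd expect the fault path in Xen to end up as a simple
dispatch along these lines (helper names are made up, just to show the
split):

/* Devices Xen emulates itself (local APIC, IO-APIC/PIC, PIT) keep
 * using the in-hypervisor x86_emulate copy; everything else is
 * forwarded to qemu-dm without being decoded. */
void hvm_mmio_fault(struct vcpu *v, uint64_t gpa,
                    struct cpu_user_regs *regs)
{
    if ( xen_emulated_mmio_range(gpa) )   /* made-up: vlapic/vioapic/PIT ranges */
        emulate_in_xen(v, gpa, regs);     /* Xen's x86_emulate copy, as today */
    else
        send_mmio_to_dm(v, gpa);          /* shared-page handoff as sketched above */
}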

Regards,
Khoa


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

