
[Xen-devel] Re-using the x86_emulate_memop() to perform MMIO for HVM.



As stated earlier, I started looking at using x86_emulate_memop() to
support HVM's memory-mapped I/O emulation - since that would remove
the need to have two x86 instruction decode paths, and a whole heap of
other semi-duplicated code. Since these are fairly major chunks of code
in each place (not to mention that neither is entirely trivial to
understand), it would be a "Good Thing(tm)" to combine
those bits of code. 

I got as far as being able to clear the screen from the BIOS, but then I
ran into a bit of a problem: the MMIO request that goes to QEMU needs to
be ONE "atomic" operation, because when we send the request to QEMU, Xen
schedules away and eventually comes back in xxx_resume(), which is not
where we need to be to continue a read-modify-write operation. 

In xen/x86/hvm/platform.c, this is solved by decoding the entire
operation up front and, based on that, sending a single RMW request to
QEMU (such as IOREQ_TYPE_AND), so we don't have to wait for the read
operation to finish before continuing with the write phase. 

As I see it, there are several possibilities for solving this, but none
of them are particularly trivial to implement. 

The easiest would be to supply a bigger set of function pointers to
x86_mem_emulator, such as and_emulated, or_emulated, xor_emulated, etc.
We could make those optional, and choose the "old" or "new" method based
on whether the pointer is set. 

Another possibility would be to split x86_emulate_memop() up, so that we
could point schedule_tail at the second half of it if necessary - but I
definitely don't like this idea [I'm not even sure it would work - I
haven't actually looked into it]. 

A third, easier, but less pleasing way to solve it would be to retain
the current two decode/emulate code-paths, and just add everything twice
whenever new scenarios need to be decoded - I don't much like this
idea, but it certainly is the least amount of effort to implement!

Thoughts and comments (aside from the obvious "You should have thought
about this earlier!" ;-) would be welcome... 

--
Mats


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
