
[Xen-devel] [PATCH v6 00/16] x86/hvm: I/O emulation cleanup and fix



This patch series re-works much of the code involved in emulation of port
and memory mapped I/O for HVM guests.

The code has become very convoluted and, even on inspection alone, it is
apparent that certain emulations will malfunction.

The series is broken down into 16 patches (which are also available in
my xenbits repo: http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git
on the emulation33 branch).

Previous changelog
------------------

v4:
 - Removed the previous patch (make sure translated MMIO reads or
   writes fall within a page) and rebased the rest of the series.
 - Addressed Jan's comments on patch #1

v3:
 - Addressed comments from Jan
 - Re-ordered series to bring a couple of more trivial patches to the
   front
 - Backport to XenServer (4.5) now passing automated tests
 - Tested on unstable with QEMU upstream and trad, with and without
   HAP (to force shadow emulation)

v2:
 - Removed bogus assertion from patch #15
 - Re-worked patch #17 after basic testing of back-port onto XenServer


Changelog (now per-patch)
-------------------------

0001-x86-hvm-make-sure-emulation-is-retried-if-domain-is-.patch

v6: Added Andrew's reviewed-by

v5: New patch to fix an issue on staging reported by Don Slutz


0002-x86-hvm-remove-multiple-open-coded-chunking-loops.patch

v6: Addressed Andrew's comments

v5: Addressed further comments from Jan


0003-x86-hvm-change-hvm_mmio_read_t-and-hvm_mmio_write_t-.patch

v6: Added Andrew's reviewed-by

v5: New patch to tidy up types


0004-x86-hvm-restrict-port-numbers-to-uint16_t-and-sizes-.patch

v6: Added Andrew's reviewed-by

v5: New patch to tidy up more types


0005-x86-hvm-unify-internal-portio-and-mmio-intercepts.patch

v6: Added Andrew's reviewed-by and made the modification requested
    by Roger

v5: Addressed further comments from Jan and simplified implementation
    by passing ioreq_t to accept() function
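
    For illustration, the shape of that simplification is roughly as
    below: a single accept() callback is handed the whole request rather
    than a growing list of discrete arguments. This is only a hedged
    sketch; the struct is a trimmed stand-in for Xen's public ioreq_t and
    the handler/field names are hypothetical, not the in-tree interfaces.

    /*
     * Illustrative sketch only: a trimmed stand-in for ioreq_t plus a
     * hypothetical handler type; not the actual Xen definitions.
     */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct sketch_ioreq {
        uint64_t addr;   /* port number or guest physical address */
        uint32_t size;   /* bytes per individual access */
        uint32_t count;  /* repeat count for rep instructions */
        uint8_t  type;   /* 0 = port I/O, 1 = memory-mapped I/O */
        uint8_t  dir;    /* 0 = read, 1 = write */
    } sketch_ioreq;

    struct sketch_io_handler {
        uint8_t  type;         /* which kind of request this handler serves */
        uint64_t start, end;   /* half-open range [start, end) it claims */
        /* Passing the whole request avoids per-caller argument plumbing. */
        bool (*accept)(const struct sketch_io_handler *h,
                       const sketch_ioreq *p);
    };

    /* A generic range-based accept() needs nothing beyond the request. */
    static bool sketch_range_accept(const struct sketch_io_handler *h,
                                    const sketch_ioreq *p)
    {
        return p->type == h->type &&
               p->addr >= h->start &&
               p->addr + (uint64_t)p->size * p->count <= h->end;
    }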


0006-x86-hvm-add-length-to-mmio-check-op.patch

v6: Added Andrew's reviewed-by

v5: Simplified by leaving mmio_check() implementation alone and
    calling to check last byte if first-byte check passes
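
    To make the v5 note concrete: an unchanged, single-address mmio check
    callback can simply be applied twice, once to the first byte and once
    to the last byte of the access, so no length parameter needs to be
    added to the callback itself. A minimal sketch follows, using
    hypothetical names (the callback type and helper below are not the
    actual Xen ones).

    /*
     * Illustrative sketch only: hypothetical names, not the in-tree
     * mmio check interface.
     */
    #include <stdbool.h>
    #include <stdint.h>

    typedef bool (*sketch_mmio_check_t)(uint64_t addr);

    /*
     * Accept an access of 'size' bytes at 'addr' only if both its first
     * and its last byte pass the handler's existing single-address check.
     */
    static bool sketch_mmio_accepts(sketch_mmio_check_t check,
                                    uint64_t addr, unsigned int size)
    {
        if ( !check(addr) )
            return false;

        return size <= 1 || check(addr + size - 1);
    }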


0007-x86-hvm-unify-dpci-portio-intercept-with-standard-po.patch

v6: Added Andrew's reviewed-by

v5: Addressed further comments from Jan


0008-x86-hvm-unify-stdvga-mmio-intercept-with-standard-mm.patch

v6: Added Andrew's reviewed-by

v5: Fixed semantic problems pointed out by Jan


0009-x86-hvm-limit-reps-to-avoid-the-need-to-handle-retry.patch

v6: Added comment requested by Andrew

v5: Addressed further comments from Jan


0010-x86-hvm-only-call-hvm_io_assist-from-hvm_wait_for_io.patch

v6: Added Andrew's reviewed-by

v5: Added Jan's acked-by


0011-x86-hvm-split-I-O-completion-handling-from-state-mod.patch

v6: Added Andrew's reviewed-by

v5: Confirmed call to msix_write_completion() is in the correct place.


0012-x86-hvm-remove-HVMIO_dispatched-I-O-state.patch

v6: Added Andrew's reviewed-by

v5: Added some extra comments to the commit


0013-x86-hvm-remove-hvm_io_state-enumeration.patch

v6: Added Andrew's reviewed-by

v5: Added Jan's acked-by


0014-x86-hvm-use-ioreq_t-to-track-in-flight-state.patch

v6: Added Andrew's reviewed-by

v5: Added missing hunk with call to handle_pio()


0015-x86-hvm-always-re-emulate-I-O-from-a-buffer.patch

v6: Added Andrew's reviewed-by

v5: Added Jan's acked-by


0016-x86-hvm-track-large-memory-mapped-accesses-by-buffer.patch

v6: Added Andrew's reviewed-by

v5: Fixed to cache up to three distinct I/O emulations per instruction
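
    As a rough illustration of what caching an I/O emulation means here:
    each completed read or write is remembered, keyed by address, size and
    direction, so that when the instruction is re-emulated the same access
    can be replayed from the cache instead of being re-issued. The sketch
    below uses a hypothetical fixed-size structure and names of my own; it
    is not the actual patch.

    /*
     * Illustrative sketch only: hypothetical types and names, not the
     * structures introduced by the patch.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define SKETCH_IO_CACHE_SLOTS 3  /* up to three accesses per instruction */

    struct sketch_io_entry {
        uint64_t addr;
        unsigned int size;
        uint8_t dir;                  /* 0 = read, 1 = write */
        uint8_t buffer[64];           /* completed data for the access */
    };

    struct sketch_io_cache {
        unsigned int count;
        struct sketch_io_entry ent[SKETCH_IO_CACHE_SLOTS];
    };

    /* Find a previously completed access, or NULL if it is not cached. */
    static struct sketch_io_entry *
    sketch_io_cache_find(struct sketch_io_cache *c, uint64_t addr,
                         unsigned int size, uint8_t dir)
    {
        for ( unsigned int i = 0; i < c->count; i++ )
            if ( c->ent[i].addr == addr && c->ent[i].size == size &&
                 c->ent[i].dir == dir )
                return &c->ent[i];
        return NULL;
    }

    /* Record a completed access so re-emulation can replay it. */
    static bool
    sketch_io_cache_add(struct sketch_io_cache *c, uint64_t addr,
                        unsigned int size, uint8_t dir, const void *data)
    {
        struct sketch_io_entry *e;

        if ( c->count >= SKETCH_IO_CACHE_SLOTS ||
             size > sizeof(e->buffer) )
            return false;             /* caller must treat as uncacheable */

        e = &c->ent[c->count++];
        e->addr = addr;
        e->size = size;
        e->dir  = dir;
        memcpy(e->buffer, data, size);
        return true;
    }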

Testing
-------

The series has been back-ported to staging-4.5 and then dropped onto the
XenServer (Dundee) patch queue. All automated branch-safety tests pass.

The series as-is has been manually tested with a Windows 7 (32-bit) VM
using upstream QEMU.



 

