[Xen-devel] [PATCH v3 for 4.5] ioreq-server: handle the lack of a default emulator properly



I started porting QEMU over to use the new ioreq server API and hit a
problem with PCI bus enumeration. Because, with my patches, QEMU only
registers to handle config space accesses for the PCI device it
implements, all other attempts by the guest to access 0xcfc go nowhere,
and this was causing the vcpu to wedge because nothing was completing
the I/O.

This patch introduces an I/O completion handler into the hypervisor for the
case where no ioreq server matches a particular request. Read requests are
completed with 0xf's in the data buffer; writes and all other I/O req types
are ignored.
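
To illustrate the backwards string case fixed in v3: when p->data_is_ptr
is set the data to return lives in a guest buffer rather than in the ioreq
itself, so the handler fills it element by element, stepping backwards
when the direction flag (df) is set. The following stand-alone sketch is
not Xen code (plain memcpy stands in for hvm_copy_to_guest_phys()); it
just shows the fill pattern:

/* Stand-alone sketch of the read-completion fill logic: each of 'count'
 * elements of 'size' bytes is filled with 0xff, stepping forwards or
 * backwards through the buffer depending on the direction flag, mirroring
 * the df handling in hvm_complete_assist_req(). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void fill_all_ones(uint8_t *buf, unsigned int count,
                          unsigned int size, int df)
{
    uint32_t data = ~0u;
    int step = df ? -(int)size : (int)size;
    unsigned int i;

    for ( i = 0; i < count; i++ )
        memcpy(buf + step * i, &data, size); /* stands in for hvm_copy_to_guest_phys() */
}

int main(void)
{
    uint8_t buf[8] = { 0 };
    unsigned int i;

    /* A 2-element, 4-byte backwards (df = 1) read starting at buf + 4:
     * element 0 fills buf[4..7], element 1 fills buf[0..3]. */
    fill_all_ones(buf + 4, 2, 4, 1);

    for ( i = 0; i < sizeof(buf); i++ )
        printf("%02x ", buf[i]);
    printf("\n"); /* prints: ff ff ff ff ff ff ff ff */

    return 0;
}

With df set the supplied pointer addresses the first element and each
subsequent element lands at a lower address, matching how x86 string
instructions behave with EFLAGS.DF set.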

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
---
v3: - Fix for backwards string instruction emulation

v2: - First non-RFC submission
    - Removed warning on unemulated MMIO accesses

 xen/arch/x86/hvm/hvm.c |   35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5c7e0a4..e6611ed 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2386,8 +2386,7 @@ static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
         return NULL;
 
-    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
-         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return d->arch.hvm_domain.default_ioreq_server;
 
     cf8 = d->arch.hvm_domain.pci_cf8;
@@ -2618,12 +2617,42 @@ bool_t hvm_send_assist_req_to_ioreq_server(struct hvm_ioreq_server *s,
     return 0;
 }
 
+static bool_t hvm_complete_assist_req(ioreq_t *p)
+{
+    switch (p->type)
+    {
+    case IOREQ_TYPE_COPY:
+    case IOREQ_TYPE_PIO:
+        if ( p->dir == IOREQ_READ )
+        {
+            if ( !p->data_is_ptr )
+                p->data = ~0ul;
+            else
+            {
+                int i, step = p->df ? -p->size : p->size;
+                uint32_t data = ~0;
+
+                for ( i = 0; i < p->count; i++ )
+                    hvm_copy_to_guest_phys(p->data + step * i, &data,
+                                           p->size);
+            }
+        }
+        /* FALLTHRU */
+    default:
+        p->state = STATE_IORESP_READY;
+        hvm_io_assist(p);
+        break;
+    }
+
+    return 1;
+}
+
 bool_t hvm_send_assist_req(ioreq_t *p)
 {
     struct hvm_ioreq_server *s = hvm_select_ioreq_server(current->domain, p);
 
     if ( !s )
-        return 0;
+        return hvm_complete_assist_req(p);
 
     return hvm_send_assist_req_to_ioreq_server(s, p);
 }
-- 
1.7.10.4


