
[Xen-devel] [PATCH v11 09/14] tools/libxl: handle the iomem parameter with the memory_mapping hcall



Currently, the configuration-parsing code that handles the iomem
parameter only invokes the XEN_DOMCTL_iomem_permission hypercall.
This commit additionally invokes the XEN_DOMCTL_memory_mapping
hypercall after XEN_DOMCTL_iomem_permission when the iomem parameter
is parsed from a domU configuration file, so that the address range
can be mapped into the domU's address space. The mapping hypercall is
invoked only for domains that use an auto-translated physmap.

Signed-off-by: Arianna Avanzini <avanzini.arianna@xxxxxxxxx>
Acked-by: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Acked-by: Julien Grall <julien.grall@xxxxxxxxxx>
Cc: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Cc: Paolo Valente <paolo.valente@xxxxxxxxxx>
Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Cc: Jan Beulich <JBeulich@xxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
Cc: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
Cc: Viktor Kleinik <viktor.kleinik@xxxxxxxxxxxxxxx>
Cc: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>

---

    v9:
        - Enforce checks on the return value of xc_domain_getinfo(), as it
          could return information about the wrong domain.
        - Don't use PERROR for xc_core_* functions.
        - Avoid emitting an error message if xc_core_auto_translated_physmap()
          fails.

    v8:
        - Fix the wrong error message emitted when the memory_mapping DOMCTL
          is invoked for a domain that is not auto-translated.

    v7:
        - Move the check for an auto-translated physmap to xc to avoid having
          to expose the information to libxl.

    v4:
        - Let the hypercall be called only in case the guest uses an
          auto-translated physmap.

    v3:
        - Add ifdefs to let the hypercall be called only by ARM or x86
          HVM guests, with a whitelist approach.
        - Remove the NOTE comment.

    v2:
        - Add a comment explaining outstanding issues.

---
 tools/libxc/xc_domain.c    | 10 ++++++++++
 tools/libxl/libxl_create.c | 11 +++++++++++
 2 files changed, 21 insertions(+)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c67ac9a..1eba393 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1960,6 +1960,16 @@ int xc_domain_memory_mapping(
     uint32_t add_mapping)
 {
     DECLARE_DOMCTL;
+    xc_dominfo_t info;
+
+    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
+         info.domid != domid )
+    {
+        PERROR("Could not get info for domain");
+        return -EINVAL;
+    }
+    if ( !xc_core_arch_auto_translated_physmap(&info) )
+        return 0;
 
     domctl.cmd = XEN_DOMCTL_memory_mapping;
     domctl.domain = domid;
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 3450337..3b1ec91 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1161,6 +1161,17 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
                  domid, io->start, io->start + io->number - 1);
             ret = ERROR_FAIL;
+            continue;
+        }
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->gfn, io->start,
+                                       io->number, 1);
+        if (ret < 0) {
+            LOGE(ERROR,
+                 "failed to map to dom%d iomem range %"PRIx64"-%"PRIx64
+                 " to guest address %"PRIx64,
+                 domid, io->start, io->start + io->number - 1, io->gfn);
+            ret = ERROR_FAIL;
         }
     }
 
-- 
2.1.0




 

