
Re: [Xen-devel] [PATCH v2 16/17] libxc/xc_dom_arm: Copy ACPI tables to guest space



Hi Stefano,

On 05/07/16 18:13, Stefano Stabellini wrote:
On Thu, 23 Jun 2016, Julien Grall wrote:
On 23/06/2016 04:17, Shannon Zhao wrote:
From: Shannon Zhao <shannon.zhao@xxxxxxxxxx>

Copy all the ACPI tables to guest space so that UEFI or the guest can
access them.

Signed-off-by: Shannon Zhao <shannon.zhao@xxxxxxxxxx>
---
  tools/libxc/xc_dom_arm.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 51 insertions(+)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index 64a8b67..6a0a5b7 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -63,6 +63,47 @@ static int setup_pgtables_arm(struct xc_dom_image *dom)

  /* ------------------------------------------------------------------------ */

+static int xc_dom_copy_acpi(struct xc_dom_image *dom)
+{
+    int rc, i;
+    uint32_t pages_num = ROUNDUP(dom->acpitable_size, XC_PAGE_SHIFT) >>
+                         XC_PAGE_SHIFT;
+    const xen_pfn_t base = GUEST_ACPI_BASE >> XC_PAGE_SHIFT;
+    xen_pfn_t *p2m;
+    void *acpi_pages;
+
+    p2m = malloc(pages_num * sizeof(*p2m));
+    for (i = 0; i < pages_num; i++)
+        p2m[i] = base + i;
+
+    rc = xc_domain_populate_physmap_exact(dom->xch, dom->guest_domid,
+                                          pages_num, 0, 0, p2m);
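
(As an aside, for anyone reading along: below is a minimal, self-contained
sketch of the populate-then-copy pattern the commit message describes, with
the error handling this hunk omits. The helper name and the
xc_map_foreign_pages/memcpy step are illustrative assumptions on my part,
not necessarily what the rest of the patch does.)

/* Illustrative sketch only -- not part of the patch under review. */
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <xenctrl.h>

static int copy_blob_to_guest(xc_interface *xch, uint32_t domid,
                              xen_pfn_t base_pfn, const void *blob,
                              size_t size)
{
    size_t nr = (size + XC_PAGE_SIZE - 1) >> XC_PAGE_SHIFT;
    xen_pfn_t *p2m;
    void *map;
    size_t i;
    int rc;

    p2m = malloc(nr * sizeof(*p2m));
    if ( !p2m )
        return -1;

    for ( i = 0; i < nr; i++ )
        p2m[i] = base_pfn + i;

    /* Only succeeds if the domain's maximum memory leaves room for these
     * extra pages -- which is exactly the concern discussed below. */
    rc = xc_domain_populate_physmap_exact(xch, domid, nr, 0, 0, p2m);
    if ( rc )
        goto out;

    /* Map the freshly populated frames and copy the blob in. */
    map = xc_map_foreign_pages(xch, domid, PROT_READ | PROT_WRITE, p2m, nr);
    if ( !map )
    {
        rc = -1;
        goto out;
    }

    memcpy(map, blob, size);
    munmap(map, nr * XC_PAGE_SIZE);

 out:
    free(p2m);
    return rc;
}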

Hmmmm... it looks like this populate call only succeeds because libxl sets
the domain's maximum memory with some slack (1MB) on top of what was asked
for. However, I suspect that slack was meant for something else. Wei,
Stefano, Ian, can you confirm?

If I recall correctly, the slack is a magic value coming from the
ancient history of toolstacks.

Does that mean we need to increase the slack to take the ACPI blob into account?
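
If it helps frame the question: the extra slack would just be the blob
rounded up to whole pages. A back-of-the-envelope sketch follows; the libxl
symbol names in the comment (LIBXL_MAXMEM_CONSTANT as the 1MB slack and the
xc_domain_setmaxmem call site) are from memory, so treat them as
assumptions rather than a proposed patch.

#include <stdint.h>

/* Sketch: extra max_mem, in KiB, that the ACPI blob would require on top of
 * whatever slack libxl already adds. Purely illustrative arithmetic. */
static uint64_t acpi_slack_kb(uint64_t acpitable_size)
{
    uint64_t pages = (acpitable_size + 4095) / 4096;   /* whole 4K pages */

    return pages * 4;                                  /* 4 KiB per page */
}

/* Roughly where it would plug in (assumed call site, not a real patch):
 *   xc_domain_setmaxmem(ctx->xch, domid,
 *                       info->target_memkb + LIBXL_MAXMEM_CONSTANT
 *                       + acpi_slack_kb(acpitable_size));
 */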

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
