
Re: [Xen-devel] [PATCH v4 3/9] xen/arm: Implement set_memory_map hypercall for arm



10.10.2013 14:56, Ian Campbell:
On Fri, 2013-10-04 at 13:43 +0900, Jaeyong Yoo wrote:
From: Evgeny Fedotov <e.fedotov@xxxxxxxxxxx>

When creating domU in toolstack, pass the guest memory
map info to the hypervisor, and the hypervisor stores those info in
arch_domain for later use.

Signed-off-by: Evgeny Fedotov <e.fedotov@xxxxxxxxxxx>
---
  tools/libxc/xc_dom_arm.c     | 12 +++++++-
  tools/libxc/xc_domain.c      | 44 ++++++++++++++++++++++++++++
  tools/libxc/xenctrl.h        | 23 +++++++++++++++
  xen/arch/arm/domain.c        |  3 ++
  xen/arch/arm/mm.c            | 68 ++++++++++++++++++++++++++++++++++++++++++++
  xen/include/asm-arm/domain.h |  2 ++
  xen/include/asm-arm/mm.h     |  1 +
  xen/include/public/memory.h  | 15 ++++++++--
  xen/include/xsm/dummy.h      |  5 ++++
  xen/include/xsm/xsm.h        |  5 ++++
  10 files changed, 175 insertions(+), 3 deletions(-)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index df59ffb..20c9095 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -166,6 +166,7 @@ int arch_setup_meminit(struct xc_dom_image *dom)
  {
      int rc;
      xen_pfn_t pfn, allocsz, i;
+    struct dt_mem_info memmap;
      dom->shadow_enabled = 1;
@@ -191,7 +192,16 @@ int arch_setup_meminit(struct xc_dom_image *dom)
              0, 0, &dom->p2m_host[i]);
      }
- return 0;
+    /* setup guest memory map */
+    memmap.nr_banks = 2;
+    memmap.bank[0].start = (dom->rambase_pfn << PAGE_SHIFT_ARM);
+    memmap.bank[0].size = (dom->total_pages << PAGE_SHIFT_ARM);
+    /*The end of main memory: magic pages */
+    memmap.bank[1].start = memmap.bank[0].start + memmap.bank[0].size;
+    memmap.bank[1].size = NR_MAGIC_PAGES << PAGE_SHIFT_ARM;
Are the 0 and 1 here hardcoded magic numbers?
Well, we hardcode two memory regions here: the first for RAM, the second for the "magic pages".
+    return xc_domain_set_memory_map(dom->xch, dom->guest_domid, &memmap);
I think this is using set_memory_map in a different way from how it is used
for x86 (where it gives the PV e820 map, a PV version of a BIOS-provided
data structure).
Do you mean that using the e820 structure for the ARM implementation is better than the dt_mem_info structure (taken from libdt), and that it should be used in the ARM implementation of get/set memory map?
The guest should get its memory map via DTB not via a PV hypercall. I
know the guest isn't using get_memory_map but I don't think we should
even make it available.
OK, this hypercall will be available from dom0 only.
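For context, the layout a guest normally consumes arrives as a memory node in its DTB rather than via a hypercall. A node describing the RAM bank built in the patch would look roughly like this (addresses are illustrative, assuming single-cell address and size):

```dts
/* Illustrative only: a 128MiB guest RAM bank at 0x80000000,
 * as the guest would see it in its device tree. */
memory@80000000 {
    device_type = "memory";
    reg = <0x80000000 0x08000000>;
};
```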

On x86 the magic pages are handled specially in the save restore code
via HVM_PARAMS and not exposed to the guest via this memory map call. I
think that's the way to go here too.


We do need a way to remember and return the guest RAM layout (well, for
now the RAM base is hardcoded, so we could fudge it) but I think a
toolstack internal mechanism like a save record would be better.

Ian.

So, should the get/set memory map hypercall get/set only the main RAM region? Two questions about the details of the next implementation:

1) Can we restrict the memory map to a single region for ARM? In that case we don't need a list or array to define the memory map structure.

2) We have already implemented store & restore of the HVM params (magic page PFNs) inside the ARM toolstack, but I don't know whether we also need to migrate the contents of those pages. The current ARM toolstack migrates the magic page contents as part of the memory map. In the x86 code I cannot find where the contents of the magic pages are saved or restored, so maybe Xen developers familiar with it can comment.

Best regards,
Eugene.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

