
[Xen-devel] [PATCH/RFC] Implement the memory_map hypercall


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>
  • Date: Fri, 24 Nov 2006 12:08:10 -0200
  • Delivery-date: Fri, 24 Nov 2006 06:08:06 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Keir,

Here's a first draft of an implementation of the memory_map
hypercall. I would like to have comments on it, especially on:

1) I set a new field in the domain structure and, whenever it is set,
use it to determine the maximum map. When it is not set, using max_mem
will most probably give us a better bound than tot_pages, since it may
still allow us to balloon up later, even with tools that do not call
the new domctl (yet to come) that sets the map limit.

2) However, as it currently breaks dom0, I'm leaving the hypercall
unimplemented for privileged domains for now, and plan to do better
once you apply the changes you said you would to the dom0 max_mem
representation. Guests should therefore be prepared to handle -ENOSYS,
as in the sketch below.
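
On the guest side, I expect the map to be consumed roughly as in the
sketch below (untested; set_xen_guest_handle, E820MAX and the e820entry
layout are the usual guest-side definitions, and the -ENOSYS fallback
using xen_start_info->nr_pages is only illustrative):

    static void setup_xen_memory_map(void)
    {
        struct xen_memory_map memmap;
        static struct e820entry map[E820MAX];
        int rc;

        memmap.nr_entries = E820MAX;
        set_xen_guest_handle(memmap.buffer, map);

        rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
        if (rc == -ENOSYS) {
            /* No map provided (old hypervisor, or dom0 for now):
             * synthesize a single E820_RAM entry covering the
             * start-of-day allocation. Other errors are ignored
             * in this sketch. */
            memmap.nr_entries = 1;
            map[0].addr = 0ULL;
            map[0].size = (u64)xen_start_info->nr_pages << PAGE_SHIFT;
            map[0].type = E820_RAM;
        }

        /* map[0 .. memmap.nr_entries-1] now describes the layout the
         * hypervisor wants the guest to assume; hand it to the
         * guest's e820 handling. */
    }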

I'm currently working on the domctl side of things, but I'd like to have
this sorted out first.
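
For the domctl, what I have in mind is a thin setter for the new field,
following the same pattern as the existing setdomainmaxmem handler.
Something along these lines (the sub-op name and the union member are
tentative and are not part of this patch):

    /* xen/common/domctl.c -- tentative sketch only */
    case XEN_DOMCTL_set_memory_map_limit:
    {
        struct domain *d;

        ret = -ESRCH;
        d = find_domain_by_id(op->domain);
        if ( d != NULL )
        {
            d->memory_map_limit = op->u.memory_map_limit.limit;
            put_domain(d);
            ret = 0;
        }
    }
    break;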

Thank you!

-- 
Glauber de Oliveira Costa
Red Hat Inc.
"Free as in Freedom"
# HG changeset patch
# User gcosta@xxxxxxxxxx
# Date 1164380458 18000
# Node ID da7aa8896ab07932160406c8b19a6ad4a61b3af7
# Parent  47fcd5f768fef50cba2fc6dbadc7b75de55e88a5
[XEN] Implement the memory_map hypercall

It is needed to provide guests with a view of their physical
memory map that may differ from simply what is needed to fit
tot_pages.

Signed-off-by: Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>

diff -r 47fcd5f768fe -r da7aa8896ab0 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c Fri Nov 17 08:30:43 2006 -0500
+++ b/xen/arch/x86/mm.c Fri Nov 24 10:00:58 2006 -0500
@@ -2976,7 +2976,45 @@ long arch_memory_op(int op, XEN_GUEST_HA
 
     case XENMEM_memory_map:
     {
-        return -ENOSYS;
+        struct xen_memory_map memmap;
+        struct domain *d;
+        XEN_GUEST_HANDLE(e820entry_t) buffer;
+        struct e820entry map;
+    
+        if ( IS_PRIV(current->domain) )
+            return -ENOSYS;
+
+        d = current->domain;
+
+        if ( copy_from_guest(&memmap, arg, 1) )
+            return -EFAULT;
+
+        buffer = guest_handle_cast(memmap.buffer, e820entry_t);
+        if ( unlikely(guest_handle_is_null(buffer)) ) 
+            return -EFAULT;
+
+        memmap.nr_entries = 1;
+
+        /* If we were not supplied with proper information, the best we
+         * can do is rely on the current max_pages value as a sane bound. */
+        if ( d->memory_map_limit )
+            map.size = d->memory_map_limit;
+        else
+            map.size = (u64)d->max_pages << PAGE_SHIFT;
+
+        /* 8MB slack (to balance backend allocations). */
+        map.size += 8 << 20;
+        map.addr = 0ULL;
+        map.type = E820_RAM;
+
+        if ( copy_to_guest(arg, &memmap, 1) )
+            return -EFAULT;
+
+        if ( copy_to_guest(buffer, &map, 1) )
+            return -EFAULT;
+
+        return 0;
+
     }
 
     case XENMEM_machine_memory_map:
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

