To: Xen Devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] libxl: Specify the target ram size to Qemu (new) when calling it
From: anthony.perard@xxxxxxxxxx
Date: Thu, 16 Dec 2010 14:16:40 +0000
Cc: anthony.perard@xxxxxxxxxx
Delivery-date: Thu, 16 Dec 2010 06:18:21 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
From: Anthony PERARD <anthony.perard@xxxxxxxxxx>

This patch adds a target_ram field to the device_model_info structure, to be
used in libxl_build_device_model_args_new; upstream Qemu needs to be told the
target RAM size when it is invoked.

It also introduces libxl__sizekb_to_mb to convert a size from KB to MB,
rounding the result up.
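
For illustration only (not part of the patch): a minimal standalone C sketch
of the round-up conversion and of the resulting "-m" argument. The conversion
body mirrors the helper added below; main(), the sample value and the output
formatting are invented for this example.

    /* Illustration only, not part of the patch. */
    #include <stdio.h>
    #include <stdint.h>

    static inline uint32_t sizekb_to_mb(uint32_t s)
    {
        /* Round up, so up to 1023 KB of remainder is not silently lost. */
        return (s + 1023) / 1024;
    }

    int main(void)
    {
        uint32_t target_memkb = 524289;   /* 512 MB plus 1 KB, sample value */
        uint32_t target_ram = sizekb_to_mb(target_memkb);

        /* Qemu's -m option takes a size in MB. */
        printf("-m %u\n", target_ram);    /* prints "-m 513" */
        return 0;
    }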

Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
---
 tools/libxl/libxl.c       |    4 ++++
 tools/libxl/libxl.idl     |    1 +
 tools/libxl/libxl_utils.h |    4 ++++
 tools/libxl/xl_cmdimpl.c  |    3 ++-
 4 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index aa28c72..9dfd211 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1384,6 +1384,10 @@ static char ** libxl_build_device_model_args_new(libxl__gc *gc,
     else
         flexarray_set(dm_args, num++, "xenfv");
 
+    /* RAM Size */
+    flexarray_set(dm_args, num++, "-m");
+    flexarray_set(dm_args, num++, libxl__sprintf(gc, "%d", info->target_ram));
+
     if (info->type == XENFV) {
         disks = libxl_device_disk_list(libxl__gc_owner(gc), info->domid, &nb);
         for (i; i < nb; i++) {
diff --git a/tools/libxl/libxl.idl b/tools/libxl/libxl.idl
index 8dd7749..89694b1 100644
--- a/tools/libxl/libxl.idl
+++ b/tools/libxl/libxl.idl
@@ -139,6 +139,7 @@ libxl_device_model_info = Struct("device_model_info",[
     ("device_model",     string),
     ("saved_state",      string),
     ("type",             libxl_qemu_machine_type),
+    ("target_ram",       uint32),
     ("videoram",         integer,           False, "size of the videoram in 
MB"),
     ("stdvga",           bool,              False, "stdvga enabled or 
disabled"),
     ("vnc",              bool,              False, "vnc enabled or disabled"),
diff --git a/tools/libxl/libxl_utils.h b/tools/libxl/libxl_utils.h
index 7846c42..940fecd 100644
--- a/tools/libxl/libxl_utils.h
+++ b/tools/libxl/libxl_utils.h
@@ -82,5 +82,9 @@ void libxl_cpumap_set(libxl_cpumap *cpumap, int cpu);
 void libxl_cpumap_reset(libxl_cpumap *cpumap, int cpu);
 #define libxl_for_each_cpu(var, map) for (var = 0; var < (map).size * 8; var++)
 
+static inline uint32_t libxl__sizekb_to_mb(uint32_t s) {
+    return (s + 1023) / 1024;
+}
+
 #endif
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 5555319..3718a5a 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -359,7 +359,8 @@ static void init_dm_info(libxl_device_model_info *dm_info,
 
     dm_info->dom_name = strdup(c_info->name);
     dm_info->device_model = strdup("qemu-dm");
-    dm_info->videoram = b_info->video_memkb / 1024;
+    dm_info->target_ram = libxl__sizekb_to_mb(b_info->target_memkb);
+    dm_info->videoram = libxl__sizekb_to_mb(b_info->video_memkb);
     dm_info->apic = b_info->u.hvm.apic;
     dm_info->vcpus = b_info->max_vcpus;
     dm_info->vcpu_avail = b_info->cur_vcpus;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
