xen-ppc-devel

Re: [XenPPC] [PATCH 3 of 6] [PATCH] xen: implement guest_physmap_max_mem


On Feb 22, 2007, at 5:07 PM, Ryan Harper wrote:

* Hollis Blanchard <hollisb@xxxxxxxxxx> [2007-02-22 15:01]:
On Wed, 2007-02-21 at 18:17 -0500, Ryan Harper wrote:
4 files changed, 72 insertions(+)
xen/arch/powerpc/domain.c        |   60 ++++++++++++++++++++++++++++++++++++++
xen/arch/powerpc/domain_build.c  |    5 +++
xen/include/asm-powerpc/domain.h |    3 +
xen/include/asm-powerpc/shadow.h |    4 ++


# HG changeset patch
# User Ryan Harper <ryanh@xxxxxxxxxx>
# Date 1172103252 21600
# Node ID 35fd77200dff7e73fe3959b5dbfa6088c691c502
# Parent  84ec1b4d5cd50cc9d49202eb978a4715c4780e28
[PATCH] xen: implement guest_physmap_max_mem for ppc

Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>

diff -r 84ec1b4d5cd5 -r 35fd77200dff xen/arch/powerpc/domain.c
--- a/xen/arch/powerpc/domain.c Wed Feb 21 18:14:12 2007 -0600
+++ b/xen/arch/powerpc/domain.c Wed Feb 21 18:14:12 2007 -0600
@@ -33,6 +33,7 @@
 #include <asm/htab.h>
 #include <asm/current.h>
 #include <asm/hcalls.h>
+#include <asm/shadow.h>
 #include "rtas.h"
 #include "exceptions.h"

@@ -347,3 +348,62 @@ void idle_loop(void)
         do_softirq();
     }
 }
+
+int do_guest_physmap_max_mem(struct domain *d, unsigned long new_max)

Could you rename "new_max" to "new_max_pages" so we can keep the units straight? (I know they use "new_max" in the XEN_DOMCTL_max_mem handler.)

Yep.


+{
+    ulong *p2m_array = NULL;
+    ulong *p2m_old = NULL;
+    ulong p2m_pages;
+    ulong copy_end = 0;
+
+    /* we don't handle shrinking max_pages */
+    if ( new_max < d->max_pages )
+    {
+        printk("Can't shrink %d max_mem\n", d->domain_id);
+        return -EINVAL;
+    }

We won't be called in this case, so this test can be removed.

OK.

+    /* our work is done here */
+    if ( new_max == d->max_pages )
+        return 0;
+
+    /* check new_max pages is 16MiB aligned */
+    if ( new_max & ((1<<12)-1) )
+    {
+        printk("Must be 16MiB aligned increments\n");
+        return -EACCES;
+    }

The 16MB thing is because the 970's large page size is 16MB, and the
kernel uses large pages to map its text. That said, I don't see why this should be enforced by Xen when setting max_mem (if ever). Stylistically,
I also object to the literals used here.

Great.  I told myself that I was going to replace the literals since I
was guessing that there might be a common check like cpu_aligned().
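For discussion only, here is one way the check could read with the literals named.  LARGE_PAGE_SHIFT and LARGE_PAGE_PAGES are hypothetical constants (nothing in the tree is implied), assuming 4KiB base pages and the 970's 16MiB large page:

    /* Hypothetical sketch, not the patch: name the 16MiB requirement
     * instead of open-coding (1<<12)-1.  Assumes 4KiB base pages. */
    #define LARGE_PAGE_SHIFT 24                                 /* 16MiB */
    #define LARGE_PAGE_PAGES (1UL << (LARGE_PAGE_SHIFT - PAGE_SHIFT))

        if ( new_max_pages & (LARGE_PAGE_PAGES - 1) )
        {
            printk("max_pages must be a multiple of %lu pages (16MiB)\n",
                   LARGE_PAGE_PAGES);
            return -EINVAL;
        }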


+    /* make a p2m array of new_max mfns */
+    p2m_pages = (new_max * sizeof(ulong)) >> PAGE_SHIFT;
+    /* XXX: could use domheap anonymously */
+    p2m_array = alloc_xenheap_pages(get_order_from_pages(p2m_pages));

It is a shame that these pages will be naturally aligned, which is unnecessary.

+    if ( p2m_array == NULL )
+        return -ENOMEM;

I think the Xen heap is on the small side. Do you have an idea of how
much memory we have available? I suppose we can change it later if we
exhaust the heap.

In the later "heap" mail thread I suggest that you add nr_pages * sizeof(p2m_pages[]) to the xenheap calculation; that will make sure the xenheap is a good size.
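For a rough feel (my arithmetic, not from the thread), the p2m array costs sizeof(ulong) bytes per guest page, so with 8-byte entries and 4KiB pages that is about 2MiB of xenheap per GiB of guest maximum.  The helper below is a hypothetical sketch under those assumptions:

    /* Back-of-the-envelope sketch (not from the patch): xenheap footprint
     * of the p2m array, assuming 8-byte (ulong) entries and 4KiB pages.
     * e.g. 1GiB of guest max => 2^18 pages => 2MiB of p2m. */
    static ulong p2m_xenheap_pages(ulong new_max_pages)
    {
        ulong bytes = new_max_pages * sizeof(ulong);
        return (bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;
    }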


IIRC, when dom0 boots with 192MB (the default) I usually see ~19MB of
heap available in the boot messages on js20.  Looking at js21, I see:

(XEN) Xen Heap: 135MiB (138548KiB)

Is the RMA a different size on js21?



We had talked about using ints for the p2m array, since that would only
limit us to 44 bits of physical memory. Did you decide to use longs
instead?

No, just being lazy.  I wanted to get the patches out for comment ASAP,
but I forgot to note in the mails that we were going to use u32.  I
still plan to switch the p2m array to the smaller type.
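For the record, a minimal sketch (mine, with hypothetical names) of why u32 entries are enough: a 32-bit MFN shifted by the 12-bit page size still reaches 2^(32+12) = 2^44 bytes of physical memory:

    /* Sketch only, not from the patch: with u32 p2m entries and 4KiB
     * pages, a 32-bit MFN plus the page offset spans 2^44 bytes. */
    static u64 p2m_entry_to_paddr(const u32 *p2m, ulong pfn, ulong offset)
    {
        return ((u64)p2m[pfn] << PAGE_SHIFT) | (offset & (PAGE_SIZE - 1));
    }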


+    /* copy old mappings into new array if any */
+    if ( d->arch.p2m != NULL )
+    {
+        /* mark where the copy will stop (which page) */
+        copy_end = d->max_pages;
+
+        /* memcpy takes size in bytes */
+        memcpy(p2m_array, d->arch.p2m, (d->max_pages * sizeof(ulong)));
+
+        /* keep a pointer to the old array */
+        p2m_old = d->arch.p2m;
+    }

This memcpy could be pretty slow; it might be better if we could make this a
continuation some day.  If you agree, could you add a comment to that effect?

Good point.
And in that case it should be a copy_page() loop.
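Something like this, perhaps (a sketch on my part, assuming both arrays come from alloc_xenheap_pages() and so are page-aligned whole pages, and that copy_page() takes destination then source):

    /* Sketch of a copy_page() loop in place of the memcpy.  Rounding up
     * to a page boundary is safe because both arrays occupy whole pages. */
    ulong bytes = d->max_pages * sizeof(ulong);
    ulong off;

    for ( off = 0; off < bytes; off += PAGE_SIZE )
        copy_page((char *)p2m_array + off, (char *)d->arch.p2m + off);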

-JX


_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel
