On Fri, 2006-09-01 at 02:02 -0400, Orran Y Krieger wrote:
>
> I've been trying to increase the memory used for our library OS to
> 128MB.
>
> In ppc970.c, I switched the RMA size to 128MB:
>
>
> unsigned int cpu_default_rma_order_pages(void)
> {
> - return rma_orders[0].order - PAGE_SHIFT;
> + return rma_orders[1].order - PAGE_SHIFT;
> }
As I guess Jimi explained offline, domU is created and configured by
xend. The code you've found is really only used for dom0.
> Also, I modified the configuration:
> # Initial memory allocation (in megabytes) for the new domain.
> memory = 128
>
> I got a series of hypervisor calls, but console output no longer
> worked. My assumption was that start_info had moved, but as far as I
> can tell its memory location is hardcoded in the domain builder.
It's not hardcoded; it's chosen to be among the last pages of the domain
by create_start_info().
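For reference, the placement is roughly this shape (a sketch only; the
helper and layout are illustrative, not the actual create_start_info()
code):

/* Sketch: start_info is carved out of the last pages of the domain,
 * so its address moves whenever the domain's memory allocation
 * changes. Illustrative names, not the real domain builder. */
static unsigned long place_start_info(unsigned long total_pages,
                                      unsigned int page_shift)
{
    /* Use the final page of the domain for start_info. */
    return (total_pages - 1UL) << page_shift;
}

That would be consistent with the console breakage you saw: once the
domain grows, anything that assumes the old start_info location won't
find the console info anymore.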
> I got a Linux image from amos and modified its configuration the
> same way.
AFAICS that "CONFIG_PPC_EARLY_DEBUG_XEN_DOMU" thing in udbg_xen.c is the
only hardcoded address. (That really needs to go away, but I guess it
was useful at one point.)
Did you change anything else?
> It hit a bug in Xen:
>
> (XEN) BUG at mm.c:383
Since the RMA for your domU was still 64MB, and we don't support
post-RMA memory yet, this makes sense.
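In other words, the translation path only knows about memory inside the
RMA. Schematically (made-up field names; not the literal code that BUGs
at mm.c:383):

#include <xen/lib.h>  /* BUG() */

/* Illustrative RMA-only translation; these field names are invented
 * for the sketch. */
struct domain_sketch {
    unsigned long rma_base_mfn;  /* first machine frame of the RMA */
    unsigned long rma_pages;     /* RMA size in pages (64MB here) */
};

static unsigned long rma_pfn2mfn(struct domain_sketch *d, unsigned long pfn)
{
    if ( pfn < d->rma_pages )
        return d->rma_base_mfn + pfn;  /* RMA is physically contiguous */
    BUG();  /* a 128MB domU with a 64MB RMA sends pfns here */
}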
> I also changed Xen to have an RMA of 64MB (i.e., undid the change
> above) and changed just the configuration file, and got the same BUG
> at mm.c:383.
Yup, same situation, same result.
> Is there anything else needed to support a 128MB domU?
It sounds like Jimi wants to convince me that our domain memory
allocation path (i.e. increase_reservation) should be different from
everybody else's. Right now he's tracking non-RMA memory (for dom0 only)
in large-page-sized "extents", which is the list_for_each_entry() loop
you see in pfn2mfn(). If domU allocation populated domain.arch.pe_list,
I think pfn2mfn() would work. However, the normal memory allocation path
(that xend uses) doesn't populate this list, and instead populates
domain.page_list.
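Roughly, that extent walk looks like this (field names are my own
shorthand for the sketch, not the actual structures):

#include <xen/list.h>  /* list_for_each_entry() */

/* Assumed extent descriptor; the real list hangs off
 * domain.arch.pe_list. */
struct page_extent {
    struct list_head pe_list;
    unsigned long pfn;    /* first guest pfn covered by this extent */
    unsigned long mfn;    /* first machine frame backing it */
    unsigned long pages;  /* extent length, large-page sized */
};

static unsigned long extent_pfn2mfn(struct list_head *extents,
                                    unsigned long pfn)
{
    struct page_extent *pe;

    /* Walk the (short) list of large extents until one covers pfn. */
    list_for_each_entry ( pe, extents, pe_list )
        if ( pfn >= pe->pfn && pfn < pe->pfn + pe->pages )
            return pe->mfn + (pfn - pe->pfn);

    return (unsigned long)-1;  /* "no translation" sentinel for the
                                  sketch; real code might BUG() */
}

Because the extents are large-page sized, that list stays short, so the
walk is cheap.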
The reason Jimi doesn't want to use the page_list is that it would mean
a LOT of linked-list chasing on the "insert mapping" code path (which
translates guest physical addresses to machine addresses).
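Compare what the same lookup would cost with page_list (a sketch
assuming the usual struct page_info list linkage):

/* One list entry per 4K page means pfn link hops per translation --
 * far too slow for the hot "insert mapping" path. */
static unsigned long pagelist_pfn2mfn(struct domain *d, unsigned long pfn)
{
    struct page_info *pg;
    unsigned long i = 0;

    list_for_each_entry ( pg, &d->page_list, list )
        if ( i++ == pfn )
            return page_to_mfn(pg);  /* O(pfn) hops for one lookup */

    return (unsigned long)-1;  /* sentinel for the sketch */
}

(This also assumes pages stay on page_list in allocation order, which
you'd have to verify.)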
x86 has to solve a similar problem for HVM shadow faults; it looks like
that code centers around domain->arch.phys_table in
xen/arch/x86/shadow-common.c. That would probably be worth
investigating.
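The idea there, as I understand it, is a table indexed by guest pfn, so
translation becomes a lookup instead of a list walk. Conceptually (the
real phys_table is pagetable-shaped, not a flat array):

/* Conceptual only: an array indexed by guest pfn gives constant-time
 * guest-physical-to-machine translation. */
static unsigned long p2m_lookup(const unsigned long *p2m_table,
                                unsigned long pfn)
{
    return p2m_table[pfn];
}

Populating something like that from increase_reservation would keep the
insert-mapping path fast no matter how the memory was allocated.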
--
Hollis Blanchard
IBM Linux Technology Center