xen-ppc-devel

Re: [XenPPC] 128 meg domU???


Thanks for the response.

hollisb@xxxxxxxxxxxxxxxxxxxxxxx wrote on 09/01/2006 03:07:03 PM:

> > I got a series of hypervisor calls, but console output no longer
> > worked.  My assumption was that start_info was moved, but as far as I
> > can tell the memory location is hardcoded in the domain builder.
>
> It's not hardcoded; it's chosen to be among the last pages of the domain
> by create_start_info().

Is it the end of memory, or the end of the RMA?  Is there a particular reason this has to be in the RMA region?  Just curious: if we have post-RMA memory, it's going to be a bit strange to have this in the middle of the memory chunk from the allocator's perspective.

With the site down I can't look at the code, but a question just occurred to me.  Is the start_info memory included in the number of pages passed in?  If it is, then I have to be careful not to hand it to my allocator, since I don't want to allocate it to something else.  If it's not, then presumably I have to take it into account when turning translation on.  Either way I probably have a bug here; which one do I have?
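To pin down what I mean, here is a minimal sketch of what the guest side would have to do if the start_info frame is counted in nr_pages.  It assumes the guest tracks free physical frames in a bitmap; phys_bitmap, MAX_FRAMES and the helpers are made-up names for illustration, not Xen or real guest code.

/*
 * Minimal sketch, assuming the start_info frame is counted in the
 * nr_pages the domain builder reports and that the guest tracks free
 * physical frames in a bitmap.  phys_bitmap, MAX_FRAMES and the
 * helpers below are illustrative names only.
 */
#include <stdint.h>

#define MAX_FRAMES      (128UL * 1024 * 1024 / 4096)   /* 128MB of 4K frames */
#define BITS_PER_LONG   (8 * sizeof(unsigned long))

static unsigned long phys_bitmap[MAX_FRAMES / BITS_PER_LONG];

static void mark_frame_reserved(unsigned long pfn)
{
    phys_bitmap[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
}

/* Keep the start_info frame out of the allocator's free pool. */
static void reserve_start_info(unsigned long nr_pages,
                               unsigned long start_info_pfn)
{
    if (start_info_pfn < nr_pages)           /* counted in nr_pages ...   */
        mark_frame_reserved(start_info_pfn); /* ... so never hand it out  */
    /* If it is *not* counted, nothing to reserve here, but the frame
     * still has to be mapped explicitly once translation is on.          */
}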

> > is there anything else needed to support 128 meg domu?
>
> It sounds like Jimi wants to convince me that our domain memory
> allocation path (i.e. increase_reservation) should be different from
> everybody else's. Right now he's tracking non-RMA memory (for dom0 only)
> in large-page-sized "extents", which is the list_for_each_entry() loop
> you see in pfn2mfn(). If domU allocation populated domain.arch.pe_list,
> I think pfn2mfn() would work. However, the normal memory allocation path
> (that xend uses) doesn't populate this list, and instead populates
> domain.page_list.
>
> The reason Jimi doesn't want to use the page_list is because that's a
> LOT of linked list chasing you would need to do on the "insert mapping"
> code path (to translate guest physical to machine).
>
> x86 has to do something similar with x86 HVM shadow faults; it looks
> like that code centers around domain->arch.phys_table in
> xen/arch/x86/shadow-common.c. That would probably be worth
> investigating.
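If I'm reading the pfn2mfn()/pe_list description right, the lookup is roughly the shape below.  This is only a sketch based on your description; the struct and field names are approximations, not taken from the tree.

/*
 * Sketch of an extent-based guest-pfn to machine-frame lookup, as I
 * understand the pe_list / pfn2mfn() description above.  Structure and
 * field names are approximations for illustration, not the actual
 * Xen/PPC definitions.
 */
#include <stddef.h>

#define INVALID_MFN  (~0UL)

struct page_extent {
    unsigned long       gpfn_base;   /* first guest pfn covered by extent */
    unsigned long       mfn_base;    /* first machine frame of extent     */
    unsigned long       nr_frames;   /* extent size in 4K frames          */
    struct page_extent *next;
};

/* Walk a handful of large extents instead of chasing page_list
 * entries one 4K page at a time. */
static unsigned long pfn_to_mfn(const struct page_extent *pe_list,
                                unsigned long gpfn)
{
    const struct page_extent *pe;

    for (pe = pe_list; pe != NULL; pe = pe->next)
        if (gpfn >= pe->gpfn_base && gpfn - pe->gpfn_base < pe->nr_frames)
            return pe->mfn_base + (gpfn - pe->gpfn_base);

    return INVALID_MFN;   /* gpfn not backed by any extent of this domain */
}

With large-page-sized extents, even a 128 meg domU would only be a few list entries, so the walk on the insert-mapping path stays cheap.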

Even when not in shadow mode, x86 must look up the page to check validity, right?  I would assume that, except when you are doing migration or fine-grained management (or for areas where you are doing page flipping), it's a bad idea to manage on a per-4K-page basis.  The same argument applies to x86 for checking permissions and to shadow/PPC where you need to do translation.  However, I think this should be architecture-neutral; it seems like a bad idea to have PPC be special.
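To make that concrete, an architecture-neutral check on the "insert mapping" path could reuse the extent lookup sketched above.  Again, this is only a sketch and none of these names exist in the tree.

/*
 * Sketch of an architecture-neutral check on the "insert mapping" path:
 * one extent lookup to translate and validate the guest frame, instead
 * of walking a per-4K page_list.  Reuses pfn_to_mfn() from the sketch
 * above; the names here are made up for illustration.
 */
static int validate_and_translate(const struct page_extent *pe_list,
                                  unsigned long gpfn,
                                  unsigned long *mfn_out)
{
    unsigned long mfn = pfn_to_mfn(pe_list, gpfn);

    if (mfn == INVALID_MFN)
        return -1;          /* gpfn is not owned by this domain */

    /* Permission/validity policy would hang off the extent (or the
     * domain), not off each individual 4K page.                     */
    *mfn_out = mfn;
    return 0;
}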
_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel