On Fri, 2006-09-01 at 15:26 -0400, Orran Y Krieger wrote:
>
> hollisb@xxxxxxxxxxxxxxxxxxxxxxx wrote on 09/01/2006 03:07:03 PM:
>
> > > I got a series of hypervisor calls, but console output no longer
> > > worked. My assumption was that start_info was moved, but as far as I
> > > can tell the memory location is hardcoded in the domain builder.
> >
> > It's not hardcoded; it's chosen to be among the last pages of the domain
> > by create_start_info().
>
> Is it the end of memory, or the end of the RMA? Is there a particular
> reason that this has to be in the RMA region? Just curious: if we
> have post-RMA memory, it's going to be a bit strange to have this in
> the middle of the memory chunk from the allocator's perspective.
Some of those special pages are used for critical functionality like
interrupt delivery and console. Just on principle we want to make sure
that stuff is accessible in real mode.
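In other words, the constraint is basically "below the RMA limit". A
minimal sketch of the check, just to illustrate the point -- "rma_bytes"
here is a stand-in for however the domain's RMA size is actually
recorded, not a real field:

    /* Sketch only: a special page is reachable with the MMU off (real
     * mode) only if it lies entirely below the domain's RMA limit.
     * "rma_bytes" is a placeholder, not an actual Xen field. */
    static int special_page_in_rma(unsigned long gpa, unsigned long rma_bytes)
    {
        return gpa + PAGE_SIZE <= rma_bytes;
    }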
> With the site down I can't look at the code, but a question just
> occurred to me. Is the start_info memory included in the number of
> pages passed in? If it is, then I have to be careful not to give it
> to my allocator, since I don't want it allocated to something else.
> If it's not, then presumably I have to take it into account when
> turning translation on. Either way I probably have a bug here; which
> one do I have?
I'm not sure what you mean by "passed in".
If you're talking about the device tree presented to the domU, currently
there's a single memory node that represents all memory allocated to the
domain. There is also the /xen/start-info property that indicates where
to find the start_info page/structure, and that structure contains the
addresses of the other reserved pages. I suspect your allocator code
needs to be aware of these pages.
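Concretely, I'd expect something like the sketch below on your side.
This is illustrative only: "my_allocator_reserve()" is a made-up helper,
and the start_info_t field names are from memory, so check them against
the headers you actually build with:

    /* Sketch: keep the start_info page and the pages it advertises out
     * of the domU's free-page allocator.  Field names (shared_info,
     * store_mfn, console_mfn) are from memory; the reserve helper is
     * hypothetical. */
    static void reserve_xen_pages(start_info_t *si, unsigned long si_pfn)
    {
        my_allocator_reserve(si_pfn);                        /* start_info itself */
        my_allocator_reserve(si->shared_info >> PAGE_SHIFT); /* shared_info page  */
        my_allocator_reserve(si->store_mfn);                 /* xenstore ring     */
        my_allocator_reserve(si->console_mfn);               /* console ring      */
    }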
If you are going to write code that depends on the device tree, we
should sync up, because I intend to change the device tree layout (in
particular, to remove /xen/start-info and expose its contents via
device tree properties).
> > The reason Jimi doesn't want to use the page_list is that it's a
> > LOT of linked-list chasing you would need to do on the "insert
> > mapping" code path (to translate guest physical to machine).
> >
> > x86 has to do something similar with x86 HVM shadow faults; it looks
> > like that code centers around domain->arch.phys_table in
> > xen/arch/x86/shadow-common.c. That would probably be worth
> > investigating.
>
> Even when not in shadow mode, x86 must look up the page to check its
> validity, right?
Check validity, yes. Translate, no, since they're handed an MFN. We need
to do the translation from GPFN to MFN, and a 4KB-granularity linked
list is a terrible data structure to do that with.
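To make the comparison concrete (illustrative names only, not the actual
Xen code): a flat table indexed by GPFN gives you the translation in a
single memory access, while the page_list costs a pointer chase per 4KB
page on every lookup:

    /* Sketch: O(1) translation with a flat table, one entry per guest
     * page.  "d->arch.p2m" is illustrative, not an existing field. */
    static inline unsigned long gpfn_to_mfn_table(struct domain *d,
                                                  unsigned long gpfn)
    {
        return d->arch.p2m[gpfn];       /* single array lookup */
    }

    /* Versus chasing the per-4KB-page list -- and this even assumes the
     * list happens to be in GPFN order, which isn't guaranteed: */
    static unsigned long gpfn_to_mfn_list(struct domain *d,
                                          unsigned long gpfn)
    {
        struct page_info *pg;
        unsigned long i = 0;

        list_for_each_entry ( pg, &d->page_list, list )
            if ( i++ == gpfn )
                return page_to_mfn(pg);
        return (unsigned long)-1;       /* no such GPFN */
    }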
> I would assume that, except when you are doing migration or
> fine-grained management (or for areas where you are doing page
> flipping), it's a bad idea to manage memory on a per-4KB-page basis.
> The same arguments would apply for x86 when checking permissions and
> for shadow/PPC where you need to do translation. However, I think
> this should be architecture-neutral; it seems like a bad idea to have
> ppc be special.
It looks like the HVM shadow code somehow handles the translation
without removing the 4KB linked list. I suspect their workaround must be
worse than what they could get by changing the data structure.
Actually, I believe the IA64 guys are using "shadow" mode all the time
as well. Between the three architectures, we may have enough of an
argument to motivate selecting a more appropriate data structure,
perhaps mostly on the grounds of improving HVM performance. ;)
--
Hollis Blanchard
IBM Linux Technology Center
_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel