
Re: [Xen-devel] [PATCH tip/x86/mm] x86_32: calculate additional memory needed by the fixmap

stefano.stabellini@xxxxxxxxxxxxx writes ("[Xen-devel] [PATCH tip/x86/mm] 
x86_32: calculate additional memory needed by the fixmap"):
> Unfortunately this code is rather complex and depends on the behaviour
> of other functions but I hope to have covered all the corner cases.

I'm sorry to say that I think this is really the wrong approach.
I agree that *some* change needs to be made, as this is currently a
serious regression in tip/x86/mm.

But the correct change is in fact to undo the reversion of
  "x86,xen: introduce x86_init.mapping.pagetable_reserve"
That was a hook with a reasonable and defined interface.

What we are having instead, now, is a fragile piece of code that tries
to second-guess the complex config- and hardware-dependent
memory-allocation behaviour of the rest of the startup code.  This is
a recipe for making things break.

Indeed, since the reversion of "mapping.pagetable_reserve" and its
replacement with the new "exact calculation" code, tip/x86/mm has
already needed two separate config-dependent fixes - and the thing
hasn't even been pushed to Linus yet!

This is a hopelessly fragile approach.  We should go back to the new
pvop.  If the interface documentation in 279b706bf800b596 is not
satisfactory I would be happy to help improve it.  To be honest I
think much of the contents of the commit message should be in comments
in the code.

Indeed, I agree that the current lack of coherent semantic
specification for the pvops is a problem.  But the solution is not to
abolish pvops in favour of fragile duplication of logic.  The solution
is to fix the specification comments.

But, if the x86 maintainers absolutely won't have the new pvop then
this new second-guessing code, fragile as it is, has to go in to fix
the regression.


Xen-devel mailing list


