
Re: [Xen-devel] [PATCH] xen/x86: p2m: Don't initialize slot 0 of the P2M

At 18:31 +0000 on 03 Feb (1580754711), Julien Grall wrote:
> On 03/02/2020 17:37, George Dunlap wrote:
> > On 2/3/20 5:22 PM, Julien Grall wrote:
> >> On 03/02/2020 17:10, George Dunlap wrote:
> >>> On 2/3/20 4:58 PM, Julien Grall wrote:
> >>>> From: Julien Grall <jgrall@xxxxxxxxxx>
> >>>>
> >>>> It is not entirely clear why slot 0 of each p2m should be populated
> >>>> with empty page-tables. The commit that introduced it, 759af8e3800
> >>>> ("[HVM] Fix 64-bit HVM domain creation."), contains no meaningful
> >>>> explanation beyond it being necessary for shadow.
> >>>
> >>> Tim, any ideas here?

Afraid not, sorry.  I can't think what would rely on the tables being
allocated for slot 0 in particular.  Maybe there's something later
that adds other entries in the bottom 2MB and can't handle a table
allocation failure?

> > Also, it's not clear to me what kind of bug the code you're deleting
> > would fix.  If you read a not-present entry, you should get INVALID_MFN
> > anyway.  Unless you were calling p2m_get_entry_query(), which I'm pretty
> > sure hadn't been introduced at this point.
> I can't find this function you mention in staging. Was it removed recently?
> The code is allocating all page-tables for _gfn(0). I would not expect 
> the common code to care whether a table is allocated or not. So this 
> would suggest that an internal implementation (of the shadow?) is 
> relying on this.
> However, I can't find anything obvious suggesting that it is necessary. 
> If there were anything, I would expect it to happen during domain 
> creation, as neither Xen nor a guest could rely on this (there are ways 
> to make those pages disappear with the MEMORY op hypercall).

That may not have been true at the time (and so whatever it was that
needed this may have been fixed once it became true?).


