
Re: [Xen-devel] [PATCH 00/14] XSA-277 followup



On Wed, Nov 21, 2018 at 5:08 PM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>
> On 21/11/2018 22:42, Tamas K Lengyel wrote:
> > On Wed, Nov 21, 2018 at 2:22 PM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
> > wrote:
> >> On 21/11/2018 17:19, Tamas K Lengyel wrote:
> >>> On Wed, Nov 21, 2018 at 6:21 AM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
> >>> wrote:
> >>>> This covers various fixes related to XSA-277 which weren't in security
> >>>> supported areas, and associated cleanup.
> >>>>
> >>>> The biggest issue noticed here is that altp2m's use of hardware #VE
> >>>> support will cause general memory corruption if the guest ever
> >>>> balloons out the VEINFO page.  The only safe way I can think of doing
> >>>> this is for Xen to allocate anonymous domheap pages for the VEINFO,
> >>>> and for the guest to map them in a similar way to the shared info and
> >>>> grant table frames.
> >>> Since ballooning presents all sorts of problems when used with altp2m
> >>> I would suggest just making the two explicitly incompatible during
> >>> domain creation. Beside the info page being possibly ballooned out the
> >>> other problem is when ballooning causes altp2m views to be reset
> >>> completely, removing mem_access permissions and remapped entries.
> >> If only it were that simple.
> >>
> >> For reasons of history and/or poor terminology, "ballooning" means two
> >> things.
> >>
> >> 1) The act of the Toolstack interacting with the balloon driver inside a
> >> VM, to change the current amount of RAM used by the guest.
> >>
> >> 2) XENMEM_{increase,decrease}_reservation which are the underlying
> >> hypercalls used by guest kernels.
> >>
> >> For the toolstack interaction side of things, this is a mess.  There is
> >> a single xenstore key, and a blind assumption that all guests know what
> >> changes to memory/target mean.  There is no negotiation of whether a
> >> balloon driver is running in the guest, and if one is running, there is
> >> no ability for the balloon driver to nack a request it can't fulfil.
> >> The sole feedback mechanism which exists is the toolstack looking to see
> >> whether the domain has changed the amount of RAM it is using.
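> >>
> >> Concretely, the toolstack side amounts to a blind xenstore write of
> >> the memory/target key (value in KiB).  A minimal sketch using
> >> libxenstore (path and units per the common convention; this is not
> >> lifted from xl itself):
> >>
> >>     #include <stdbool.h>
> >>     #include <stdio.h>
> >>     #include <string.h>
> >>     #include <xenstore.h>
> >>
> >>     static bool set_balloon_target(int domid, unsigned long target_kib)
> >>     {
> >>         struct xs_handle *xsh = xs_open(0);
> >>         char path[64], val[32];
> >>         bool ok = false;
> >>
> >>         if (!xsh)
> >>             return false;
> >>
> >>         snprintf(path, sizeof(path),
> >>                  "/local/domain/%d/memory/target", domid);
> >>         snprintf(val, sizeof(val), "%lu", target_kib);
> >>
> >>         /* Fire and forget: there is no nack path, and the only
> >>          * feedback is polling how much RAM the domain actually uses
> >>          * afterwards. */
> >>         ok = xs_write(xsh, XBT_NULL, path, val, strlen(val));
> >>         xs_close(xsh);
> >>         return ok;
> >>     }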
> >>
> >> PV guests are fairly "special" by any reasonable judgement.  They are
> >> fully aware of their memory layout, and of changes to it across
> >> migrate.  "Ballooning" was implemented at a time when most computers had
> >> MB of RAM rather than GB, and the knowledge a PV guest had was "I've got
> >> a random set of MFNs which aren't currently used by anything important,
> >> and can be handed back to Xen on request".  Xen guests also have shared
> >> memory constructs such as the shared_info page, and grant tables.  A PV
> >> guest gets access to these by programming the frame straight into the
> >> pagetables, and Xen's permission model DTRT.
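> >>
> >> In code, "programming the frame straight into the pagetables" is
> >> roughly the following (modelled on what a Linux PV kernel does; the
> >> exact helpers vary by kernel):
> >>
> >>     /* Map the shared_info MFN at a fixed kernel VA by writing a PTE
> >>      * that references the machine frame directly; Xen validates the
> >>      * PTE, and UVMF_INVLPG flushes the TLB entry for this VA. */
> >>     static struct shared_info *map_shared_info_pv(unsigned long mfn,
> >>                                                   unsigned long va)
> >>     {
> >>         if (HYPERVISOR_update_va_mapping(va, mfn_pte(mfn, PAGE_KERNEL),
> >>                                          UVMF_INVLPG))
> >>             return NULL;
> >>
> >>         return (struct shared_info *)va;
> >>     }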
> >>
> >> Then HVM guests came along.  For reasons of trying to get things
> >> working, they inherited a lot of same interfaces as PV guests, despite
> >> the fundamental differences in the way they work.  One of the biggest
> >> differences was the fact that HVM guests have their gfn=>mfn space
> >> managed by Xen rather than themselves, and in particular, you can no
> >> longer map shared memory structures in the PV way.
> >>
> >> For a shared memory structure to be usable, a mapping has to be put
> >> into the guest's P2M, so the guest can create a regular pagetable
> >> entry pointing at it.  For reasons which are beyond me, Xen doesn't
> >> have any knowledge of the guest's physical layout, yet guests have
> >> arbitrary mutative capabilities over their GFN space, via a hypercall
> >> set with properties such as a return value of "how many items of this
> >> batch succeeded", and replacement semantics rather than error
> >> semantics when trying to modify a GFN which already has something in
> >> it.
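> >>
> >> The HVM way of mapping a shared structure is to ask Xen to plumb the
> >> frame into the guest's own P2M at a guest-chosen GFN, after which a
> >> normal pagetable entry can point at it.  A sketch per the public
> >> memory.h interface:
> >>
> >>     #include <xen/interface/memory.h>
> >>
> >>     static int map_shared_info_hvm(unsigned long gpfn)
> >>     {
> >>         struct xen_add_to_physmap xatp = {
> >>             .domid = DOMID_SELF,
> >>             .space = XENMAPSPACE_shared_info,
> >>             .idx   = 0,
> >>             .gpfn  = gpfn,   /* must be a GFN with nothing in it */
> >>         };
> >>
> >>         /* Silently replaces whatever was at gpfn -- hence the
> >>          * hole-punching dance described below. */
> >>         return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
> >>     }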
> >>
> >> Whatever the reasons, it is commonplace for guests to
> >> decrease_reservation out some RAM to create holes for the shared memory
> >> mappings, because it is the only safe way to avoid irreparably
> >> clobbering something else (especially if you're HVMLoader and in charge
> >> of trying to construct the E820/ACPI tables).
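> >>
> >> Putting the two together, the hole-punching sequence is roughly the
> >> following (again just a sketch, reusing the hypothetical helpers from
> >> above):
> >>
> >>     /* Free the RAM page at a chosen GFN, then map the shared frame
> >>      * into the resulting hole. */
> >>     static int punch_hole_and_map(xen_pfn_t gfn)
> >>     {
> >>         int rc = give_back_one_page(gfn); /* XENMEM_decrease_reservation */
> >>         if (rc)
> >>             return rc;
> >>         return map_shared_info_hvm(gfn);  /* XENMEM_add_to_physmap */
> >>     }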
> >>
> >> tl;dr If you actually prohibit XENMEM_decrease_reservation, HVM guests
> >> don't boot, and that's long before a balloon driver gets up and running.
> > Thanks for the detailed write-up. This explains why I could never get
> > altp2m working from domain start, no matter where in the startup logic
> > of the toolstack I placed the altp2m activation (I had to resort to
> > activating altp2m settings only after detecting that the guest OS had
> > fully booted and things had settled down).
>
> So, in theory it should all work, even from the start.
>
> In practice, the implementation quality of altp2m leaves a lot to be
> desired, and it was designed to have the "all logic inside the guest"
> model, which in practice means that it only ever started once the guest
> had come up sufficiently.
>
> Do you recall more specifically where you tried inserting startup
> logic?  It sounds like something which wants fixing, irrespective of the
> other concerns here.

Right after the xl toolstack calls xc_dom_boot_mem_init, I was trying
to do some funky stuff with gfn remapping in an altp2m view. I
couldn't pinpoint why, but the guest wouldn't boot properly and would
fail at different points shortly afterwards. The nature of the crashes
suggested that the remappings would disappear at some point during the
boot process, so what you say would explain why that happened.

Tamas
