Re: [Xen-devel] [PATCH v8 3/5] xen/mem_sharing: VM forking
On Fri, Feb 21, 2020 at 7:42 AM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>
> On 10/02/2020 19:21, Tamas K Lengyel wrote:
> > +static int mem_sharing_fork(struct domain *d, struct domain *cd)
> > +{
> > +    int rc = -EINVAL;
> > +
> > +    if ( !cd->controller_pause_count )
> > +        return rc;
> > +
> > +    /*
> > +     * We only want to get and pause the parent once, not each time this
> > +     * operation is restarted due to preemption.
> > +     */
> > +    if ( !cd->parent_paused )
> > +    {
> > +        ASSERT(get_domain(d));
> > +        domain_pause(d);
> > +
> > +        cd->parent_paused = true;
> > +        cd->max_pages = d->max_pages;
> > +        cd->max_vcpus = d->max_vcpus;
>
> Sorry, I spoke too soon. You can't modify max_vcpus here, because it
> violates the invariant that domain_vcpu() depends upon for safety.
>
> If the toolstack gets things wrong, Xen will either leak struct vcpu's
> on cd's teardown, or corrupt memory beyond the end of the cd->vcpu[] array.
>
> Looking at the hypercall semantics, userspace creates a new domain
> (which specifies max_vcpus), then calls mem_sharing_fork(parent_dom,
> new_dom). Forking should be rejected if the toolstack hasn't chosen the
> same number of vcpus for the new domain.

That's unfortunate, since it would require an extra hypercall just to get
information Xen already has. Instead of what you recommend, I think what
I'll do is extend XEN_DOMCTL_createdomain to include the parent domain's
ID, so that Xen can gather this information automatically without the
toolstack having to do it in this roundabout way.

>
> This raises the question of whether the same should be true for
> max_pages as well.

Could you expand on this?

Thanks,
Tamas
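
For illustration, here is a minimal sketch of the rejection Andrew is
suggesting, using only the field names visible in the quoted hunk
(cd->max_vcpus, d->max_vcpus). This is not the actual follow-up patch, just
one way the fork path could bail out instead of overwriting the value chosen
at domain-creation time:

    /*
     * Illustrative sketch only, not the real follow-up change: reject the
     * fork rather than overwriting cd->max_vcpus, because cd->vcpu[] was
     * sized from the max_vcpus passed to XEN_DOMCTL_createdomain, and
     * domain_vcpu() depends on that value matching the array.
     */
    if ( cd->max_vcpus != d->max_vcpus )
        return -EINVAL;

Whether cd->max_pages deserves the same treatment is the open question at
the end of Andrew's mail.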