[Xen-devel] Ping: [PATCH v2 1/2] x86/PoD: correctly handle non-order-0 decrease-reservation requests
>>> On 20.12.17 at 10:34, <JBeulich@xxxxxxxx> wrote:
> p2m_pod_decrease_reservation() at the moment only returns a boolean
> value: true for "nothing more to do", false for "something more to do".
> If it returns false, decrease_reservation() will loop over the entire
> range, calling guest_remove_page() for each page.
>
> Unfortunately, in the case p2m_pod_decrease_reservation() succeeds
> partially, some of the memory in the range will be not-present; at which
> point guest_remove_page() will return an error, and the entire operation
> will fail.
>
> Fix this by:
> 1. Having p2m_pod_decrease_reservation() return exactly the number of
>    gpfn pages it has handled (i.e., replaced with 'not present').
> 2. Making guest_remove_page() return -ENOENT in the case that the gpfn
>    in question was already empty (and in no other cases).
> 3. When looping over guest_remove_page(), expect the number of -ENOENT
>    failures to be no larger than the number of pages
>    p2m_pod_decrease_reservation() removed.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> ---
> v2: Re-written description (by George). Add comments (as suggested
>     by George). Formatting.
>
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -388,10 +388,10 @@ int guest_physmap_mark_populate_on_deman
>      return -ENOSYS;
>  }
>  
> -int p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
> -                                 unsigned int order)
> +unsigned long p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
> +                                            unsigned int order)
>  {
> -    return -ENOSYS;
> +    return 0;
>  }
>  
>  static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)

Stefano, Julien?

Jan
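[Editor's sketch] For readers following the thread, the accounting rule in point 3 can be illustrated with a minimal, self-contained C sketch. This is not the Xen code: the toy p2m[] array, remove_page() and pod_handled below are hypothetical stand-ins for the guest p2m, guest_remove_page() and the count returned by p2m_pod_decrease_reservation(); only the tolerate-at-most-N -ENOENT logic mirrors the described fix.

/*
 * Hedged sketch (not Xen source): a caller loops over a gfn range and
 * tolerates at most pod_handled -ENOENT results, i.e. at most as many
 * "already not present" pages as PoD reported having replaced.
 */
#include <errno.h>
#include <stdio.h>

#define NR_PAGES 8

/* Toy stand-in for the guest p2m: 1 = populated, 0 = already empty. */
static int p2m[NR_PAGES] = { 1, 1, 0, 0, 0, 1, 1, 1 };

/* Stand-in for guest_remove_page(): -ENOENT only if already empty. */
static int remove_page(unsigned int gfn)
{
    if ( !p2m[gfn] )
        return -ENOENT;
    p2m[gfn] = 0;
    return 0;
}

int main(void)
{
    unsigned long pod_handled = 3;  /* count reported by the PoD stand-in */
    unsigned long enoent = 0;
    unsigned int gfn;
    int ret = 0;

    for ( gfn = 0; gfn < NR_PAGES; gfn++ )
    {
        int rc = remove_page(gfn);

        if ( rc == -ENOENT && ++enoent <= pod_handled )
            continue;           /* expected: PoD already emptied this page */
        if ( rc )
        {
            ret = rc;           /* any other failure (or excess -ENOENT) is real */
            break;
        }
    }

    printf("ret = %d, tolerated %lu -ENOENT\n", ret, enoent);
    return 0;
}

With the data above the loop completes with ret = 0 after tolerating exactly three -ENOENT results; a fourth empty page would make the operation fail, which matches the bound the patch description sets.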
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel