
RE: [PATCH v3 5/6] xen/arm: unpopulate memory when domain is static



On Wed, 27 Apr 2022, Penny Zheng wrote:
> > Hi Penny,
> > 
> > On 27/04/2022 11:19, Penny Zheng wrote:
> > >>> +/*
> > >>> + * Put free pages on the resv page list after having taken them
> > >>> + * off the "normal" page list, when pages are from static memory
> > >>> + */
> > >>> +#ifdef CONFIG_STATIC_MEMORY
> > >>> +#define arch_free_heap_page(d, pg) ({                   \
> > >>> +    page_list_del(pg, page_to_list(d, pg));             \
> > >>> +    if ( (pg)->count_info & PGC_reserved )              \
> > >>> +        page_list_add_tail(pg, &(d)->resv_page_list);   \
> > >>> +})
> > >>> +#endif
> > >>
> > >> I am a bit puzzled how this is meant to work.
> > >>
> > >> Looking at the code, arch_free_heap_page() will be called from
> > >> free_domheap_pages(). If I am not mistaken, reserved pages are not
> > >> considered xen heap pages, so we would go into the else branch,
> > >> which ends up calling free_heap_pages().
> > >>
> > >> free_heap_pages() will end up adding the page to the heap allocator
> > >> and corrupting d->resv_page_list, because there is only one linked list.
> > >>
> > >> What did I miss?
> > >>
> > >
> > > In my first commit, "do not free reserved memory into heap", I
> > > changed the behavior for reserved pages in free_heap_pages():
> > > +    if ( pg->count_info & PGC_reserved )
> > > +        /* Reserved page shall not go back to the heap. */
> > > +        return free_staticmem_pages(pg, 1UL << order, need_scrub);
> > > +
> > 
> > Hmmm... somehow this e-mail is neither in my inbox nor in the archives on
> > lore.kernel.org.
> > 
> 
> Oh.... I just got an email from Tessian saying that it held my first
> commit and needed my confirmation to send it. So sorry about that!!!
> 
> I'll re-send my first commit ASAP.

Just FYI, I still cannot see the first patch anywhere in my inbox.
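
To make sure we are all talking about the same flow, here is a minimal
standalone sketch of how I understand the two pieces are meant to fit
together. The types and list helpers below are toy stand-ins, not the
real Xen structures or function signatures; only the function and field
names (arch_free_heap_page, free_heap_pages, free_staticmem_pages,
PGC_reserved, resv_page_list) come from the patches being discussed.

/*
 * Sketch: arch_free_heap_page() moves a PGC_reserved page onto
 * d->resv_page_list, and the hunk from the first commit makes
 * free_heap_pages() hand it to free_staticmem_pages() instead of
 * putting it back on the heap allocator.
 */
#include <stdio.h>

#define PGC_reserved (1u << 0)

struct page_info {
    unsigned int count_info;
    struct page_info *next;           /* toy singly-linked list link */
};

struct page_list {
    struct page_info *head;
};

struct domain {
    struct page_list page_list;       /* "normal" page list */
    struct page_list resv_page_list;  /* reserved (static) pages */
};

/* toy stand-ins for page_list_del()/page_list_add_tail() */
static void list_del(struct page_list *l, struct page_info *pg)
{
    struct page_info **pp = &l->head;

    while ( *pp && *pp != pg )
        pp = &(*pp)->next;
    if ( *pp )
        *pp = pg->next;
    pg->next = NULL;
}

static void list_add_tail(struct page_list *l, struct page_info *pg)
{
    struct page_info **pp = &l->head;

    while ( *pp )
        pp = &(*pp)->next;
    *pp = pg;
    pg->next = NULL;
}

/* Take the page off the domain page list; if it is reserved, park it
 * on d->resv_page_list (what the arch_free_heap_page() macro does). */
static void arch_free_heap_page(struct domain *d, struct page_info *pg)
{
    list_del(&d->page_list, pg);
    if ( pg->count_info & PGC_reserved )
        list_add_tail(&d->resv_page_list, pg);
}

static void free_staticmem_pages(struct page_info *pg)
{
    (void)pg;
    printf("free_staticmem_pages: page stays out of the heap\n");
}

/* The check added by the first commit: reserved pages bail out before
 * ever touching the buddy allocator. */
static void free_heap_pages(struct page_info *pg)
{
    if ( pg->count_info & PGC_reserved )
    {
        free_staticmem_pages(pg);
        return;
    }
    printf("free_heap_pages: page returned to the heap allocator\n");
}

/* free_domheap_pages(), reduced to the path relevant here */
static void free_domheap_pages(struct domain *d, struct page_info *pg)
{
    arch_free_heap_page(d, pg);
    free_heap_pages(pg);
}

int main(void)
{
    struct domain d = { { NULL }, { NULL } };
    struct page_info pg = { PGC_reserved, NULL };

    list_add_tail(&d.page_list, &pg);
    free_domheap_pages(&d, &pg);

    /* the page ends up on d.resv_page_list, never back in the heap */
    printf("on resv_page_list: %s\n",
           d.resv_page_list.head == &pg ? "yes" : "no");
    return 0;
}

With both changes applied, a reserved page freed via free_domheap_pages()
is parked on d->resv_page_list and free_heap_pages() never links it into
the heap allocator, so the single list link is not reused and nothing is
corrupted. Whether that matches the actual first commit is exactly what
we still need to see on the list.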
