
RE: [PATCH v6 2/9] xen: do not free reserved memory into heap


  • To: Julien Grall <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Penny Zheng <Penny.Zheng@xxxxxxx>
  • Date: Mon, 13 Jun 2022 03:27:56 +0000
  • Accept-language: en-US
  • Cc: Wei Chen <Wei.Chen@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Mon, 13 Jun 2022 03:28:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v6 2/9] xen: do not free reserved memory into heap

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xxxxxxx>
> Sent: Thursday, June 9, 2022 5:22 PM
> To: Penny Zheng <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Wei Chen <Wei.Chen@xxxxxxx>; Stefano Stabellini
> <sstabellini@xxxxxxxxxx>; Bertrand Marquis <Bertrand.Marquis@xxxxxxx>;
> Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxx>;
> Jan Beulich <jbeulich@xxxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into heap
> 
> Hi,
> 
> On 09/06/2022 06:54, Penny Zheng wrote:
> >
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xxxxxxx>
> >> Sent: Tuesday, June 7, 2022 5:13 PM
> >> To: Penny Zheng <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> >> Cc: Wei Chen <Wei.Chen@xxxxxxx>; Stefano Stabellini
> >> <sstabellini@xxxxxxxxxx>; Bertrand Marquis
> >> <Bertrand.Marquis@xxxxxxx>; Volodymyr Babchuk
> >> <Volodymyr_Babchuk@xxxxxxxx>; Andrew Cooper
> >> <andrew.cooper3@xxxxxxxxxx>; George Dunlap
> >> <george.dunlap@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Wei Liu
> >> <wl@xxxxxxx>
> >> Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into
> >> heap
> >>
> >> Hi Penny,
> >>
> >
> > Hi Julien
> >
> >> On 07/06/2022 08:30, Penny Zheng wrote:
> >>> Pages used as guest RAM for a static domain shall be reserved to this
> >>> domain only.
> >>> So in case reserved pages get used for another purpose, users shall
> >>> not free them back to the heap, even when the last ref gets dropped.
> >>>
> >>> free_staticmem_pages will be called by free_heap_pages at runtime
> >>> when a static domain frees memory resources, so let's drop the __init flag.
> >>>
> >>> Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
> >>> ---
> >>> v6 changes:
> >>> - adapt to PGC_static
> >>> - remove #ifdef around function declaration
> >>> ---
> >>> v5 changes:
> >>> - In order to avoid stub functions, we #define PGC_staticmem to
> >>> non-zero only when CONFIG_STATIC_MEMORY
> >>> - use "unlikely()" around pg->count_info & PGC_staticmem
> >>> - remove pointless "if", since mark_page_free() is going to set
> >>> count_info to PGC_state_free and by consequence clear PGC_staticmem
> >>> - move #define PGC_staticmem 0 to mm.h
> >>> ---
> >>> v4 changes:
> >>> - no changes
> >>> ---
> >>> v3 changes:
> >>> - fix possible racy issue in free_staticmem_pages()
> >>> - introduce a stub free_staticmem_pages() for the
> >>> !CONFIG_STATIC_MEMORY case
> >>> - move the change to free_heap_pages() to cover other potential call
> >>> sites
> >>> - fix the indentation
> >>> ---
> >>> v2 changes:
> >>> - new commit
> >>> ---
> >>>    xen/arch/arm/include/asm/mm.h |  4 +++-
> >>>    xen/common/page_alloc.c       | 12 +++++++++---
> >>>    xen/include/xen/mm.h          |  2 --
> >>>    3 files changed, 12 insertions(+), 6 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/include/asm/mm.h
> >>> b/xen/arch/arm/include/asm/mm.h
> >>> index fbff11c468..7442893e77 100644
> >>> --- a/xen/arch/arm/include/asm/mm.h
> >>> +++ b/xen/arch/arm/include/asm/mm.h
> >>> @@ -108,9 +108,11 @@ struct page_info
> >>>      /* Page is Xen heap? */
> >>>    #define _PGC_xen_heap     PG_shift(2)
> >>>    #define PGC_xen_heap      PG_mask(1, 2)
> >>> -  /* Page is static memory */
> >>
> >> NITpicking: You added this comment in patch #1 and are now removing the
> >> space.
> >> Any reason to drop the space?
> >>
> >>> +#ifdef CONFIG_STATIC_MEMORY
> >>
> >> I think this change ought to be explained in the commit message.
> >> AFAIU, this is necessary to allow the compiler to remove code and
> >> avoid linking issues. Is that correct?
> >>
> >>> +/* Page is static memory */
> >>>    #define _PGC_static    PG_shift(3)
> >>>    #define PGC_static     PG_mask(1, 3)
> >>> +#endif
> >>>    /* ... */
> >>>    /* Page is broken? */
> >>>    #define _PGC_broken       PG_shift(7)
> >>> diff --git a/xen/common/page_alloc.c
> >>> index 9e5c757847..6876869fa6 100644
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -1443,6 +1443,13 @@ static void free_heap_pages(
> >>>
> >>>        ASSERT(order <= MAX_ORDER);
> >>>
> >>> +    if ( unlikely(pg->count_info & PGC_static) )
> >>> +    {
> >>> +        /* Pages of static memory shall not go back to the heap. */
> >>> +        free_staticmem_pages(pg, 1UL << order, need_scrub);
> >> I can't remember whether I asked this before (I couldn't find a thread).
> >>
> >> free_staticmem_pages() doesn't seem to be protected by any lock. So
> >> how do you prevent concurrent access to the page info from the acquire
> >> part?
> >
> > True, last time you suggested that rsv_page_list needs to be protected
> > with a spinlock (most likely d->page_alloc_lock). I hadn't thought it
> > through thoroughly, sorry about that.
> > So for the freeing part, I shall take the lock in arch_free_heap_page(),
> > where we insert the page into the rsv_page_list, and release the lock at
> > the end of free_staticmem_pages().
> 
> In general, it is better for a lock to be taken and released in the same
> function because it is easier to verify. However, I am not sure that
> extending the d->page_alloc_lock locking up to after free_heap_pages() is
> right.
> 
> The first reason being that there are other callers of free_heap_pages().
> So now all the callers of the helper would need to know whether they need
> to hold d->page_alloc_lock.
> 
> Secondly, free_staticmem_pages() is meant to be the reverse of
> prepare_staticmem_pages(). We should prevent both of them from being called
> concurrently. It sounds strange to use the d->page_alloc_lock to protect it
> (a page technically does not belong to a domain at this point).
> 
> To me it looks like we want to add the pages to the rsv_page_list
> *after* the pages have been freed. That way we know that all the pages on
> that list have been marked as freed (i.e. free_staticmem_pages() has
> completed).
> 

Yes! That fixes everything!
If we add the pages to the rsv_page_list *after* the pages have been freed, we
can make sure that any page acquired from the rsv_page_list has been completely
freed.
Hmmm, so in that case, do we still need to add a lock here?
We can already make sure that a page acquired from the rsv_page_list is
completely freed, even without the lock.
Or, in fact, do we use the lock so that prepare_staticmem_pages() does not fail
if free_staticmem_pages() happens concurrently?
Trying to understand the intent of the lock here more clearly; hope it's not a
dumb question~
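
To make sure I follow, here is a rough sketch of the ordering I have in mind.
The helper name put_static_pages() is just a placeholder for illustration, and
the rsv_page_list field name follows this discussion; none of this is code
from the series as posted:

/*
 * Sketch only: free the static pages first, and only afterwards expose
 * them on the per-domain rsv_page_list, so that any page found on that
 * list is guaranteed to be completely freed.
 */
static void put_static_pages(struct domain *d, struct page_info *pg,
                             unsigned int order, bool need_scrub)
{
    unsigned long i;

    /* Mark the pages as freed; PGC_static stays set on them. */
    free_staticmem_pages(pg, 1UL << order, need_scrub);

    /* Only now add the pages to the reserved list. */
    spin_lock(&d->page_alloc_lock);
    for ( i = 0; i < (1UL << order); i++ )
        page_list_add_tail(pg + i, &d->rsv_page_list);
    spin_unlock(&d->page_alloc_lock);
}

With that ordering, a consumer walking the rsv_page_list never sees a page
whose freeing is still in progress.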

> In addition to that, we would need the code in free_staticmem_pages() to be
> protected by the heap_lock (at least so it matches
> prepare_staticmem_pages()).
> 
> Any thoughts?
> 
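
For the heap_lock part, this is roughly what I picture for
free_staticmem_pages(); it is only a rough sketch mirroring
prepare_staticmem_pages(), not the code from this series, and the synchronous
scrub_one_page() call is my assumption here:

void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                          bool need_scrub)
{
    mfn_t mfn = page_to_mfn(pg);
    unsigned long i;

    /* Serialise against prepare_staticmem_pages(), which also takes
     * the heap_lock. */
    spin_lock(&heap_lock);

    for ( i = 0; i < nr_mfns; i++ )
    {
        mark_page_free(&pg[i], mfn_add(mfn, i));

        if ( need_scrub )
            scrub_one_page(&pg[i]);

        /* Re-set PGC_static (mark_page_free() cleared it), so the page
         * never goes back to the heap. */
        pg[i].count_info |= PGC_static;
    }

    spin_unlock(&heap_lock);
}

Does that match what you have in mind?
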
> Cheers,
> 
> --
> Julien Grall

 

