
Re: [PATCH] x86/hvm: Fix double free from vlapic_init() early error path


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 31 Mar 2021 15:57:57 +0200
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 31 Mar 2021 13:58:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Mar 31, 2021 at 02:56:27PM +0100, Andrew Cooper wrote:
> On 31/03/2021 14:49, Roger Pau Monné wrote:
> > On Wed, Mar 31, 2021 at 02:31:25PM +0100, Andrew Cooper wrote:
> >> vlapic_init()'s caller calls vlapic_destroy() on error.  Therefore, the
> >> error path from __map_domain_page_global() failing would doubly free
> >> vlapic->regs_page.
> >>
> >> Rework vlapic_destroy() to be properly idempotent, introducing the
> >> necessary UNMAP_DOMAIN_PAGE_GLOBAL() and FREE_DOMHEAP_PAGE() wrappers.
> >>
> >> Rearrange vlapic_init() to group all trivial initialisation, and leave all
> >> cleanup to the caller, in line with our longer term plans.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> >> ---
> >> CC: Jan Beulich <JBeulich@xxxxxxxx>
> >> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> >> CC: Wei Liu <wl@xxxxxxx>
> >> ---
> >>  xen/arch/x86/hvm/vlapic.c     | 11 ++++-------
> >>  xen/include/xen/domain_page.h |  5 +++++
> >>  xen/include/xen/mm.h          |  6 ++++++
> >>  3 files changed, 15 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> >> index 5e21fb4937..da030f41b5 100644
> >> --- a/xen/arch/x86/hvm/vlapic.c
> >> +++ b/xen/arch/x86/hvm/vlapic.c
> >> @@ -1608,7 +1608,9 @@ int vlapic_init(struct vcpu *v)
> >>          return 0;
> >>      }
> >>  
> >> +    /* Trivial init. */
> >>      vlapic->pt.source = PTSRC_lapic;
> >> +    spin_lock_init(&vlapic->esr_lock);
> >>  
> >>      vlapic->regs_page = alloc_domheap_page(v->domain, MEMF_no_owner);
> >>      if ( !vlapic->regs_page )
> >> @@ -1616,17 +1618,12 @@ int vlapic_init(struct vcpu *v)
> >>  
> >>      vlapic->regs = __map_domain_page_global(vlapic->regs_page);
> >>      if ( vlapic->regs == NULL )
> >> -    {
> >> -        free_domheap_page(vlapic->regs_page);
> >>          return -ENOMEM;
> >> -    }
> >>  
> >>      clear_page(vlapic->regs);
> >>  
> >>      vlapic_reset(vlapic);
> >>  
> >> -    spin_lock_init(&vlapic->esr_lock);
> >> -
> >>      tasklet_init(&vlapic->init_sipi.tasklet, vlapic_init_sipi_action, v);
> >>  
> >>      if ( v->vcpu_id == 0 )
> >> @@ -1645,8 +1642,8 @@ void vlapic_destroy(struct vcpu *v)
> >>      tasklet_kill(&vlapic->init_sipi.tasklet);
> >>      TRACE_0D(TRC_HVM_EMUL_LAPIC_STOP_TIMER);
> >>      destroy_periodic_time(&vlapic->pt);
> >> -    unmap_domain_page_global(vlapic->regs);
> >> -    free_domheap_page(vlapic->regs_page);
> >> +    UNMAP_DOMAIN_PAGE_GLOBAL(vlapic->regs);
> > I think you need to check whether vlapic->regs_page is NULL here...
> >
> >> +    FREE_DOMHEAP_PAGE(vlapic->regs_page);
> >>  }
> >>  
> >>  /*
> >> diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
> >> index a182d33b67..0cb7f2aad3 100644
> >> --- a/xen/include/xen/domain_page.h
> >> +++ b/xen/include/xen/domain_page.h
> >> @@ -77,4 +77,9 @@ static inline void unmap_domain_page_global(const void *va) {};
> >>      (p) = NULL;                     \
> >>  } while ( false )
> >>  
> >> +#define UNMAP_DOMAIN_PAGE_GLOBAL(p) do {   \
> >> +    unmap_domain_page_global(p);           \
> >> +    (p) = NULL;                            \
> >> +} while ( false )
> >> +
> >>  #endif /* __XEN_DOMAIN_PAGE_H__ */
> >> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> >> index 667f9dac83..c274e2eac4 100644
> >> --- a/xen/include/xen/mm.h
> >> +++ b/xen/include/xen/mm.h
> >> @@ -85,6 +85,12 @@ bool scrub_free_pages(void);
> >>  } while ( false )
> >>  #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
> >>  
> >> +#define FREE_DOMHEAP_PAGES(p, o) do { \
> >> +    free_domheap_pages(p, o);         \
> > ...as neither unmap_domain_page_global nor free_domheap_pages supports
> > being passed a NULL pointer.
> >
> > Passing such a NULL pointer will result in unmap_domain_page_global
> > hitting an assert AFAICT, and free_domheap_pages will try to use
> > page_get_owner(NULL).
> 
> Urgh - very good points.
> 
> Do we perhaps want to take the opportunity to make these functions
> tolerate NULL, to simplify all cleanup code across the hypervisor?

Yes please, I'd prefer that rather than open-coding the check in
FREE_DOMHEAP_PAGES/UNMAP_DOMAIN_PAGE_GLOBAL (or in the callers).

Thanks, Roger.
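
For reference, the approach agreed above is to make the underlying helpers
tolerate NULL, so that the pointer-zeroing wrapper macros, and hence
vlapic_destroy(), stay idempotent without open-coded checks.  The following
standalone sketch illustrates that pattern; it is not the code as committed
to Xen.  The fake_* names are hypothetical, and plain malloc()/free(), which
already accepts NULL, stands in for the Xen allocators and mapping helpers.

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/*
 * Stand-in wrapper in the style of FREE_DOMHEAP_PAGE() /
 * UNMAP_DOMAIN_PAGE_GLOBAL(): release the resource and zero the pointer,
 * relying on the underlying helper (here plain free()) tolerating NULL.
 */
#define FREE_AND_NULL(p) do { \
    free(p);                  \
    (p) = NULL;               \
} while ( false )

struct fake_vlapic {
    void *regs_page;
    void *regs;
};

/* Safe to call on a partially initialised or already destroyed object. */
static void fake_destroy(struct fake_vlapic *vlapic)
{
    FREE_AND_NULL(vlapic->regs);
    FREE_AND_NULL(vlapic->regs_page);
}

/* On any failure, no local cleanup: the caller invokes fake_destroy(). */
static int fake_init(struct fake_vlapic *vlapic)
{
    memset(vlapic, 0, sizeof(*vlapic));

    vlapic->regs_page = malloc(4096);
    if ( !vlapic->regs_page )
        return -1;

    vlapic->regs = malloc(4096);
    if ( !vlapic->regs )
        return -1;

    return 0;
}

int main(void)
{
    struct fake_vlapic vlapic;
    int rc = fake_init(&vlapic);

    /*
     * Whether init succeeded or failed partway, a single destroy call
     * frees everything exactly once, and a second call is a no-op.
     */
    fake_destroy(&vlapic);
    fake_destroy(&vlapic);

    return rc ? EXIT_FAILURE : EXIT_SUCCESS;
}

Applied to the real helpers, the same shape would amount to an early
"if ( pg == NULL ) return;" style guard at the top of free_domheap_pages()
and unmap_domain_page_global(), letting FREE_DOMHEAP_PAGE() and
UNMAP_DOMAIN_PAGE_GLOBAL() be called unconditionally from cleanup paths.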
