
Re: [PATCH 1/4] xen/dmalloc: Introduce dmalloc() APIs


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 13 Jan 2021 23:16:24 +0000
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, "Tamas K Lengyel" <tamas@xxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 13 Jan 2021 23:16:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05/01/2021 15:56, Jan Beulich wrote:
> On 23.12.2020 17:34, Andrew Cooper wrote:
>> RFC:
>>  * This probably wants to be less fatal in release builds
> I'm not even convinced this wants to be a panic() in debug builds.

Any memory leak spotted by this is an XSA, except in the narrow case of
being due exclusively to a weird and non-default order of hypercalls.

It absolutely needs to be something fatal in debug builds, to avoid
going unnoticed by testing.  Patch 4 is my proposed solution for this,
which will hopefully prevent bugs from escaping staging.

For release builds, a real memory leak is less bad behaviour than taking
the host down, but it certainly shouldn't go unnoticed.
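
Something along these lines would do it (purely illustrative - the
helper name and call site are invented, and not part of this series):

    static void dmalloc_check_leak(struct domain *d, size_t leaked)
    {
        if ( !leaked )
            return;

    #ifndef NDEBUG
        /* Debug builds: fail hard so testing can't miss it. */
        panic("%pd leaked %zu bytes of dmalloc memory\n", d, leaked);
    #else
        /* Release builds: keep the host up, but make plenty of noise. */
        printk(XENLOG_ERR "%pd leaked %zu bytes of dmalloc memory\n",
               d, leaked);
    #endif
    }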

>>  * In an ideal world, we'd also want to count the total number of bytes
>>    allocated from the xmalloc heap, which would be interesting to print
>>    in the 'q' debugkey.  However, that data is fairly invasive to obtain.
> Unless we used an xmem_pool rather than the new interface being
> a thin layer around xmalloc(). (This would then also provide
> better locality of the allocations, i.e. different domains
> wouldn't share allocations from the same page.)

I'm not sure that using a separate memory pool is a clear-cut improvement.

For an attacker with a corruption primitive, a single shared pool
reduces the chance of the exploit being stable across different
systems.  Improved locality of allocations is an advantage from the
attacker's point of view, but the proper way to combat that is with a
real randomised heap allocator.

Sharing within the same page isn't an issue, so long as we respect a
cache line minimum allocation size.
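
(For example - illustrative only, although ROUNDUP() and SMP_CACHE_BYTES
do already exist in the tree, whether dmalloc ends up doing this is TBD:

    size = ROUNDUP(size, SMP_CACHE_BYTES);

i.e. pad every allocation to a cache line boundary, so two domains never
share a line even when they share a page.)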

More importantly, however, until we know exactly how much memory we're
talking about here, switching to a per-domain pool might be a massive
waste.  The xmalloc() allocations are in the noise compared to RAM, and
might only be a small multiple of the pool metadata to begin with.

> And even without doing so, adding a function to retrieve the actual size
> shouldn't be all that difficult - internally xmalloc_tlsf.c
> knows the size, after all, for e.g. xrealloc() to work right.

Yeah - the internals of the pool can calculate this.  I was considering
doing exactly that, but wasn't sure how well exposing an API for it would
go down.

For maximum flexibility, it would be my preferred way forward.
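
Something like this is what I had in mind (name and exact shape
hypothetical - nothing of the sort exists yet):

    /* Return the usable size of an xmalloc()'d allocation. */
    size_t xmalloc_size(const void *p);

which would let dmalloc maintain a per-domain byte count without poking
at the pool internals:

    d->dmalloc_bytes += xmalloc_size(ptr);  /* in _dzalloc() */
    d->dmalloc_bytes -= xmalloc_size(ptr);  /* in dfree() */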

>> --- /dev/null
>> +++ b/xen/include/xen/dmalloc.h
>> @@ -0,0 +1,29 @@
>> +#ifndef XEN_DMALLOC_H
>> +#define XEN_DMALLOC_H
>> +
>> +#include <xen/types.h>
>> +
>> +struct domain;
>> +
>> +#define dzalloc_array(d, _type, _num)                                   \
> While I realize I'll get bashed again, the inconsistency of using
> (or not) leading underscores is too odd to not comment upon. I
> don't see what use they are here, irrespective of my general view
> on the topic.
>
>> +    ((_type *)_dzalloc_array(d, sizeof(_type), __alignof__(_type), _num))
>> +
>> +
>> +void dfree(struct domain *d, void *ptr);
> May I ask to avoid double blank lines?

Both of these were just copied from xmalloc.h without any real thought. 
What I did forget was to plaster this series with RFC.

>> +#define DFREE(d, p)                             \
>> +    do {                                        \
>> +        dfree(d, p);                            \
>> +        (p) = NULL;                             \
>> +    } while ( 0 )
>> +
>> +
>> +void *_dzalloc(struct domain *d, size_t size, size_t align);
>> +
>> +static inline void *_dzalloc_array(struct domain *d, size_t size,
>> +                                   size_t align, size_t num)
>> +{
>> +    return _dzalloc(d, size * num, align);
> No protection at all against the multiplication overflowing?

Well - xmalloc doesn't have any either.  But yes - both want some
compiler-aided overflow detection, rather than messing around with
arbitrary limits pretending to be an overflow check.
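
Something like this (sketch only, not what's in the series today):

    static inline void *_dzalloc_array(struct domain *d, size_t size,
                                       size_t align, size_t num)
    {
        size_t bytes;

        /* Refuse the allocation outright if size * num overflows. */
        if ( __builtin_mul_overflow(size, num, &bytes) )
            return NULL;

        return _dzalloc(d, bytes, align);
    }

__builtin_mul_overflow() is available from GCC 5.1 and recent Clang, so
whether we can use it unconditionally depends on our minimum toolchain.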

~Andrew