
Re: [RFC PATCH] xen/memory: Introduce a hypercall to provide unallocated space


  • To: Julien Grall <julien@xxxxxxx>, Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 28 Jul 2021 20:00:51 +0100
  • Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, "Stefano Stabellini" <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>
  • Delivery-date: Wed, 28 Jul 2021 19:01:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 28/07/2021 18:27, Julien Grall wrote:
> Hi Andrew,
>
> On 28/07/2021 18:19, Andrew Cooper wrote:
>> On 28/07/2021 17:18, Oleksandr Tyshchenko wrote:
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>
>>> Add XENMEM_get_unallocated_space hypercall which purpose is to
>>> query hypervisor to find regions of guest physical address space
>>> which are unused and can be used to create grant/foreign mappings
>>> instead of wasting real pages from the domain memory for
>>> establishing these mappings. The problem with the current Linux
>>> on Xen on Arm behaviour is if we want to map some guest memory
>>> regions in advance or to perform cache mappings in the backend
>>> we might run out of memory in the host (see XSA-300).
>>> This of course, depends on the both host and guest memory sizes.
>>>
>>> The "unallocated space" can't be figured out precisely by
>>> the domain on Arm without hypervisor involvement:
>>> - not all device I/O regions are known by the time domain starts
>>>    creating grant/foreign mappings
>>> - the Dom0 is not aware of memory regions used for the identity
>>>    mappings needed for the PV drivers to work
>>> In both cases we might end up re-using these regions by
>>> a mistake. So, the hypervisor which maintains the P2M for the domain
>>> is in the best position to provide "unallocated space".
>>
>> I'm afraid this does not improve the situation.
>>
>> If a guest follows the advice from XENMEM_get_unallocated_space, and
>> subsequently a new IO or identity region appears, everything will
>> explode, because the "safe area" wasn't actually safe.
>>
>> The safe range *must* be chosen by the toolstack, because nothing else
>> can do it safely or correctly.
>
> The problem is how do you size it? In particular, a backend may map
> the same page multiple times (for instance if the page is granted twice).

The number of mapped grants is limited by the size of the maptrack table
in Xen, which is a toolstack input to the domain-create hypercall.
Therefore, the amount of space required is known and bounded.

There are a handful of other frames required in the current ABI (shared
info, vcpu info, etc).

The area where things do become fuzzy is foreign mappings,
acquire_resource, etc. for the control domain, which are effectively
unbounded from the domain's point of view.

For those, it's entirely fine to say "here's 128G of safe mapping space"
or so.  Even the quantity of mappings dom0 can make is limited by the
shadow memory pool and the number of pagetables Xen is willing to expend
on the second-stage translation tables.

>
>>
>> Once a safe range (or ranges) has been chosen, any subsequent action
>> which overlaps with the ranges must be rejected, as it will violate the
>> guarantees provided.
>>
>> Furthermore, the ranges should be made available to the guest via normal
>> memory map means.  On x86, this is via the E820 table, and on ARM I
>> presume the DTB.  There is no need for a new hypercall.
>
> Device-Tree only works if you have a guest using it. How about ACPI?

ACPI inherits E820 from x86 (it's a trivial format), and the UEFI memory
map was also based on it.

But whichever...  All firmware interfaces have a memory map.

> To me the hypercall solution at least:
>   1) Avoids having to define the region in every single firmware table

There is only ever one.

>   2) Allows the region to be easily extended after the guest has started

The safe ranges can't be changed (safely).  This is the same problem as
needing to know things like your PCI apertures ahead of time, or where
the DIMM hotplug regions are.

Having the guest physmap be actually dynamic is the cause of so many
bugs (inc. security) and misfeatures in Xen.  Guests cannot and do not
cope with things being fully dynamic, because that's not how real
hardware works.  What you get is layers and layers of breakage on top of
each other, rather than a working system.

The size of the mapping space is a limit, just like maxphysaddr, or the
PCI apertures, or MMCFG space, etc.  You can make it large by default
(as it doesn't consume resources when not being used), but no guest OS
is going to tolerate it morphing at runtime.

~Andrew
