
Re: [PATCH v9 01/11] xen/memory: Fix mapping grant tables with XENMEM_acquire_resource

  • To: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 4 Feb 2021 21:23:31 +0000
  • Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Paul Durrant <paul@xxxxxxx>, Michał Leszczyński <michal.leszczynski@xxxxxxx>, Hubert Jasudowicz <hubert.jasudowicz@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 04 Feb 2021 21:23:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 01/02/2021 23:26, Andrew Cooper wrote:
> A guest's default number of grant frames is 64, and XENMEM_acquire_resource
> will reject an attempt to map more than 32 frames.  This limit is caused by
> the size of mfn_list[] on the stack.
> Fix mapping of arbitrary size requests by looping over batches of 32 in
> acquire_resource(), and using hypercall continuations when necessary.
> To start with, break _acquire_resource() out of acquire_resource() to cope
> with type-specific dispatching, and update the return semantics to indicate
> the number of mfns returned.  Update gnttab_acquire_resource() and x86's
> arch_acquire_resource() to match these new semantics.
> Have do_memory_op() pass start_extent into acquire_resource() so it can pick
> up where it left off after a continuation, and loop over batches of 32 until
> all the work is done, or a continuation needs to occur.
> compat_memory_op() is a bit more complicated, because it also has to marshal
> frame_list in the XLAT buffer.  Have it account for continuation information
> itself and hide details from the upper layer, so it can marshal the buffer in
> chunks if necessary.
> With these fixes in place, it is now possible to map the whole grant table for
> a guest.
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> CC: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> CC: Ian Jackson <iwj@xxxxxxxxxxxxxx>
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> CC: Wei Liu <wl@xxxxxxx>
> CC: Julien Grall <julien@xxxxxxx>
> CC: Paul Durrant <paul@xxxxxxx>
> CC: Michał Leszczyński <michal.leszczynski@xxxxxxx>
> CC: Hubert Jasudowicz <hubert.jasudowicz@xxxxxxx>
> CC: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
> v9:
>  * Crash domain rather than returning late with -ERANGE/-EFAULT.
> v8:
>  * nat => cmp change in the start_extent check.
>  * Rebase over 'frame' and ARM/IOREQ series.
> v3:
>  * Spelling fixes
> ---
>  xen/common/compat/memory.c | 114 +++++++++++++++++++++++++++++++++--------
>  xen/common/grant_table.c   |   3 ++
>  xen/common/memory.c        | 124 +++++++++++++++++++++++++++++++++------------
>  3 files changed, 187 insertions(+), 54 deletions(-)

Attempt at release-ack paperwork.

This is a bugfix for an issue which doesn't manifest with the in-tree
default callers, but does manifest when using the
xenforeignmemory_map_resource() interface in the expected manner.

The hypercall consists of a metadata structure and an array of frames.
The bug is that Xen only tolerates a maximum of 32 frames; the bugfix is
to accept an arbitrary number of frames.

What can go wrong (other than the theoretical base case of everything,
seeing as we're talking about C in system context)?

The bugfix is basically "do { chunk_of_32(); } while ( !done );", so
we're adding an extra loop to the hypervisor.  We could fail to
terminate the loop (a possible livelock in the hypervisor), or we could
incorrectly marshal the buffer (a guest kernel might receive junk
instead of the mapping it expected).

The majority of the complexity actually comes from the fact that there
are two nested loops: one in the compat layer doing 32=>64 (and back)
marshalling, and one in the main layer looping over chunks of 32
frames.  Therefore, the same risks apply at both layers.

I am certain the code is not bug free.  The compat layer here is
practically impossible to follow, and has (self-inflicted) patterns
where we have to crash the guest rather than raise a clean failure, due
to an inability to unwind after the upper layer has decided to issue a
continuation.

There is also one bit where I literally had to give up, and put this
logic in:
> +            /*
> +             * Well... Somethings gone wrong with the two levels of chunking.
> +             * My condolences to whomever next has to debug this mess.
> +             */
> +            ASSERT_UNREACHABLE();
> +            domain_crash(current->domain);
> +            split = 0;
>              break;

Mitigations to these risks are thus:

* Explicit use of failsafe coding patterns, which break out of the loops
and pass -EINVAL back to the caller, or crash the domain when we can't
figure out how to pass an error back safely.

* This codepath gets used multiple times on every single VM boot, so it
will get ample testing from the in-tree caller's point of view as soon
as OSSTest starts running.

* The IPT series (which discovered this mess to start with) shows that,
in addition to the in-tree paths working, the >32 frame mappings appear
to work correctly.

* An in-tree unit test exercising this codepath in a way which
demonstrates this bug.  Further work is planned for this test.

* Some incredibly invasive Xen+XTF testing to prove the correctness of
the marshalling.  Not suitable for committing, but available for
inspection/query.  In particular, this covers aspects of the logic which
won't get any practical testing elsewhere.

Overall, if there are bugs, they're very likely to be spotted by OSSTest
in short order.



