
Ping: [PATCH v2] x86emul: de-duplicate scatters to the same linear address


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 28 Jun 2021 14:13:44 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 28 Jun 2021 12:13:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20.05.2021 15:34, Jan Beulich wrote:
> The SDM specifically allows for earlier writes to fully overlapping
> ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
> would crash it if varying data was written to the same address. Detect
> overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
> be quite a bit more difficult. To maintain proper faulting behavior,
> instead of dropping earlier write instances of fully overlapping slots
> altogether, write the data of the final of these slots multiple times.
> (We also can't pull ahead the [single] write of the data of the last of
> the slots, clearing all involved slots' op_mask bits together, as this
> would yield incorrect results if there were intervening partially
> overlapping ones.)
> 
> Note that due to cache slot use being linear address based, there's no
> similar issue with multiple writes to the same physical address (mapped
> through different linear addresses).
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

As indicated before, this is an issue which - afaict - would be a
security one if introspection were security-supported. As such I
find it highly irritating that this has now been pending for well
over half a year (counting from the submission of v1).

Jan

> ---
> v2: Maintain correct faulting behavior.
> 
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -10073,15 +10073,36 @@ x86_emulate(
>  
>          for ( i = 0; op_mask; ++i )
>          {
> -            long idx = b & 1 ? index.qw[i] : index.dw[i];
> +            long idx = (b & 1 ? index.qw[i]
> +                              : index.dw[i]) * (1 << state->sib_scale);
> +            unsigned long offs = truncate_ea(ea.mem.off + idx);
> +            unsigned int j, slot;
>  
>              if ( !(op_mask & (1 << i)) )
>                  continue;
>  
> -            rc = ops->write(ea.mem.seg,
> -                            truncate_ea(ea.mem.off +
> -                                        idx * (1 << state->sib_scale)),
> -                            (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
> +            /*
> +             * hvmemul_linear_mmio_access() will find a cache slot based on
> +             * linear address.  hvmemul_phys_mmio_access() will crash the
> +             * domain if observing varying data getting written to the same
> +             * cache slot.  Utilize that squashing earlier writes to fully
> +             * overlapping addresses is permitted by the spec.  We can't,
> +             * however, drop the writes altogether, to maintain correct
> +             * faulting behavior.  Instead write the data from the last of
> +             * the fully overlapping slots multiple times.
> +             */
> +            for ( j = (slot = i) + 1; j < n; ++j )
> +            {
> +                long idx2 = (b & 1 ? index.qw[j]
> +                                   : index.dw[j]) * (1 << state->sib_scale);
> +
> +                if ( (op_mask & (1 << j)) &&
> +                     truncate_ea(ea.mem.off + idx2) == offs )
> +                    slot = j;
> +            }
> +
> +            rc = ops->write(ea.mem.seg, offs,
> +                            (void *)mmvalp + slot * op_bytes, op_bytes, ctxt);
>              if ( rc != X86EMUL_OKAY )
>              {
>                  /* See comment in gather emulation. */
> 
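
For anyone reading along, below is a minimal standalone sketch of the
de-duplication idea - not the patch itself, and using made-up names
(emit_write(), fixed sample offsets) purely for illustration: for each
enabled element, the last enabled element targeting the same offset
supplies the data, but one write per element is still issued so that
per-element faulting order is preserved.

    /* Standalone illustration only; names and data are invented. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define N 4

    static void emit_write(unsigned long offs, uint32_t data)
    {
        /* Stand-in for the emulator's write hook; just log the access. */
        printf("write 0x%lx <- 0x%08" PRIx32 "\n", offs, data);
    }

    int main(void)
    {
        /* Elements 0 and 2 fully overlap. */
        unsigned long offs[N] = { 0x10, 0x20, 0x10, 0x30 };
        uint32_t data[N]      = { 0xaaaaaaaa, 0xbbbbbbbb,
                                  0xcccccccc, 0xdddddddd };
        unsigned int op_mask  = 0xf; /* all elements enabled */
        unsigned int i, j, slot;

        for ( i = 0; i < N; ++i )
        {
            if ( !(op_mask & (1u << i)) )
                continue;

            /* Find the last enabled element writing to the same offset. */
            for ( j = (slot = i) + 1; j < N; ++j )
                if ( (op_mask & (1u << j)) && offs[j] == offs[i] )
                    slot = j;

            /* Replay the final element's data for each overlapping one. */
            emit_write(offs[i], data[slot]);
            op_mask &= ~(1u << i);
        }

        return 0;
    }

With the sample values above this issues the 0xcccccccc data for both
writes to offset 0x10, matching the behavior described in the patch.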