
[PATCH] xen/x86: fix PV trap handling on secondary processors


  • To: Juergen Gross <jgross@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 16 Sep 2021 17:04:02 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, lkml <linux-kernel@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 16 Sep 2021 15:04:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

The initial observation was that in PV mode under Xen, 32-bit user space
didn't work anymore. Attempts at system calls ended in #GP(0x402). All
of a sudden the vector 0x80 handler was not in place anymore. As it
turns out, up to 5.13 redundant initialization occurred: once from
cpu_initialize_context() (through its VCPUOP_initialise hypercall) and a
second time while each CPU was brought fully up. This second
initialization is now gone, uncovering that the first one was flawed:
unlike for the set_trap_table hypercall, a full virtual IDT needs to be
specified here; the "vector" fields of the individual entries are of no
interest. With many (kernel) IDT entries still empty at that point, the
syscall vector 0x80 ended up in slot 0x20 of the virtual IDT, thus
becoming the domain's handler for vector 0x20.
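
For illustration, here is a minimal stand-alone sketch (not kernel code;
the struct and names below are made up) of how compacting the table by
populated entries shifts where the int 0x80 handler ends up when the
consumer instead expects a full, vector-indexed virtual IDT:

/*
 * Purely illustrative; all names are hypothetical stand-ins.
 */
#include <stdio.h>

#define NR_VECTORS 256

struct fake_trap_info {		/* simplified stand-in for struct trap_info */
	unsigned char vector;
	unsigned long address;	/* 0 = entry not populated */
};

int main(void)
{
	unsigned long idt[NR_VECTORS] = { 0 };
	struct fake_trap_info traps[NR_VECTORS] = { { 0, 0 } };
	unsigned in, out = 0;

	/* Pretend only a few exception vectors plus int 0x80 are set up. */
	idt[0x00] = 0x1000;	/* #DE */
	idt[0x0e] = 0x2000;	/* #PF */
	idt[0x80] = 0x3000;	/* 32-bit syscall entry */

	/* Compacted form: fine for set_trap_table, which honours .vector. */
	for (in = 0; in < NR_VECTORS; in++) {
		if (!idt[in])
			continue;
		traps[out].vector = in;
		traps[out].address = idt[in];
		out++;
	}

	/*
	 * An interface taking a full virtual IDT indexes by array slot and
	 * ignores .vector, so the int 0x80 handler is found at slot 2 here,
	 * not at slot 0x80.
	 */
	printf("int 0x80 handler sits in slot %u of the compacted table\n",
	       out - 1);
	return 0;
}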

Since xen_copy_trap_info() has just this single purpose, simply adjust
that function. xen_convert_trap_info() cannot be used here; its use
would also have led to a buffer overrun if all (kernel) IDT entries
were populated, because that function appends a sentinel entry at the
end.
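
A similar stand-alone sketch (again with hypothetical names, not the
kernel's actual helper) of that sentinel problem, showing why the
destination buffer would need room for one extra entry:

#include <assert.h>

#define NR_VECTORS 256

struct fake_trap_info {		/* simplified stand-in for struct trap_info */
	unsigned char vector;
	unsigned long address;	/* 0 terminates the list */
};

static unsigned convert_with_sentinel(const unsigned long *idt, unsigned count,
				      struct fake_trap_info *out)
{
	unsigned in, n = 0;

	for (in = 0; in < count; in++) {
		if (!idt[in])
			continue;
		out[n].vector = in;
		out[n].address = idt[in];
		n++;
	}
	out[n].address = 0;	/* sentinel lands at index 'count' when every
				 * source entry was populated */
	return n;
}

int main(void)
{
	unsigned long idt[NR_VECTORS];
	struct fake_trap_info traps[NR_VECTORS + 1];	/* +1 for the sentinel */
	unsigned i;

	for (i = 0; i < NR_VECTORS; i++)
		idt[i] = 0x1000 + i * 0x10;	/* every vector populated */

	assert(convert_with_sentinel(idt, NR_VECTORS, traps) == NR_VECTORS);
	return 0;
}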

(I didn't bother trying to identify the commit which uncovered the issue
in 5.14; the commit named below is the one which actually introduced the
bad code.)

Fixes: f87e4cac4f4e ("xen: SMP guest support")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
To what extent it is correct to use the current CPU's IDT is unclear to
me. It looks at least like another latent trap.

--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -775,8 +775,15 @@ static void xen_convert_trap_info(const
 void xen_copy_trap_info(struct trap_info *traps)
 {
        const struct desc_ptr *desc = this_cpu_ptr(&idt_desc);
+       unsigned i, count = (desc->size + 1) / sizeof(gate_desc);
 
-       xen_convert_trap_info(desc, traps);
+       BUG_ON(count > 256);
+
+       for (i = 0; i < count; ++i) {
+               const gate_desc *entry = (gate_desc *)desc->address + i;
+
+               cvt_gate_to_trap(i, entry, &traps[i]);
+       }
 }
 
 /* Load a new IDT into Xen.  In principle this can be per-CPU, so we
