
Re: [Xen-devel] [PATCH] x86: Meltdown band-aid against malicious 64-bit PV guests

On Fri, Jan 12, 2018 at 10:19 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> This is a very simplistic change limiting the amount of memory a running
> 64-bit PV guest has mapped (and hence available for attacking): Only the
> mappings of stack, IDT, and TSS are being cloned from the direct map
> into per-CPU page tables. Guest controlled parts of the page tables are
> being copied into those per-CPU page tables upon entry into the guest.
> Cross-vCPU synchronization of top level page table entry changes is
> being effected by forcing other active vCPU-s of the guest into the
> hypervisor.
>
> The change to context_switch() isn't strictly necessary, but there's no
> reason to keep switching page tables once a PV guest is being scheduled
> out.
>
> There is certainly much room for improvement, especially of performance,
> here - first and foremost suppressing all the negative effects on AMD
> systems. But in the interest of backportability (including to really old
> hypervisors, which may not even have alternative patching) any such is
> being left out here.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
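
For anyone trying to picture the mechanism, here is a minimal sketch of
the "copy guest-controlled slots into a per-CPU root table on entry"
idea described above.  The names (sync_guest_root, GUEST_SLOTS_*) are
hypothetical, not taken from the patch, and the real change also has to
handle the Xen-private slots and the cross-vCPU forced re-entry; this
only shows the core copy step:

    #include <stdint.h>

    #define L4_ENTRIES        512
    #define GUEST_SLOTS_FIRST 0
    #define GUEST_SLOTS_LAST  255   /* lower half: guest-controlled */

    typedef uint64_t l4_pgentry_t;

    /* Run on every entry into the guest: refresh the guest-controlled
     * slots of this CPU's private root page table.  The remaining
     * (hypervisor) slots were populated once at boot and map only the
     * per-CPU stack, IDT, and TSS, so that's all a running guest can
     * see of Xen's address space. */
    static void sync_guest_root(l4_pgentry_t *percpu_l4,
                                const l4_pgentry_t *guest_l4)
    {
        unsigned int i;

        for ( i = GUEST_SLOTS_FIRST; i <= GUEST_SLOTS_LAST; i++ )
            percpu_l4[i] = guest_l4[i];
    }

The cross-vCPU part then falls out naturally: when the guest changes a
top-level entry on one vCPU, the other active vCPUs are forced into the
hypervisor, so each re-runs this copy on its next entry and picks up
the new entry.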

I did some quick tests of a Xen hypervisor build, comparing the
different options: PVH guest, PV guest (unpatched), PV guest
(patched), and PV under Vixen (in HVM mode).  Same guest kernel (Linux
4.14), CentOS 6 host, guest with 2 vcpus and 512MiB of RAM.  A build
like this is fork- and syscall-heavy, so it should be close to a worst
case for the entry/exit overheads these mitigations add.

Quick results (build times, lower is better):
* PVH: 52s
* PV unmodified: 68s
* PV under Vixen: 90s
* PV with this patch: 93s
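
(Working out the overheads relative to unpatched PV from the numbers
above: 93/68 is roughly a 37% slowdown for this patch, and 90/68
roughly 32% for Vixen, i.e. within a few percent of each other.)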

So at least in this particular case, the performance of this patch
is on par with the Vixen "pvshim" approach.  (Haven't tried with

