
Re: [Xen-devel] further post-Meltdown-band-aid performance thoughts



On 01/19/2018 02:37 PM, Jan Beulich wrote:
> All,
> 
> along the lines of the relatively easy first step submitted yesterday,
> I've had some further thoughts in that direction. A fundamental
> thing for this is of course to first of all establish what kind of
> information we consider safe to expose (in the long run) to guests.
> 
> The current state of things is deemed incomplete, yet despite my
> earlier inquiries I haven't heard back any concrete example of
> information, exposure of which does any harm. While it seems to be
> generally believed that large parts of the Xen image should not be
> exposed, it's not all that clear to me why that would be. I could
> agree with better hiding writable data parts of it, just to be on the
> safe side (I'm unaware of statically allocated data though which
> might carry any secrets), but what would be the point of hiding
> code and r/o data? Anyone wanting to know their contents can
> simply obtain the Xen binary for their platform.

This dovetails with a discussion I think we should have about dealing
with SP1, and also about future-proofing against further speculative
execution attacks.

Right now there are "windows" through which people can look using SP1-3,
which we are trying to close.  SP3's "window" is the guest -> hypervisor
virtual address space (hence XPTI, separating the address spaces).
SP2's "window" is branch-target-poisoned gadgets (hence using retpoline
and various techniques to prevent branch target poisoning).  SP1's
"window" is array boundary privilege checks, hence Linux's attempts to
prevent speculation over privilege checks by using lfence or other
tricks[1].
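
To make that masking trick concrete, here's a minimal sketch in C,
loosely modelled on Linux's array_index_nospec() (the names and the
standalone form are mine, not the actual Linux or Xen API).  Even if
the CPU speculates past the bounds check, the index it subsequently
uses is clamped to zero, so nothing out of bounds is ever touched
speculatively:

#include <stddef.h>
#include <stdint.h>

/*
 * All-ones if index < size, all-zeroes otherwise, computed without a
 * conditional branch the CPU could mispredict.  Assumes index and
 * size are below 2^63, and GCC/clang's arithmetic right shift of
 * negative values.
 */
static inline size_t nospec_mask(size_t index, size_t size)
{
    return (size_t)((intptr_t)(index - size) >> (sizeof(size_t) * 8 - 1));
}

static uint8_t table[256];

uint8_t read_checked(size_t index)
{
    if ( index >= sizeof(table) )
        return 0;
    /*
     * The branch above can be speculated past; the branchless clamp
     * below cannot.  (The heavier alternative is an lfence here, to
     * stop speculation outright.)
     */
    index &= nospec_mask(index, sizeof(table));
    return table[index];
}

The attraction of the masking form over a bare lfence is that it only
adds a data dependency rather than a fully serializing barrier.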

But there will surely be more attacks like this (in fact, there may
already be some in the works[2]).

So what if instead of trying to close the "windows", we made it so that
there was nothing through the windows to see?  If, no matter what the
hypervisor speculatively executed, nothing sensitive was visible except
what a vcpu was already allowed to see, then these attacks would gain an
attacker nothing.

At a first cut, there are two kinds of data inside the hypervisor which
might be interesting to an attacker:

1) Guest data: private encryption keys, secret data, &c
 1a. Direct copies of guest data
 1b. Data from which an attacker can infer guest data

2) Hypervisor data that makes it easier to perform other exploits.  For
instance, the layout of memory, the exact address of certain dynamic
data structures,  &c.

Personally I don't think we should worry too much about #2.  The main
things we should be focusing on are 1a and 1b.

Another potential consideration is information about what monitoring
tools might be deployed against the attacker; an attacker might act
differently if she knew that VMI was being used.  But I doubt that the
presence of VMI can really be kept secret very well; if I had a choice
between obfuscating VMI and recovering performance lost to SP*
mitigations, I think I'd go for performance.

> The reason I bring this up is because further steps in the direction
> of recovering performance would likely require as a prerequisite
> exposure of further data, first and foremost struct vcpu and
> struct domain for the currently active vCPU. Once again I'm not
> aware of any secrets living there. Another item might need to be
> the local CPU's per-CPU data.

A quick glance through struct vcpu doesn't turn up anything obvious.  If
we were worried about RowHammer, though, the MFNs stored in various
fields might be worth hiding.
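
To illustrate what that sort of exposure might look like mechanically
(purely a sketch; the helpers below are invented for this mail, not
Xen's actual XPTI interface): at context switch the restricted,
guest-entered page tables would gain a mapping of the incoming vCPU's
state and lose the outgoing one's, so that even unconstrained
speculation can only read what the current vCPU may read anyway:

#include <stddef.h>

struct vcpu {
    unsigned int vcpu_id;
    /* ... the real struct vcpu has many more fields ... */
};

/*
 * Hypothetical helpers, stubbed out so the sketch is self-contained;
 * real code would add or remove read-only mappings in the restricted
 * page tables.
 */
static void xpti_expose(const void *va, size_t bytes)
{
    (void)va; (void)bytes;
}

static void xpti_conceal(const void *va, size_t bytes)
{
    (void)va; (void)bytes;
}

void speculation_safe_switch(struct vcpu *prev, struct vcpu *next)
{
    /* Stop exposing the outgoing vCPU's state to speculation... */
    xpti_conceal(prev, sizeof(*prev));
    /*
     * ...and expose only the incoming vCPU's own state, which on the
     * premise above contains no other guest's secrets.
     */
    xpti_expose(next, sizeof(*next));
}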

Maybe it would be better to start "whitelisting" state that was believed
to be safe, rather than blacklisting state known to be dangerous.
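
One way such a whitelist might be expressed (again just a sketch; the
macro and section name are invented here) is to make data opt in via a
dedicated linker section, which would be the only data region left
mapped in the restricted address space:

/* Audited data opts in; everything else is invisible by default. */
#define __specsafe __attribute__((section(".data.specsafe")))

/* Reveals nothing beyond what the guest already knows. */
__specsafe unsigned int timer_granularity_ms;

/* Unannotated, so unmapped, and hence invisible to speculation. */
static unsigned char key_schedule[64];

The nice property compared to a blacklist is the failure mode:
forgetting an annotation costs performance, not confidentiality.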

On the whole I agree with Jan's approach, to start exposing, for
performance reasons, bits of state we believe to be safe, and then deal
with attacks as they come up.

 -George

[1] https://lwn.net/SubscriberLink/744287/02dd9bc503409ca3/
[2] skyfallattack.com


