
[Xen-devel] Re: [PATCH v2] Enable SMEP CPU feature support for XEN hypervisor


  • To: "Li, Xin" <xin.li@xxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Fri, 03 Jun 2011 20:22:20 +0100
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 03 Jun 2011 12:23:19 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acwh9jTkNLQTp7L0Sd25RhWDyT37cgAHL4yJAACWZxAAA5B/eQ==
  • Thread-topic: [PATCH v2] Enable SMEP CPU feature support for XEN hypervisor

Hi Xin,

New patch attached and comments in-line.

On 03/06/2011 19:49, "Li, Xin" <xin.li@xxxxxxxxx> wrote:

>> 1. The initialisation in cpu/common.c is misguided. Facilities like
>> set_in_cr4(), setup_clear_cpu_cap(), and cpu_has_xxx are designed to allow
>> the feature initialisation to happen close in code to where the feature is
>> used. Hence I have moved the initialisation into traps.c:trap_init().
> 
> That's the right way to go.
> 
> But trap_init() is called before Xen does its leaf 7.0 detection, so we also
> need to add leaf 7.0 detection to early_cpu_detect() to get
> boot_cpu_data.x86_capability[7] initialized before trap_init().

Oooo good point. Fixed in the attached patch by moving the SMEP setup into
setup.c, explicitly and immediately after identify_cpu(). I was actually in
two minds about whether to fix this by extending early_cpu_detect(). Overall I
decided I don't want to grow e_c_d() unless it's absolutely necessary. If the
CPU setup stuff in setup.c grows too big and ugly I'd rather refactor it
another way.
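
(For concreteness, here is a minimal sketch of the sequence being described:
leaf 7.0 detection feeding x86_capability[7], then enabling CR4.SMEP via
set_in_cr4() straight after identify_cpu(). The constants and the word-7 slot
are assumptions for illustration, not what the attached patch does.)

/* Sketch only: detect CPUID.(EAX=7,ECX=0):EBX[7] (SMEP) and enable it via
 * set_in_cr4(), called right after identify_cpu() as described above.
 * Constant names and the word-7 slot are assumptions, not the patch. */
#define X86_CR4_SMEP        (1u << 20)
#define CPUID7_0_EBX_SMEP   (1u << 7)

static void __init probe_and_enable_smep(struct cpuinfo_x86 *c)
{
    unsigned int eax, ebx, ecx, edx;

    if ( c->cpuid_level < 7 )
        return;

    /* The structured extended feature leaf needs the sub-leaf form. */
    cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
    c->x86_capability[7] = ebx;            /* word 7 <- leaf 7.0 EBX */

    if ( ebx & CPUID7_0_EBX_SMEP )
        set_in_cr4(X86_CR4_SMEP);          /* also recorded for later CPUs */
}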

>> 3. I have pushed interpretation of the pf_type enumeration out to the
>> callers of spurious_page_fault(). This is because a SMEP fault while Xen is
>> executing should *crash Xen*. So that's what I do. Further, when it is a
> 
> I'm still wondering whether it is overkill to kill Xen. Doesn't that mean an
> evil guest can crash Xen?

An evil guest has to penetrate Xen before it can crash it in this way. If
Xen has been subverted, and is lucky enough to notice, what should it do?
The only sane answer is to shoot itself in the head. This kind of issue
would require immediate developer attention to fix whatever Xen bug had been
exploited to trigger SMEP.
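
(To pin down what "crash Xen" means at the call sites, a rough sketch of the
caller-side policy follows; the enum values and helper names are hypothetical,
not lifted from the patch.)

/* Hypothetical caller-side handling of spurious_page_fault()'s verdict:
 * a genuinely spurious fault is ignored, but a SMEP hit taken while Xen
 * itself was executing is treated as fatal rather than recovered from. */
enum pf_type { pf_real, pf_spurious, pf_smep };    /* names assumed */

static void do_xen_page_fault(unsigned long addr, struct cpu_user_regs *regs)
{
    switch ( spurious_page_fault(addr, regs) )
    {
    case pf_smep:
        /* Xen fetched code from a user-mode page: the hypervisor has been
         * subverted or is badly broken, so fail-stop immediately. */
        panic("SMEP violation in hypervisor context at %lx\n", addr);
        break;                  /* unreachable: panic() does not return */
    case pf_spurious:
        return;                 /* benign (e.g. stale TLB entry): resume */
    case pf_real:
    default:
        break;                  /* fall through to the normal #PF path */
    }
    /* ... normal fault handling / guest injection continues here ... */
}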

> A 32-bit PV guest should be able to make use of SMEP.  When the fault is from
> Xen, we crash Xen.  When it is from the guest kernel executing user code, we
> can inject it into the guest and let it kill the current process.  Of course,
> in such cases the guest must be able to do its own SMEP handling.

Haha, give over on this idea that unexplainable behaviour should make you
only crash the process/guest. If your behaviour is unexplainable, and you
have pretensions of security, you fail-stop.

> We can't handle it consistently for 64-bit and 32-bit guests.

Well yeah, but that ignores my actual question, which was...
"""I wonder whether SMEP should be enabled only for guests (even PV guests)
which detect it via CPUID and proactively enable it via their virtualised
CR4? I mean, it is off in real hardware by default for a reason: backward
compatibility. Furthermore, we only detect spurious page faults for buggy
old PV guests; the rest will get the SMEP fault punted up to them, which
seems odd if they don't understand SMEP."""

I mean, I know we may as well just hide the feature from PV 64b guests
totally. That's obvious. Let's stop talking about PV 64b guests already! The
question is: what to do about PV 32b guests?
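
(If we did go the opt-in route for PV 32b guests, the shape would be roughly
the following. The ctrlreg[] location and helper names are assumptions for
illustration only, not anything in the attached patch.)

/* Rough sketch of "SMEP as guest opt-in" (names are assumptions, not the
 * patch): forward a SMEP-style fault to a PV guest only if that guest has
 * itself turned SMEP on in its virtualised CR4. */
#define X86_CR4_SMEP (1u << 20)

static int guest_opted_into_smep(const struct vcpu *v)
{
    /* assumed location of a PV vcpu's shadowed CR4 value */
    return !!(v->arch.guest_context.ctrlreg[4] & X86_CR4_SMEP);
}

static void handle_guest_smep_fault(unsigned long addr,
                                    struct cpu_user_regs *regs)
{
    if ( guest_opted_into_smep(current) )
    {
        /* The guest understands SMEP: punt the fault up and let its own
         * handler kill the offending process. */
        propagate_page_fault(addr, regs->error_code);
        return;
    }

    /* Guest never asked for SMEP: keep the pre-SMEP behaviour here
     * (i.e. whatever the legacy spurious-fault path would have done). */
}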

 -- Keir

> Thanks!
> -Xin

Attachment: 00-xen-smep
Description: Binary data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

