
RE: [Xen-devel] [Patch] Enable SMEP CPU feature support for XEN itself

  • To: Keir Fraser <keir@xxxxxxx>, "Yang, Wei Y" <wei.y.yang@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Li, Xin" <xin.li@xxxxxxxxx>
  • Date: Thu, 2 Jun 2011 00:15:14 +0800
  • Accept-language: zh-CN, en-US
  • Cc:
  • Delivery-date: Wed, 01 Jun 2011 09:20:42 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcwgVcdx+MEnPLpEQiCW7V8mG/kTcwAB1C2gAATM9UkAATxcMA==
  • Thread-topic: [Xen-devel] [Patch] Enable SMEP CPU feature support for XEN itself

> > This patch enables SMEP in Xen to protect Xen hypervisor from executing pv
> > guest code,
> Well not really. In the case that *Xen* execution triggers SMEP, you should
> crash.

You don't expect Xen itself to trigger SMEP? I somewhat agree, but if there is
any NULL-pointer dereference in Xen, an evil pv guest could easily gain control of the system.

> > and kills a pv guest triggering SMEP fault.
> Should only occur when the guest kernel triggers the SMEP.

Given the size of the kernel code base, it's much easier for malicious applications to
exploit security holes in the kernel.  But unluckily SMEP doesn't apply to ring 3, where
the x86_64 pv kernel runs.  It's wiser to use HVM :)

> Basically you need to pull your check out of spurious_page_fault() and into
> the two callers, because their responses should differ (one crashes the
> guest, the other crashes the hypervisor).
> Please define an enumeration for the return codes from spurious_pf, rather
> than using magic numbers.

Will do.
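As a rough sketch of the suggested refactoring: spurious_page_fault() returns an
enumerated result, and the two callers react differently to an SMEP fault (crash
the pv guest vs. crash the hypervisor). The enum values and function names below
are purely illustrative, not Xen's actual identifiers.

```c
/* Hypothetical enumeration for spurious_page_fault() return codes,
 * replacing the magic numbers; names are illustrative only. */
enum spf_result {
    SPF_NOT_SPURIOUS = 0,  /* a genuine fault; caller must act on it   */
    SPF_SPURIOUS,          /* benign; simply retry the faulting access */
    SPF_SMEP_FAULT         /* SMEP violation; the caller decides fate  */
};

/* Caller 1: fault raised while a pv guest was running. */
static void guest_fault_path(enum spf_result r)
{
    if (r == SPF_SMEP_FAULT)
        /* domain_crash(current->domain) -- kill the offending guest */ ;
}

/* Caller 2: fault raised in Xen context. */
static void xen_fault_path(enum spf_result r)
{
    if (r == SPF_SMEP_FAULT)
        /* BUG()/panic -- Xen itself tried to execute guest pages */ ;
}
```

The point of the split is that the same SMEP condition is fatal to a different
party depending on which context took the fault.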

Xen-devel mailing list


