
Re: [Xen-devel] [PATCH] x86/sm{e, a}p: do not enable SMEP/SMAP in PV shim by default on AMD



On Mon, Jan 20, 2020 at 03:38:02PM +0100, Jan Beulich wrote:
> On 20.01.2020 15:07, Roger Pau Monné  wrote:
> > On Thu, Jan 16, 2020 at 04:00:03PM +0000, Igor Druzhinin wrote:
> >> Due to AMD and Hygon being unable to selectively trap CR4 bit modifications,
> >> running a 32-bit PV guest inside the PV shim comes with a significant
> >> performance hit. Moreover, for SMEP in particular, every time CR4.SMEP
> >> changes on a context switch to/from a 32-bit PV guest, it gets trapped by
> >> L0 Xen, which then tries to perform a global TLB invalidation for the PV
> >> shim domain. This usually results in an eventual hang of a PV shim with at
> >> least several vCPUs.
> >>
> >> Since the overall security risk is generally lower for shim Xen, it being
> >> there more as a defense-in-depth mechanism, choose to disable SMEP/SMAP in
> >> it by default on AMD and Hygon unless the user chooses otherwise.
> >>
> >> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
> >> ---
> >> I'm a little bit on the fence with this one. We're having the same issue
> >> with general nested virt, but I'm not inclined to trade security for a user
> >> in the general case. Disabling it by default for the PV shim only seems
> >> rather innocuous due to the use-case restriction. I'd like to hear more
> >> opinions.
> >> ---
> >>  docs/misc/xen-command-line.pandoc | 10 ++++++++--
> >>  xen/arch/x86/setup.c              | 20 ++++++++++++++------
> >>  2 files changed, 22 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> >> index 981a5e2..05b2dde 100644
> >> --- a/docs/misc/xen-command-line.pandoc
> >> +++ b/docs/misc/xen-command-line.pandoc
> >> @@ -1936,19 +1936,25 @@ is 1MB.
> >>  ### smap (x86)
> >>  > `= <boolean> | hvm`
> >>  
> >> -> Default: `true`
> >> +> Default: `true` unless running in pv-shim mode on AMD or Hygon hardware
> >>  
> >>  Flag to enable Supervisor Mode Access Prevention
> >>  Use `smap=hvm` to allow SMAP use by HVM guests only.
> >>  
> >> +In PV shim mode on AMD or Hygon hardware, due to significant performance impact
> >> +in some cases and generally lower security risk, the option defaults to false.
> >> +
> >>  ### smep (x86)
> >>  > `= <boolean> | hvm`
> >>  
> >> -> Default: `true`
> >> +> Default: `true` unless running in pv-shim mode on AMD or Hygon hardware
> >>  
> >>  Flag to enable Supervisor Mode Execution Protection
> >>  Use `smep=hvm` to allow SMEP use by HVM guests only.
> >>  
> >> +In PV shim mode on AMD or Hygon hardware, due to significant performance impact
> >> +in some cases and generally lower security risk, the option defaults to false.
> >> +
> >>  ### smt (x86)
> >>  > `= <boolean>`
> >>  
> >> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> >> index 5bdc229..8432b77 100644
> >> --- a/xen/arch/x86/setup.c
> >> +++ b/xen/arch/x86/setup.c
> >> @@ -105,9 +105,9 @@ struct cpuinfo_x86 __read_mostly boot_cpu_data = { 0, 0, 0, 0, -1 };
> >>  
> >>  unsigned long __read_mostly mmu_cr4_features = XEN_MINIMAL_CR4;
> >>  
> >> -/* smep: Enable/disable Supervisor Mode Execution Protection (default on). */
> >> -#define SMEP_HVM_ONLY (-1)
> >> -static s8 __initdata opt_smep = 1;
> >> +/* smep: Enable/disable Supervisor Mode Execution Protection */
> >> +#define SMEP_HVM_ONLY (-2)
> >> +static s8 __initdata opt_smep = -1;
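[Not part of the quoted patch: a minimal illustrative sketch of how such a
"-1 = pick a default" sentinel could be resolved at boot, once the CPU vendor
is known. The function name below is made up for illustration; pv_shim,
boot_cpu_data.x86_vendor and setup_clear_cpu_cap() are existing Xen symbols,
but the logic actually committed in setup.c may be shaped differently.]

/* Sketch only: resolve the "-1" defaults for opt_smep/opt_smap at boot. */
static void __init resolve_smep_smap_defaults(void)
{
    /* Treat AMD and Hygon alike, matching the commit message above. */
    bool amd_like = boot_cpu_data.x86_vendor &
                    (X86_VENDOR_AMD | X86_VENDOR_HYGON);

    /* Default off only when running as a PV shim on AMD/Hygon hardware. */
    if ( opt_smep == -1 )
        opt_smep = !(pv_shim && amd_like);
    if ( opt_smap == -1 )
        opt_smap = !(pv_shim && amd_like);

    /* An explicit or computed "off" hides the feature from further use. */
    if ( !opt_smep )
        setup_clear_cpu_cap(X86_FEATURE_SMEP);
    if ( !opt_smap )
        setup_clear_cpu_cap(X86_FEATURE_SMAP);
}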
> > 
> > Could you change the type to int8_t instead of s8? (here and below,
> > can be done on commit with the changes requested by Jan).
> 
> Too late, sorry, this was committed before the weekend already.
> (I guess I should have noticed this myself, though.)

Oh, sorry, didn't realize this was already committed, never mind then.

Roger.
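
(For completeness: as the documentation hunk above notes, the changed default
only applies when no option is given; either behaviour can still be requested
explicitly on the shim's Xen command line, for example:

    smep=1 smap=1    # keep both protections enabled, overriding the AMD/Hygon shim default
    smep=0 smap=0    # disable both explicitly, regardless of hardware
)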
