
Re: [Xen-devel] [PATCH v8 15/17] x86/ctxt: Issue a speculation barrier between vcpu contexts


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxx>
  • From: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
  • Date: Mon, 15 Jan 2018 14:23:19 +0100
  • Delivery-date: Mon, 15 Jan 2018 13:23:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, 2018-01-15 at 13:02 +0000, Andrew Cooper wrote:
> On 15/01/18 12:54, David Woodhouse wrote:
> > 
> > On Fri, 2018-01-12 at 18:01 +0000, Andrew Cooper wrote:
> > > 
> > > @@ -1736,6 +1736,9 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
> > >          }
> > >  
> > >          ctxt_switch_levelling(next);
> > > +
> > > +        if ( opt_ibpb )
> > > +            wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);
> > >      }
> > > 
> > If you're doing that without an 'else lfence', don't you need to include
> > a comment with your proof that it's safe to do so, and that the CPU can't
> > speculate past a taken conditional branch all the way to a problematic
> > instruction?
>
> What would that gain?  A malicious guest could speculate around it, but
> speculation will catch up (at the very latest) when we return to guest,
> which is a serialising event.

There's your proof. I'd just be happier with a blanket policy of
*including* that proof in all cases where we do this kind of runtime
conditional branch around setting IBRS or IBPB. Because if we're in the
habit of doing the 'if (foo) wrmsrl()' without it, we *might* miss a
case where it's not actually OK.
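
Something along these lines, as a sketch against the hunk quoted above
(the comment wording is mine, not part of the patch):

        ctxt_switch_levelling(next);

        /*
         * Squash branch-predictor state learnt by the outgoing vcpu.
         * No 'else lfence' is needed: a guest can at worst speculate
         * around this branch, and any such speculation is resolved, at
         * the very latest, by the serialising return to guest before
         * untrusted code runs again.
         */
        if ( opt_ibpb )
            wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);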

(Aside: Is VMLAUNCH actually architecturally guaranteed to be
serialising? That doesn't seem consistent with a long-term goal of
designing hardware to make VMs go faster. Or have we merely extracted a
promise from Intel that *current* hardware will stop speculative
execution when it hits a VMLAUNCH?)
 
> > 
> > Also... if you're doing that in context_switch() does it do the right
> > thing with idle? If a CPU switches to the idle domain and then back
> > again to the same vCPU, does that do the IBPB twice?
>
> Context switches to idle will skip the IBPB because it isn't needed, but
> any switch to non-idle needs it.  In your example, we should execute just
> a single IBPB.

In my example I think we should not execute IBPB at all. We come from a
given VMCS, sleep for a while, and go back to it. No need for any
flushing whatsoever.
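
Purely to illustrate what I mean, the barrier could be keyed off the
last non-idle vcpu to have run on the pCPU rather than off prev/next
directly (the per-cpu variable and helper names here are my invention,
not something in the patch):

    /* Hypothetical: remember which guest vcpu last ran on this pCPU. */
    static DEFINE_PER_CPU(struct vcpu *, last_guest_vcpu);

    static void maybe_ibpb(struct vcpu *next)
    {
        /* Idle has no guest-controllable predictor state to protect. */
        if ( is_idle_vcpu(next) )
            return;

        /* vcpu -> idle -> same vcpu: nothing new needs flushing. */
        if ( opt_ibpb && this_cpu(last_guest_vcpu) != next )
            wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);

        this_cpu(last_guest_vcpu) = next;
    }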

> > 
> > For vmx we only really need IBPB when we do VMPTRLD, right?
>
> That is part of IBRS_ATT, is it not?

It doesn't go away with IBRS_ALL (as it's now called), but it's the
same for the existing IBRS *and* retpoline. Doing it on VMPTRLD is what
Linux is doing.

(cf. https://lkml.org/lkml/2018/1/13/40 and
     https://lkml.org/lkml/2018/1/15/146 and note the AMD SVM caveat.)
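
For reference, the shape of the KVM side is roughly this (paraphrased
from memory of those patches rather than quoted verbatim, so the exact
helper used for the barrier may differ):

    /* In vmx_vcpu_load(), when a different VMCS gets loaded on this pCPU: */
    if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
        per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
        vmcs_load(vmx->loaded_vmcs->vmcs);

        /*
         * Predictor state learnt while the previous VMCS was resident
         * must not survive into this guest.
         */
        native_wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
    }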

