
Re: [PATCH] xen/arm: Ensure the vCPU context is seen before clearing the _VPF_down



On Tue, 20 Apr 2021, Julien Grall wrote:
> (+ Andrew and Jan)
> 
> Hi Stefano,
> 
> On 20/04/2021 20:38, Stefano Stabellini wrote:
> > On Fri, 16 Apr 2021, Julien Grall wrote:
> > > > I think your explanation is correct. However, don't we need an
> > > > smp_rmb() barrier after reading v->is_initialised in
> > > > xen/common/domctl.c:do_domctl? That would be the barrier that pairs
> > > > with the smp_wmb() with regard to v->is_initialised.
> > > 
> > > There is already an smp_mb() in sync_vcpu_execstate(), which is called
> > > from vcpu_pause() -> vcpu_sleep_sync().
> > 
> > But that's too late, isn't it? The v->is_initialised check is done before
> > the vcpu_pause() call. We might end up taking the wrong code path:
> > 
> > https://gitlab.com/xen-project/xen/-/blob/staging/xen/common/domctl.c#L587
> > https://gitlab.com/xen-project/xen/-/blob/staging/xen/common/domctl.c#L598
> 
> I am a bit confused about what you mean by "wrong path" here. There is no
> timing guarantee with a memory barrier. What the barrier does guarantee is
> that v->is_initialised will be read *before* anything after the barrier.
> 
> Are you worried that some variables in vcpu_pause() may be read before
> v->is_initialised?
> 
> > 
> > > I don't think we can ever remove the memory barrier in
> > > sync_vcpu_execstate() because the paused vCPU may have run (or been
> > > initialised) on a different pCPU. So I would like to rely on that
> > > barrier rather than adding an extra one (even though it is not a fast
> > > path).
> > > 
> > > I am thinking of adding a comment on top of vcpu_pause() to clarify
> > > that, after the call, the vCPU context will be observed without extra
> > > synchronization required.
> > 
> > Given that an if..goto is involved, even if both code paths lead to
> > correct results, I think it would be best to have the smp_rmb() barrier
> > right after the first v->is_initialised read, to make sure we take the
> > right path.
> 
> I don't understand what you are referring to by the "wrong" and "right"
> path. The processor will always execute/commit the path based on the value
> of v->is_initialised. What may happen is that the processor starts to
> speculate the wrong path and then backtracks if it discovers the value is
> not the expected one. But, AFAIK, smp_rmb() is not going to protect you
> against speculation... smp_rmb() is only going to enforce memory ordering
> between reads.
> 
> > Instead of having to make sure that even the "wrong" path
> > behaves correctly.
> Just for clarification, I think you meant writing the following code:
> 
> if ( !v->is_initialised )
>     goto getvcpucontext_out;
> 
> smp_rmb();

No, sorry. I'll try to be clearer; see below.


> ...
> 
> vcpu_pause();
> 
> AFAICT, there is nothing in the implementation of XEN_DOMCTL_getvcpucontext
> that justifies the extra barrier (assuming we consider vcpu_pause() as a
> full memory barrier).
> 
> From your e-mail, I also could not infer what your exact concern regarding
> the memory ordering is. If you have something in mind, would you mind
> providing a sketch showing what could go wrong?

Let me start with what I had in mind:

initialized = v->is_initialised;
smp_rmb(); /* pairs with the smp_wmb() in arch_set_info_guest */
if ( !initialized )
    goto getvcpucontext_out;


Without the smp_rmb() there is no guarantee that we see an up-to-date value
of v->is_initialised, right? This READ of v->is_initialised is supposed to
pair with the WRITE in arch_set_info_guest.
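
For reference, the writer side I have in mind looks roughly like this (a
simplified sketch from my reading of arch_set_info_guest on staging with
your patch applied, so the exact code may differ):

    /* arch_set_info_guest(), heavily simplified */
    ... copy the guest context into v->arch ...

    /*
     * Make sure the vCPU context is visible before we mark the vCPU as
     * initialised (and before clearing _VPF_down).
     */
    smp_wmb();

    v->is_initialised = 1;

    if ( ctxt->flags & VGCF_online )
        clear_bit(_VPF_down, &v->pause_flags);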

Relying on the barrier in vcpu_pause() is OK only if is_initialised was
already read as "1".
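
That is, this path is fine even without an explicit smp_rmb() (sketch):

    if ( v->is_initialised )  /* read as "1" */
    {
        /* full barrier via the smp_mb() in sync_vcpu_execstate() */
        vcpu_pause(v);

        /* any read of the vCPU context below is ordered after the pause */
        ...
    }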

I think it might be OK in this case, because probably nothing bad happens
if we read a stale value of v->is_initialised as "0", but it doesn't seem
to be a good idea. Theoretically, without the smp_rmb(), a very long time
could pass before we see v->is_initialised as "1".
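
Concretely, the interleaving I am worried about is something like this
(just a sketch, the timing is made up):

    CPU0 (arch_set_info_guest)        CPU1 (do_domctl)

    writes the vCPU context
    smp_wmb();
    v->is_initialised = 1;
                                      reads v->is_initialised as "0" (stale)
                                      goto getvcpucontext_out;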

Please let me know if I am missing something.
