
Re: [Xen-devel] [PATCH v3 4/6] Pause/Unpause the domain before/after assigning PI hooks



>>> On 02.09.16 at 03:46, <feng.wu@xxxxxxxxx> wrote:

> 
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Thursday, September 1, 2016 4:30 PM
>> To: Wu, Feng <feng.wu@xxxxxxxxx>
>> Cc: andrew.cooper3@xxxxxxxxxx; dario.faggioli@xxxxxxxxxx;
>> george.dunlap@xxxxxxxxxxxxx; Tian, Kevin <kevin.tian@xxxxxxxxx>; xen-
>> devel@xxxxxxxxxxxxx 
>> Subject: Re: [PATCH v3 4/6] Pause/Unpause the domain before/after assigning
>> PI hooks
>> 
>> >>> On 31.08.16 at 05:56, <feng.wu@xxxxxxxxx> wrote:
>> > --- a/xen/arch/x86/hvm/vmx/vmx.c
>> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> > @@ -219,8 +219,19 @@ void vmx_pi_hooks_assign(struct domain *d)
>> >
>> >      ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);
>> >
>> > +    /*
>> > +     * Pausing the domain can make sure the vCPU is not
>> > +     * running and hence calling the hooks simultaneously
>> > +     * when deassigning the PI hooks. This makes sure that
>> > +     * all the appropriate state of PI descriptor is actually
>> > +     * set up for all vCPus before leaving this function.
>> > +     */
>> > +    domain_pause(d);
>> > +
>> >      d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
>> >      d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
>> > +
>> > +    domain_unpause(d);
>> >  }
>> 
>> First of all I'm missing a word on whether the race mentioned in
>> the description and comment can actually happen. Device
>> (de)assignment should already be pretty much serialized (via
>> the domctl lock, and maybe also via the pcidevs one).
> 
> The purpose of this patch is to address the race condition in
> which the _vCPU_ is running while we are installing these hooks.
> Do you think this cannot happen? This patch is trying to fix the
> issue described at:
> http://www.gossamer-threads.com/lists/xen/devel/433229 
> Considering that the other two hooks were installed when the VM
> was created, there seemed to be no such race condition. However,
> according to the discussion about patches 1 and 2 of this series,
> we need to install the other two hooks here as well,

I don't think we've agreed that the creation-time installation of
those hooks is actually necessary. In fact your most recent
response to patch 1 makes me think you now agree we don't
need to do so. And hence, with that precondition no longer
holding, I don't think the conclusion holds either.

> then the race
> condition comes again, so we still need to handle it.
> 
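
Just so we're sure we're talking about the same thing - the
interleaving you're worried about is, I take it, something like the
below (a paraphrased sketch, using only the two assignments visible
in the hunk above):

    CPU doing the assignment             vCPU of the same domain
    ------------------------             -----------------------
    vmx_pi_hooks_assign(d)
      vcpu_block = vmx_vcpu_block;
                                         vCPU blocks; the freshly
                                         installed hook runs while
                                         pi_do_resume (and hence part
                                         of the PI state) isn't set
                                         up yet
      pi_do_resume = vmx_pi_do_resume;
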
>> 
>> And then - isn't this overkill? Wouldn't a simple spin lock, taken
>> here and in the deassign counterpart, do?
>> 
>> Or wait - is the comment perhaps wrongly talking about deassign?
> 
> Oh, yes, there is something wrong in the comment; this patch
> has nothing to do with the _deassign_ path. The comment should
> read as below:
> 
> +    /*
> +     * Pausing the domain makes sure no vCPU is running, and
> +     * hence none can be calling the hooks simultaneously, while
> +     * we are _assigning_ the PI hooks. This ensures that all the
> +     * appropriate state of the PI descriptor is actually set up
> +     * for all vCPUs before leaving this function.
> +     */
> 
> Sorry for that.
> 
>> 
>> If so, the change is still questionable, as the hooks get set before
>> the first device actually gets assigned to a guest (I remember
>> that I insisted on things getting done that way when those
>> original patches had been under review).
> 
> Yes, the hooks are installed before the first device gets assigned.
> Then could you please elaborate on what the question here is?

The question here is whether this patch (taking into consideration
comments on patches earlier in the series) is (a) needed and if so
(b) reasonable in how it achieves the goal. Pausing a domain
shouldn't really become a thing we routinely do.
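
And just to illustrate what I meant by a simple lock above - an
untested sketch, re-using only the lines visible in the hunk you
quote (the lock itself is invented here purely for illustration):

/* Illustrative only; name and placement of the lock are made up. */
static DEFINE_SPINLOCK(vmx_pi_hooks_lock);

void vmx_pi_hooks_assign(struct domain *d)
{
    ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);

    spin_lock(&vmx_pi_hooks_lock);

    d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
    d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;

    spin_unlock(&vmx_pi_hooks_lock);
}

/*
 * The deassign counterpart would take the same lock around clearing
 * the hooks, serializing assign against deassign without pausing the
 * whole domain.
 */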

Jan

