
Re: [Xen-devel] [RFC PATCH] dpci: Put the dpci back on the list if running on another CPU.



>>> On 02.02.15 at 18:44, <konrad.wilk@xxxxxxxxxx> wrote:
> On Mon, Feb 02, 2015 at 03:48:19PM +0000, Jan Beulich wrote:
>> >>> On 02.02.15 at 16:31, <konrad.wilk@xxxxxxxxxx> wrote:
>> > On Mon, Feb 02, 2015 at 03:19:33PM +0000, Jan Beulich wrote:
>> >> >>> On 02.02.15 at 15:29, <konrad.wilk@xxxxxxxxxx> wrote:
>> >> > --- a/xen/drivers/passthrough/io.c
>> >> > +++ b/xen/drivers/passthrough/io.c
>> >> > @@ -63,10 +63,37 @@ enum {
>> >> >  static void raise_softirq_for(struct hvm_pirq_dpci *pirq_dpci)
>> >> >  {
>> >> >      unsigned long flags;
>> >> > +    unsigned long old, new, val = pirq_dpci->state;
>> >> >  
>> >> > -    if ( test_and_set_bit(STATE_SCHED, &pirq_dpci->state) )
>> >> > -        return;
>> >> > +    /*
>> >> > +     * This cmpxchg is a clearer version of:
>> >> > +     * if ( test_and_set_bit(STATE_SCHED, &pirq_dpci->state) )
>> >> > +     *         return;
>> >> > +     * since it also checks for STATE_RUN conditions.
>> >> > +     */
>> >> > +    for ( ;; )
>> >> > +    {
>> >> > +        new = 1 << STATE_SCHED;
>> >> > +        if ( val )
>> >> > +            new |= val;
>> >> 
>> >> Why the if()?
>> > 
>> > To 'or' the variable new with '1 << STATE_RUN' in case 'val' changed from
>> > the first read ('val = pirq_dpci->state') to the moment when
>> > we do the cmpxchg.
>> 
>> But if "val" is zero, the | simply will do nothing.
> 
> Correct. Keep in mind that 'new' is set to '1 << STATE_SCHED' at every
> loop iteration - so it ends up as old = cmpxchg(.., 0, 1 << STATE_SCHED)
> (and old == 0, val == 0, so we end up breaking out of the loop).

Not sure what you're trying to explain to me here. The code you
have is equivalent to

+        new = (1 << STATE_SCHED) | val;

no matter what.
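
For concreteness, a minimal sketch of the whole loop with that
simplification folded in; the cmpxchg/retry tail is not quoted above, so
its shape here is an assumption rather than the exact RFC code:

static void raise_softirq_for(struct hvm_pirq_dpci *pirq_dpci)
{
    unsigned long old, new, val = pirq_dpci->state;

    for ( ;; )
    {
        if ( val & (1 << STATE_SCHED) )
            return;                     /* Already scheduled - nothing to do. */

        new = (1 << STATE_SCHED) | val; /* Preserves any STATE_RUN bit. */

        old = cmpxchg(&pirq_dpci->state, val, new);
        if ( old == val )
            break;                      /* We set STATE_SCHED. */

        val = old;                      /* State changed under us - retry. */
    }
    /* ... take the domain reference and raise the softirq as before ... */
}

Written out like that it is just an open-coded test_and_set_bit(), which
is the point: the extra if() buys nothing.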

>> Didn't the original discussion (and issue) revolve around scheduling
>> while STATE_RUN was set? Hence the intention to wait for the flag
> 
> Yes.
>> to clear - but preferably in an explicit rather than implicit (as your
>> current and previous patch do) manner.
> 
> If we do it explicitly we run the risk of a dead-lock. See below for a draft
> (not even compile tested) of what I think you mean.

That's no different from the code you proposed before, just that
the live-lock is better hidden there: By re-raising a softirq from a
softirq handler, you arrange for yourself to be called again right
away.
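
To make that implicit spin visible, a hypothetical sketch (not the
earlier patch verbatim) of the pattern being objected to:

/*
 * Hypothetical sketch of the re-raise half of the earlier approach
 * (re-adding the entry to the per-CPU list is elided).
 */
static void dpci_softirq_requeue(struct hvm_pirq_dpci *pirq_dpci)
{
    if ( test_bit(STATE_RUN, &pirq_dpci->state) )
    {
        /*
         * Still running on another CPU: re-raising from softirq context
         * gets the handler invoked again right away - a hidden busy loop.
         */
        raise_softirq(HVM_DPCI_SOFTIRQ);
        return;
    }
    /* ... normal dpci processing ... */
}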

> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -63,10 +63,35 @@ enum {
>  static void raise_softirq_for(struct hvm_pirq_dpci *pirq_dpci)
>  {
>      unsigned long flags;
> +    unsigned long old;
>  
> -    if ( test_and_set_bit(STATE_SCHED, &pirq_dpci->state) )
> -        return;
> -
> +    for ( ;; )
> +    {
> +        old = cmpxchg(&pirq_dpci->state, 0, 1 << STATE_SCHED);
> +        switch ( old )
> +        {
> +        case (1 << STATE_SCHED):
> +            /*
> +             * Whenever STATE_SCHED is set we MUST not schedule it.
> +             */
> +            return;
> +        case (1 << STATE_RUN) | (1 << STATE_SCHED):
> +        case (1 << STATE_RUN):
> +            /* Getting close to finish. Spin. */
> +            continue;
> +        }
> +        /*
> +         * If the 'state' is 0 we can schedule it.
> +         */
> +        if ( old == 0 )
> +            break;
> +    }
>      get_knownalive_domain(pirq_dpci->dom);
>  
>      local_irq_save(flags);

Yes, this looks better. And I again wonder whether STATE_*
couldn't simply become a tristate - dpci_softirq(), rather than setting
STATE_RUN and then clearing STATE_SCHED, could simply
cmpxchg(, STATE_SCHED, STATE_RUN), acting as necessary when
that operation fails.
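
Roughly, with assumed names and values (dpci_try_run() and the 0/1/2
encoding are illustrative, not an implementation):

/* 'state' would hold exactly one of these instead of a bit mask. */
enum {
    STATE_IDLE  = 0,
    STATE_SCHED = 1,
    STATE_RUN   = 2,
};

static bool_t dpci_try_run(struct hvm_pirq_dpci *pirq_dpci)
{
    /* Claim the entry: succeeds only if it is scheduled and not yet running. */
    if ( cmpxchg(&pirq_dpci->state, STATE_SCHED, STATE_RUN) != STATE_SCHED )
        return 0;   /* Transition failed - caller acts as necessary. */

    /* ... do the work, then move STATE_RUN back to STATE_IDLE ... */
    return 1;
}

The SCHED -> RUN hand-over is then a single atomic operation, and a
failed cmpxchg tells dpci_softirq() directly that the state changed
underneath it.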

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

