
Re: [Xen-devel] [PATCH for-4.13] x86/vmx: always sync PIR to IRR before vmentry



On Mon, Nov 18, 2019 at 01:26:46PM +0100, Jan Beulich wrote:
> On 18.11.2019 11:16, Roger Pau Monne wrote:
> > @@ -1954,48 +1952,28 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
> >       * 2. The target vCPU is the current vCPU and we're in non-interrupt
> >       * context.
> >       */
> > -    if ( running && (in_irq() || (v != current)) )
> > -    {
> > +    if ( vcpu_runnable(v) && v != current )
> 
> I'm afraid you need to be more careful with the running vs runnable
> distinction here. The comment above here becomes stale with the
> change (also wrt the removal of in_irq(), which I'm at least uneasy
> about), and the new commentary below also largely says/assumes
> "running", not "runnable".

I missed fixing that comment, will take care of it in the next version.
Note also that the comment is quite pointless: it only states what the
code below is supposed to do, but doesn't give any reasoning as to why
in_irq is relevant here.

TBH I'm not sure of the point of the in_irq check, I don't think it's
relevant for the code here. It only matters whether the vCPU is
running, and whether it's running on this pCPU. Whether
__vmx_deliver_posted_interrupt is called from irq context is
irrelevant AFAICT.

> 
> In general I think "runnable" is the more appropriate state to
> check for, as "running" is even more likely to change behind our
> backs. But of course there are caveats: When observing "running",
> v->processor is sufficiently certain to hold the pCPU the vCPU is
> running on or has been running on last. For "runnable" that's
> less helpful, because by the time you look at v->processor it may
> already have changed to whichever the vCPU is (about to be)
> running on.

I think this is fine. In fact it would also be fine from a safety PoV
to always send a posted interrupt IPI to the pCPU in v->processor: if
the vCPU is running PIR will be synced into IRR, otherwise a dummy
softirq will be recorded and PIR will be synced into IRR before
entering non-root mode.

Using v->is_running might be better in order to prevent sending
pointless IPIs around, since we only want to send the IPI when the
vCPU is executing on a pCPU. Will fix in v2.

> >          /*
> > -         * Note: Only two cases will reach here:
> > -         * 1. The target vCPU is running on other pCPU.
> > -         * 2. The target vCPU is the current vCPU.
> > +         * If the vCPU is running on another pCPU send an IPI to the pCPU. When
> > +         * the IPI arrives, the target vCPU may be running in non-root mode,
> > +         * running in root mode, runnable or blocked. If the target vCPU is
> > +         * running in non-root mode, the hardware will sync PIR to vIRR for
> > +         * 'posted_intr_vector' is a special vector handled directly by the
> > +         * hardware.
> >           *
> > -         * Note2: Don't worry the v->processor may change. The vCPU being
> > -         * moved to another processor is guaranteed to sync PIR to vIRR,
> > -         * due to the involved scheduling cycle.
> > +         * If the target vCPU is running in root-mode, the interrupt handler
> > +         * starts to run.  Considering an IPI may arrive in the window between
> > +         * the call to vmx_intr_assist() and interrupts getting disabled, the
> > +         * interrupt handler should raise a softirq to ensure events will be
> > +         * delivered in time.
> >           */
> > -        unsigned int cpu = v->processor;
> > +        send_IPI_mask(cpumask_of(v->processor), posted_intr_vector);
> >  
> > -        /*
> > -         * For case 1, we send an IPI to the pCPU. When an IPI arrives, the
> > -         * target vCPU maybe is running in non-root mode, running in root
> > -         * mode, runnable or blocked. If the target vCPU is running in
> > -         * non-root mode, the hardware will sync PIR to vIRR for
> > -         * 'posted_intr_vector' is special to the pCPU. If the target vCPU is
> > -         * running in root-mode, the interrupt handler starts to run.
> > -         * Considering an IPI may arrive in the window between the call to
> > -         * vmx_intr_assist() and interrupts getting disabled, the interrupt
> > -         * handler should raise a softirq to ensure events will be delivered
> > -         * in time. If the target vCPU is runnable, it will sync PIR to
> > -         * vIRR next time it is chose to run. In this case, a IPI and a
> > -         * softirq is sent to a wrong vCPU which will not have any adverse
> > -         * effect. If the target vCPU is blocked, since vcpu_block() checks
> > -         * whether there is an event to be delivered through
> > -         * local_events_need_delivery() just after blocking, the vCPU must
> > -         * have synced PIR to vIRR. Similarly, there is a IPI and a softirq
> > -         * sent to a wrong vCPU.
> > -         */
> > -        if ( cpu != smp_processor_id() )
> > -            send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
> > -        /*
> > -         * For case 2, raising a softirq ensures PIR will be synced to vIRR.
> > -         * As any softirq will do, as an optimization we only raise one if
> > -         * none is pending already.
> > -         */
> > -        else if ( !softirq_pending(cpu) )
> > -            raise_softirq(VCPU_KICK_SOFTIRQ);
> > -    }
> > +    /*
> > +     * If the vCPU is not runnable or if it's the one currently running in this
> > +     * pCPU there's nothing to do, the vmentry code will already sync the PIR
> > +     * to IRR when resuming execution.
> > +     */
> >  }
> 
> Just for my own understanding - the "already" here relates to the code
> addition you make to vmx_intr_assist()?

Yes.

> And then - is this true even for an interrupt hitting between
> vmx_intr_assist() returning and the subsequent CLI in
> vmx_asm_vmexit_handler()?

Should be, as that IPI will raise a softirq that would force a jump
into vmx_process_softirqs, which in turn will jump to the start of
vmx_do_vmentry, thus calling vmx_intr_assist again.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

