
Re: [Xen-devel] [PATCH v7 15/17] vmx: VT-d posted-interrupt core logic handling



On Thu, 2015-09-17 at 15:30 +0100, George Dunlap wrote:
> On 09/17/2015 01:40 PM, Dario Faggioli wrote:

> >> I haven't yet decided whether I prefer my original suggestion of
> >> switching the interrupt and putting things on the wake-up list in
> >> vcpu_block(), or of deferring adding things to the wake-up list until
> >> the actual context switch.
> >>
> > Sorry but I don't get what you mean by the latter.
> > 
> > In particular, I don't think I understand what you mean by, and how it
> > would work to, "defer[ring] adding things to the wake-up list until
> > the actual context switch"... In what case would you defer stuff to
> > the context switch?
> 
> So one option is to do the "blocking" stuff in an arch-specific call
> from vcpu_block():
> 
> vcpu_block()
>  set(_VPF_blocked)
>  v->arch.block()
>   - Add v to pcpu.pi_blocked_vcpu
>   - NV => pi_wakeup_vector
>  local_events_need_delivery()
>    hvm_vcpu_has_pending_irq()
> 
>  ...
>  context_switch(): nothing
> 
> The other is to do the "blocking" stuff in the context switch (similar
> to what's done now):
> 
> vcpu_block()
>   set(_VPF_blocked)
>   local_events_need_delivery()
>     hvm_vcpu_has_pending_irq()
>   ...
> context_switch
>    v->arch.block()
>     - Add v to pcpu.pi_blocked_vcpu
>     - NV => pi_wakeup_vector
> 
Ok, thanks for elaborating.
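
Just to check that I'm reading option 1 right, I picture the arch hook
looking more or less like this (only a sketch: identifiers like
pi_blocked_vcpu, pi_blocked_vcpu_lock and pi_desc.nv are my own guesses,
modelled on the names used in this thread, not the ones from the patch):

    static void vmx_vcpu_block(struct vcpu *v)
    {
        unsigned long flags;
        unsigned int cpu = v->processor;
        struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;

        /* Put v on the pCPU list that pi_wakeup_vector will scan. */
        spin_lock_irqsave(&per_cpu(pi_blocked_vcpu_lock, cpu), flags);
        list_add_tail(&v->arch.hvm_vmx.pi_blocked_vcpu_list,
                      &per_cpu(pi_blocked_vcpu, cpu));
        spin_unlock_irqrestore(&per_cpu(pi_blocked_vcpu_lock, cpu), flags);

        /*
         * NV => pi_wakeup_vector, so that a posted interrupt arriving
         * while v is blocked lands in the wakeup handler.  The real
         * code would want an atomic update (cmpxchg) here, of course.
         */
        pi_desc->nv = pi_wakeup_vector;
    }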

> If we do it the first way, and an interrupt comes in before the context
> switch is finished, it will call pi_wakeup_vector, which will DTRT --
> take v off the pi_blocked_vcpu list and call vcpu_unblock() (which in
> turn will set NV back to posted_intr_vector).
> 
> If we do it the second way, and an interrupt comes in before the context
> switch is finished, it will call posted_intr_vector.  We can, at that
> point, check to see if the current vcpu is marked as blocked.  If it is,
> we can call vcpu_unblock() without having to modify NV or worry about
> adding / removing the vcpu from the pi_blocked_vcpu list.
> 
Right.
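
And, in that case, the handler behind posted_intr_vector would only
need something along these lines, if I understand correctly (again just
a sketch, with the handler name and details made up rather than taken
from the patch):

    static void pi_notification_interrupt(struct cpu_user_regs *regs)
    {
        struct vcpu *v = current;

        ack_APIC_irq();

        /*
         * If the current vcpu has marked itself as blocked but has not
         * been context switched away yet, just unblock it.  No need to
         * touch NV or the pi_blocked_vcpu list, since in this scheme
         * neither has been updated yet at this point.
         */
        if ( test_bit(_VPF_blocked, &v->pause_flags) )
            vcpu_unblock(v);
    }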

> The thing I like about the first one is that it makes PI blocking the
> same as normal blocking -- everything happens in the same place; so the
> code is cleaner and easier to understand.
> 
Indeed.

> The thing I like about the second one is that it cleverly avoids having
> to do all the work of adding the vcpu to the list and then searching to
> remove it if the vcpu in question gets woken up on the way to being
> blocked.  So the code may end up being faster for workloads where that
> happens frequently.
> 
Maybe some instrumentation could be added, to figure out how frequent
this is in at least a couple of workloads thought to be common, and a
few tests performed? Feng?
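
Something as simple as a perf counter in the "woken before the context
switch completed" path would probably do. E.g. (just a sketch; the
counter name is made up and would need a matching PERFCOUNTER() entry
in perfc_defn.h):

    /*
     * Count how often a vcpu that has already set _VPF_blocked gets
     * woken again before the context switch actually completes.
     */
    if ( test_bit(_VPF_blocked, &v->pause_flags) )
    {
        perfc_incr(pi_early_wakeup);    /* hypothetical counter name */
        vcpu_unblock(v);
    }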

One thing that may be worth giving some thought to is whether
interrupts are enabled or not along that path. I mean, during
vcpu_block() SCHEDULE_SOFTIRQ is raised, to invoke the scheduler. Then
(at least) during schedule(), IRQs are disabled. They're re-enabled
near the end of schedule(), and re-disabled again for the actual
context switch (~= around __context_switch()).
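
Roughly like this (my own annotation, not lifted from the code; the
exact points where IRQs get toggled may be slightly off):

    /*
     *   vcpu_block()                       IRQs on
     *     set_bit(_VPF_blocked, ...)
     *     raise_softirq(SCHEDULE_SOFTIRQ)
     *   ...
     *   schedule()
     *     scheduler lock taken             IRQs off
     *     ... pick the next vcpu ...
     *     scheduler lock released          IRQs on (briefly)
     *   context_switch()
     *     local_irq_disable()              IRQs off
     *     __context_switch()
     *     ...                              IRQs on again once done
     */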

My point being that IRQs are going to be disabled for a significant
amount of time between vcpu_block() and the actual context switch. And
solution 2 needs IRQs to be enabled for the wakeup interrupt to arrive
in time to spare us a potentially pointless NV update, doesn't it? If
so, that may (negatively) affect the probability of actually being able
to benefit from the optimization...

Anyway, I'm not sure. I think my gut feeling is in favour of solution
1 but, really, it's hard to tell without actually trying.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

