
Re: [Xen-devel] [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue



> From: Xuquan (Euler) [mailto:xuquan8@xxxxxxxxxx]
> Sent: Tuesday, September 20, 2016 8:25 AM
> 
> On September 19, 2016 5:25 PM, Tian Kevin <kevin.tian@xxxxxxxxx> wrote:
> >> From: Xuquan (Euler) [mailto:xuquan8@xxxxxxxxxx]
> >> Sent: Monday, September 12, 2016 5:08 PM
> >>
> >> On September 12, 2016 3:58 PM, Tian, Kevin <kevin.tian@xxxxxxxxx> wrote:
> >> >> From: Xuquan (Euler) [mailto:xuquan8@xxxxxxxxxx]
> >> >> Sent: Friday, September 09, 2016 11:02 AM
> >> >>
> >> >> On August 30, 2016 1:58 PM, Tian, Kevin <kevin.tian@xxxxxxxxx> wrote:
> >> >> >> From: Xuquan (Euler) [mailto:xuquan8@xxxxxxxxxx]
> >> >> >> Sent: Friday, August 19, 2016 8:59 PM
> >> >> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> >> >> index 1d5d287..cc247c3 100644
> >> >> --- a/xen/arch/x86/hvm/vlapic.c
> >> >> +++ b/xen/arch/x86/hvm/vlapic.c
> >> >> @@ -433,6 +433,11 @@ void vlapic_EOI_set(struct vlapic *vlapic)
> >> >>  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >> >>  {
> >> >>      struct domain *d = vlapic_domain(vlapic);
> >> >> +    struct hvm_intack pt_intack;
> >> >> +
> >> >> +    pt_intack.vector = vector;
> >> >> +    pt_intack.source = hvm_intsrc_lapic;
> >> >> +    pt_intr_post(vlapic_vcpu(vlapic), pt_intack);
> >> >>
> >> >>      if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> >> >>          vioapic_update_EOI(d, vector);
> >> >> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
> >> >> index 8fca08c..29d9bbf 100644
> >> >> --- a/xen/arch/x86/hvm/vmx/intr.c
> >> >> +++ b/xen/arch/x86/hvm/vmx/intr.c
> >> >> @@ -333,8 +333,6 @@ void vmx_intr_assist(void)
> >> >>              clear_bit(i, &v->arch.hvm_vmx.eoi_exitmap_changed);
> >> >>              __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
> >> >>          }
> >> >> -
> >> >> -        pt_intr_post(v, intack);
> >> >>      }
> >> >>      else
> >> >>      {
> >> >>
> >> >
> >> >Because we update the pt irq on every vmentry, there is a chance
> >> >that an already-injected instance (before the EOI-induced exit
> >> >happens) will incur another pending IRR setting if a VM exit happens
> >> >between HW virtual interrupt injection (vIRR->0, vISR->1) and the
> >> >EOI-induced exit (vISR->0), since pt_intr_post hasn't been invoked
> >> >yet. I guess this is the reason why you still see a faster wallclock.
> >> >
> >>
> >> Agreed, that is a good description. My (poorer) description was from another angle.
> >>
> >> >I think you need to mark this pending_intr_post situation explicitly.
> >> >Then pt_update_irq should skip such a pt timer when pending_intr_post
> >> >of that timer is true (otherwise the update is meaningless, since the
> >> >previous one hasn't been posted yet). Then, with your change to post
> >> >in the EOI-induced exit handler, it should work correctly to meet the
> >> >goal - one virtual interrupt delivery for one pending pt intr...
> >> >
> >> I think we are at least on the right track.
> >> But I can't follow 'pending_intr_post' - is that a new parameter? Thanks.
> >>
> >>
> >
> >Yes, a new parameter to record whether an intr_post operation is pending.
> 
> 
> The existing parameter 'irq_issued' looks good. I tested with the
> modification below last night, and it is working.
> If it is okay, I will send out v2.

Yes, it looks like it could serve the purpose.
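
To spell out the lifecycle we are relying on here (a rough sketch of the
flow as I understand it, not the exact vpt.c code):

    /*
     * pt_update_irq()   : at vmentry -- picks earliest_pt, sets
     *                     earliest_pt->irq_issued = 1, and the vector
     *                     gets injected (vIRR -> 1).
     * guest runs        : the CPU delivers the vector
     *                     (vIRR -> 0, vISR -> 1).
     * EOI-induced exit  : vlapic_handle_EOI() -> pt_intr_post(), which
     *                     accounts the tick and clears irq_issued.
     *
     * While irq_issued is still set, pt_update_irq() must not treat the
     * same timer as pending again, or a single periodic tick gets
     * injected more than once (the faster-wallclock symptom).
     */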

> 
> Quan
> 
> ==== modification ====
> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> index 1d5d287..cc247c3 100644
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -433,6 +433,11 @@ void vlapic_EOI_set(struct vlapic *vlapic)
>  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>  {
>      struct domain *d = vlapic_domain(vlapic);
> +    struct hvm_intack pt_intack;
> +
> +    pt_intack.vector = vector;
> +    pt_intack.source = hvm_intsrc_lapic;
> +    pt_intr_post(vlapic_vcpu(vlapic), pt_intack);
> 
>      if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
>          vioapic_update_EOI(d, vector);
> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
> index 8fca08c..29d9bbf 100644
> --- a/xen/arch/x86/hvm/vmx/intr.c
> +++ b/xen/arch/x86/hvm/vmx/intr.c
> @@ -333,8 +333,6 @@ void vmx_intr_assist(void)
>              clear_bit(i, &v->arch.hvm_vmx.eoi_exitmap_changed);
>              __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
>          }
> -
> -        pt_intr_post(v, intack);
>      }
>      else
>      {
> diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
> index 5c48fdb..620ca68 100644
> --- a/xen/arch/x86/hvm/vpt.c
> +++ b/xen/arch/x86/hvm/vpt.c
> @@ -267,6 +267,11 @@ int pt_update_irq(struct vcpu *v)
>          return -1;
>      }
> 
> +    if ( earliest_pt->irq_issued )
> +    {
> +        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
> +        return -1;
> +    }

You need to move the check inside the loop, so that other pt timers are
not blocked in such a situation...
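
Something along these lines inside the existing loop that selects
earliest_pt (a rough, untested sketch only -- the real loop also has to
keep the masked-IRQ handling, which I omit here):

    list_for_each_entry ( pt, head, list )
    {
        if ( !pt->pending_intr_nr )
            continue;

        /*
         * The interrupt injected for this timer has not been posted
         * yet (irq_issued still set), so skip just this timer rather
         * than bailing out of pt_update_irq() for the whole vCPU;
         * other pt timers can still be chosen as earliest_pt.
         */
        if ( pt->irq_issued )
            continue;

        if ( pt->last_plt_gtime + pt->period < max_lag )
        {
            max_lag = pt->last_plt_gtime + pt->period;
            earliest_pt = pt;
        }
    }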

Thanks
Kevin


 

