
Re: [Xen-devel] [PATCH v2] x86/apicv: fix RTC periodic timer and apicv issue



>>> On 24.09.16 at 01:34, <kevin.tian@xxxxxxxxx> wrote:
>>  From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Friday, September 23, 2016 11:34 PM
>> 
>> >>> On 20.09.16 at 15:30, <xuquan8@xxxxxxxxxx> wrote:
>> > --- a/xen/arch/x86/hvm/vlapic.c
>> > +++ b/xen/arch/x86/hvm/vlapic.c
>> > @@ -433,6 +433,12 @@ void vlapic_EOI_set(struct vlapic *vlapic)
>> >  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>> >  {
>> >      struct domain *d = vlapic_domain(vlapic);
>> > +    struct vcpu *v = vlapic_vcpu(vlapic);
>> > +    struct hvm_intack pt_intack;
>> > +
>> > +    pt_intack.vector = vector;
>> > +    pt_intack.source = hvm_intsrc_lapic;
>> > +    pt_intr_post(v, pt_intack);
>> 
>> This also sits on the EOI LAPIC register write path, i.e. the change
>> then also affects non-apicv environments.
> 
> The new logic should be entered only when EOI-induced exit happens.

To me this reply reads ambiguously: does "should" here mean the
patch needs further adjustment in your opinion, or that you think
the patch already does what is needed to ensure the new logic gets
invoked only in the EOI-induced exit case? To me it continues to
look like the former.
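
To make the question concrete: if the intention is to restrict the
new call to the EOI-induced exit case, I'd expect something along
the lines of the below in vlapic_handle_EOI() (the predicate name is
purely illustrative - whatever check the tree provides for virtual
interrupt delivery being in use would have to go there):

    /* Illustrative only: post to the periodic timer code just when
     * the EOI came in via an EOI-induced VM exit. */
    if ( vlapic_virtual_intr_delivery_enabled(vlapic) )
    {
        struct hvm_intack pt_intack = {
            .source = hvm_intsrc_lapic,
            .vector = vector,
        };

        pt_intr_post(vlapic_vcpu(vlapic), pt_intack);
    }

As posted, the call is made unconditionally, which is what my remark
about the EOI register write path was about.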

>> Furthermore - don't we have the same problem as with v1 again
>> then? What prevents multiple EOIs from coming here before the timer
>> interrupt actually gets handled? You'd then clear ->irq_issued
>> each time, rendering your change to pt_update_irq() ineffective.
> 
> Based on this patch, one irq_issued should cause only one EOI on the
> related pt vector, and this EOI exit clears irq_issued; the next EOI
> would be seen only upon another injection, when irq_issued is set
> again... However, there might be an issue if this pt vector is shared
> with another device interrupt, although that is not a typical case...

That's a common problem: I don't think we can consider only the
typical case. Or does hardware only deal with those, too?

And then the "should" here reads as ambiguously to me as the
earlier one, especially since you seem to consider EOIs for only
the one vector of interest, whereas my reply was specifically meant
to also cover EOIs for other vectors.
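
To illustrate (a sketch only, not a proposal - pt_post_eoi() is a
made-up helper): clearing ->irq_issued would at least need to be
limited to an EOI of the vector that was actually injected for the
timer, and to the time window where an injection is outstanding:

    /* Sketch: clear ->irq_issued only for the timer whose vector got
     * EOIed, and only while an injection is actually outstanding. */
    static void pt_post_eoi(struct vcpu *v, uint8_t vector)
    {
        struct periodic_time *pt;

        spin_lock(&v->arch.hvm_vcpu.tm_lock);

        list_for_each_entry ( pt, &v->arch.hvm_vcpu.tm_list, list )
            if ( pt->irq_issued &&
                 pt_irq_vector(pt, hvm_intsrc_lapic) == vector )
            {
                pt->irq_issued = 0;
                break;
            }

        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
    }

And even that doesn't help for the shared-vector case, since the EOI
may then belong to the other device's interrupt.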

>> > --- a/xen/arch/x86/hvm/vmx/intr.c
>> > +++ b/xen/arch/x86/hvm/vmx/intr.c
>> > @@ -333,8 +333,6 @@ void vmx_intr_assist(void)
>> >              clear_bit(i, &v->arch.hvm_vmx.eoi_exitmap_changed);
>> >              __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
>> >          }
>> > -
>> > -        pt_intr_post(v, intack);
>> >      }
>> 
>> I'll defer to the VMX maintainers to determine whether removing this
>> but not the other one is correct.
> 
> This is correct. The other one is for non-apicv scenario.

Sure, but the other path will also get used under certain conditions
even when apicv is in use.

>> > --- a/xen/arch/x86/hvm/vpt.c
>> > +++ b/xen/arch/x86/hvm/vpt.c
>> > @@ -252,7 +252,8 @@ int pt_update_irq(struct vcpu *v)
>> >              }
>> >              else
>> >              {
>> > -                if ( (pt->last_plt_gtime + pt->period) < max_lag )
>> > +                if ( (pt->last_plt_gtime + pt->period) < max_lag &&
>> > +                     !pt->irq_issued )
>> >                  {
>> >                      max_lag = pt->last_plt_gtime + pt->period;
>> >                      earliest_pt = pt;
>> 
>> Looking at the rest of the code I really wonder why this check
>> hadn't been there from the beginning. Tim, do you recall whether
>> this was intentional (as opposed to simply having been an
>> oversight)?
> 
> Possibly simplify the emulation by relying on IRR/ISR to handle the
> pending situation?

Again I'm suffering ambiguity here: Do you suggest a possible
explanation for why things are the way they are, or a possible
code adjustment to be done?
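
(For completeness, the IRR/ISR based variant I could imagine you
meaning would look roughly like the below - a sketch only, with
vlapic_test_vector() merely standing in for whatever accessor tests
the respective vLAPIC bit:)

    /* Sketch: instead of (or in addition to) ->irq_issued, ask the
     * vLAPIC whether the previously injected instance is still pending
     * (IRR) or in service (ISR), and don't pick the timer then. */
    static bool_t pt_vector_outstanding(struct vcpu *v, int vector)
    {
        struct vlapic *vlapic = vcpu_vlapic(v);

        return vlapic_test_vector(vector, &vlapic->regs->data[APIC_IRR]) ||
               vlapic_test_vector(vector, &vlapic->regs->data[APIC_ISR]);
    }

But whether that, or merely an explanation of the current code, is
what you mean is exactly my question.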

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

