Re: [Xen-devel] [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue
On August 22, 2016 8:04 PM, <JBeulich@xxxxxxxx> wrote:
>>>> On 22.08.16 at 13:41, <xuquan8@xxxxxxxxxx> wrote:
>> On August 22, 2016 6:36 PM, <JBeulich@xxxxxxxx> wrote:
>>>>>> On 19.08.16 at 14:58, <xuquan8@xxxxxxxxxx> wrote:
>>>> From 9b2df963c13ad27e2cffbeddfa3267782ac3da2a Mon Sep 17 00:00:00 2001
>>>> From: Quan Xu <xuquan8@xxxxxxxxxx>
>>>> Date: Fri, 19 Aug 2016 20:40:31 +0800
>>>> Subject: [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue
>>>>
>>>> When Xen apicv is enabled, wall clock time runs fast on a Windows7-32
>>>> guest under high load (with 2 vCPUs; as captured with xentrace, under
>>>> high load the count of IPI interrupts between these vCPUs increases
>>>> rapidly).
>>>>
>>>> If an IPI interrupt (vector 0xe1) and a periodic timer interrupt (vector
>>>> 0xd1) are both pending (their bits are set in VIRR), the IPI interrupt
>>>> unfortunately has higher priority than the periodic timer interrupt. Xen
>>>> writes the IPI interrupt bit set in VIRR into the guest interrupt status
>>>> (RVI) as the higher priority one, and apicv (Virtual-Interrupt Delivery)
>>>> delivers the IPI interrupt within VMX non-root operation without a VM
>>>> exit. Within VMX non-root operation, if the periodic timer interrupt's
>>>> bit is set in VIRR and is the highest, apicv delivers the periodic timer
>>>> interrupt within VMX non-root operation as well.
>>>>
>>>> But with the current code, if Xen doesn't write the periodic timer
>>>> interrupt bit set in VIRR into the guest interrupt status (RVI) directly,
>>>> Xen is not aware of this case and does not decrease the count
>>>> (pending_intr_nr) of pending periodic timer interrupts, so Xen will
>>>> deliver a periodic timer interrupt again. The guest therefore receives
>>>> extra periodic timer interrupts.
>>>>
>>>> If the periodic timer interrupt has been delivered but is not the highest
>>>> priority one, make Xen aware of this case so that it decreases the count
>>>> of pending periodic timer interrupts.
>>>
>>> I can see the issue you're trying to address, but for one - doesn't
>>> this lead to other double accounting, namely once the pt irq becomes
>>> the highest priority one?
>>>
>>
>> It does NOT lead to other double accounting.
>> If the pt irq becomes the highest priority one, the intack is the pt one.
>> Then:
>>
>> +    else
>> +        pt_intr_post(v, intack);
>
> As just said in reply to Yang: If this is still the same interrupt instance
> as in a prior run through here which took the if() branch, this change looks
> like having the potential of double accounting.
>

I very much appreciate your detailed review.
It looks like that, but it actually doesn't happen, thanks to the key field
'pt->irq_issued':

In pt_update_irq(), once the pt irq is issued, pt->irq_issued is set.
In pt_intr_post(), pt->irq_issued is cleared before touching the count
'pt->pending_intr_nr'.

So under your assumption, at the second call to pt_intr_post(),
'pt->irq_issued' is already clear, is_pt_irq() finds no match and returns
NULL, and pt_intr_post() returns early without ever touching the count
'pt->pending_intr_nr':

------
void pt_intr_post(struct vcpu *v, struct hvm_intack intack)
{
    ...
    pt = is_pt_irq(v, intack);
    if ( pt == NULL )
    {
        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
        return;
    }
    ...
}

static struct periodic_time *is_pt_irq(
    struct vcpu *v, struct hvm_intack intack)
{
    ...
    list_for_each_entry ( pt, head, list )
    {
        if ( pt->pending_intr_nr && pt->irq_issued &&
             (intack.vector == pt_irq_vector(pt, intack.source)) )
            return pt;
    }

    return NULL;
}
------
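To make that concrete, here is a standalone toy model of the guard (this is
NOT the Xen code; the model_* names and the main() driver are invented for
illustration only, and locking, the timer list and vector matching are all
left out):

------
#include <stdio.h>

/* Minimal stand-in for the two fields that matter here. */
struct periodic_time {
    unsigned int pending_intr_nr;  /* interrupts still owed to the guest */
    int irq_issued;                /* an instance is currently in flight */
};

/* Models pt_update_irq(): issue one pending interrupt, mark it in flight. */
static void model_update_irq(struct periodic_time *pt)
{
    if ( pt->pending_intr_nr && !pt->irq_issued )
        pt->irq_issued = 1;
}

/* Models is_pt_irq(): only match an instance that is still marked issued. */
static struct periodic_time *model_is_pt_irq(struct periodic_time *pt)
{
    return (pt->pending_intr_nr && pt->irq_issued) ? pt : NULL;
}

/* Models pt_intr_post(): clear irq_issued before touching pending_intr_nr. */
static void model_intr_post(struct periodic_time *pt)
{
    if ( model_is_pt_irq(pt) == NULL )
        return;                 /* second call: no-op, no double accounting */
    pt->irq_issued = 0;
    pt->pending_intr_nr--;
}

int main(void)
{
    struct periodic_time pt = { .pending_intr_nr = 1, .irq_issued = 0 };

    model_update_irq(&pt);   /* interrupt issued, irq_issued = 1 */
    model_intr_post(&pt);    /* accounted once: pending_intr_nr 1 -> 0 */
    model_intr_post(&pt);    /* hypothetical second post: returns early */
    printf("pending_intr_nr = %u\n", pt.pending_intr_nr);   /* prints 0 */
    return 0;
}
------

Whatever the ordering, pending_intr_nr can only be decremented by the one
call that still finds irq_issued set.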
__IIUC__, this question comes down to whether the following pseudocode, which
details the behavior of virtual-interrupt delivery, is __not__ atomic:

    Vector <- RVI;
    VISR[Vector] <- 1;
    SVI <- Vector;
    VPPR <- Vector & F0H;
    VIRR[Vector] <- 0;
    IF any bits set in VIRR
        THEN RVI <- highest index of bit set in VIRR
        ELSE RVI <- 0
    FI;
    Deliver interrupt with Vector through IDT;
    ...
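To make those steps easy to trace for the scenario in the patch description
(IPI 0xe1 and periodic timer 0xd1 both pending), here is a standalone model
of the pseudocode (invented names, NOT Xen or SDM code; EOI virtualization
and the VPPR priority check before the second delivery are deliberately
omitted):

------
#include <stdint.h>
#include <stdio.h>

struct virt_apic {
    uint8_t virr[256];   /* virtual IRR: one flag per vector */
    uint8_t visr[256];   /* virtual ISR */
    uint8_t rvi, svi, vppr;
};

/* Highest vector set in VIRR, or 0 if none. */
static uint8_t highest_virr(const struct virt_apic *a)
{
    for ( int v = 255; v > 0; v-- )
        if ( a->virr[v] )
            return (uint8_t)v;
    return 0;
}

/* One virtual-interrupt delivery event, following the quoted steps. */
static uint8_t deliver_virtual_interrupt(struct virt_apic *a)
{
    uint8_t vector = a->rvi;    /* Vector <- RVI */
    a->visr[vector] = 1;        /* VISR[Vector] <- 1 */
    a->svi = vector;            /* SVI <- Vector */
    a->vppr = vector & 0xF0;    /* VPPR <- Vector & F0H */
    a->virr[vector] = 0;        /* VIRR[Vector] <- 0 */
    a->rvi = highest_virr(a);   /* RVI <- highest remaining VIRR bit, or 0 */
    return vector;              /* delivered through the IDT */
}

int main(void)
{
    struct virt_apic a = { .rvi = 0 };

    a.virr[0xe1] = 1;            /* pending IPI */
    a.virr[0xd1] = 1;            /* pending periodic timer */
    a.rvi = highest_virr(&a);    /* 0xe1 wins on priority */

    printf("delivered %#x\n",
           (unsigned int)deliver_virtual_interrupt(&a));    /* 0xe1 */
    /* After the guest EOIs 0xe1 (EOI virtualization not modeled here),
     * 0xd1 is now in RVI and is delivered without any VM exit, i.e.
     * without Xen seeing it and without pending_intr_nr being touched. */
    printf("delivered %#x\n",
           (unsigned int)deliver_virtual_interrupt(&a));    /* 0xd1 */
    return 0;
}
------

The point of the patch is exactly that second delivery: it happens entirely
in VMX non-root operation, so Xen never gets a chance to account for it.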
We'd better check this point first; as Yang said, this sequence is atomic.

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel