
Re: [Xen-devel] [PATCH] vmx, apicv: fix save/restore issue with apicv



Jan Beulich wrote on 2014-10-14:
> >>> On 14.10.14 at 07:43, <yang.z.zhang@xxxxxxxxx> wrote:
> > Jan Beulich wrote on 2014-10-13:
> >> >>> On 11.10.14 at 09:54, <yang.z.zhang@xxxxxxxxx> wrote:
> >> > @@ -1597,6 +1599,29 @@ static void vmx_process_isr(int isr, struct vcpu *v)
> >> >          status |= isr << VMX_GUEST_INTR_STATUS_SVI_OFFSET;
> >> >          __vmwrite(GUEST_INTR_STATUS, status);
> >> >      }
> >> > +
> >> > +    eoi_exit_bitmap = (unsigned int *)v->arch.hvm_vmx.eoi_exit_bitmap;
> >>
> >> No casts like this please. This is a bitmap and would hence
> >> preferably be treated that way consistently. The code here isn't
> >> performance critical.
> >
> > Yes, it's performance critical from the live migration point of view,
> > and that's why I used the cast here ...
> 
> Compared to everything else involved in migration, a few extra cycles here
> surely don't matter at all. What does matter is maintainability of this
> already not always easy to understand code.

Xen is far behind KVM on live migration. If we want to remain competitive,
I'd suggest cleaning up the current migration logic to make it more
efficient.
> 
> >> > +    /*
> >> > +     * We cannot simply copy the TMR as the eoi exit bitmap because of
> >> > +     * the special periodic timer interrupt, which is edge-triggered but
> >> > +     * needs a mandatory EOI-induced VM exit to let pending periodic
> >> > +     * timer interrupts have the chance to be injected for compensation.
> >> > +     * Here we construct the eoi exit bitmap via (IRR | ISR). Though this
> >> > +     * may cause unnecessary eoi exits for interrupts pending in IRR or
> >> > +     * ISR during save/restore, each pending interrupt only causes one
> >> > +     * vmexit. Subsequent interrupts will adjust the eoi exit bitmap
> >> > +     * correctly, so the performance hit can be ignored.
> >> > +     */
> >> > +    for ( i = 0x10; i < 0x80; i += 0x10 )
> >>
> >> So you skip the first 32 vectors instead of just the first 16. That's
> >> not in line with the APIC definitions. Also basing the loop variable
> >> on APIC register numbers is kind of odd. I'd really suggest using
> >> something more natural, like vector space (which would also allow you to
> >> use existing helper functions/macros).
> >>
> >
> > ...skip the first 32 vectors and operate on register groups instead of
> > individual vectors.
> >
> > Also, the spec says "21-31 - Intel reserved. Do not use.", so it's OK to
> > skip them.
> 
> Please distinguish between the APIC specification and the CPU specification
> here. The APIC one only really precludes vectors 0...15 from being used.

Ok. I will check the APIC specification.
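
If I rework it along those lines, iterating over the vector space and
treating eoi_exit_bitmap as a bitmap rather than casting it, it would look
roughly like the untested sketch below (the exact helper names may need
adjusting):

    unsigned int vector;

    for ( vector = 16; vector < 256; vector++ )
    {
        unsigned int group = (vector / 32) * 0x10;
        uint32_t pending = vlapic_get_reg(vlapic, APIC_IRR + group) |
                           vlapic_get_reg(vlapic, APIC_ISR + group);

        /* Force an EOI-induced exit for every vector that is still pending
         * or in service, instead of copying raw register words. */
        if ( pending & (1U << (vector % 32)) )
            set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap);
    }

This also starts at vector 16 rather than 32, matching your point above.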

> 
> >> > +    {
> >> > +        val = vlapic_get_reg(vlapic, APIC_IRR + i);
> >> > +        val |= vlapic_get_reg(vlapic, APIC_ISR + i);
> >>
> >> The comment above doesn't really make clear why not just IRR.
> >
> > The corresponding TMR bit is changed at the time an interrupt is accepted
> > into the IRR. This means an interrupt that is already in the IRR or ISR
> > gets no further chance to update the TMR and the eoi exit bitmap. So we
> > need to check both IRR and ISR while rebuilding the eoi exit bitmap; it
> > is not enough to check only the IRR.
> 
> But then why not look at TMR right away? It's being kept up to date as long as
> the guest is running, and gets migrated over afterwards. Considering what
> vmx_update_eoi_exit_bitmap() does, you really want to OR TMR into
> EOI_EXIT_BITMAP (which post-migration could just be an assignment rather
> than an OR).

No, we cannot use the TMR directly. As I said in the comment, the periodic
timer interrupt is tricky: it is edge-triggered, but it still needs an
EOI-induced vmexit so that pending timer interrupts get a chance to be
injected for compensation.
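
To make that concrete, here is a small standalone illustration (not Xen
code; the vector 0xef is just an example value): with a timer tick pending
at save time, the TMR bit for that vector is clear because it is
edge-triggered, so copying the TMR would never force the EOI exit, while
(IRR | ISR) does.

    #include <stdint.h>
    #include <stdio.h>

    #define TIMER_VEC 0xef   /* example edge-triggered periodic timer vector */

    int main(void)
    {
        uint32_t irr[8] = { 0 }, isr[8] = { 0 }, tmr[8] = { 0 };
        unsigned int idx = TIMER_VEC / 32;
        uint32_t mask = 1u << (TIMER_VEC % 32);
        uint32_t from_tmr, from_pending;

        /* A timer tick was pending at save time: the IRR bit is set, but the
         * TMR bit stays clear because edge-triggered vectors never set it. */
        irr[idx] |= mask;

        from_tmr = tmr[idx];                 /* plain TMR copy           */
        from_pending = irr[idx] | isr[idx];  /* rebuild from (IRR | ISR) */

        printf("TMR copy forces the EOI exit:  %s\n",
               (from_tmr & mask) ? "yes" : "no");
        printf("IRR|ISR forces the EOI exit:   %s\n",
               (from_pending & mask) ? "yes" : "no");
        return 0;
    }

Without that EOI-induced exit, the pending ticks would never get the
compensation injection after restore.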

> 
> Jan

Best regards,
Yang





 

