Re: [Xen-devel] [PATCH v5 14/17] vmx: Properly handle notification event when vCPU is running
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@xxxxxxxxxx]
> Sent: Thursday, August 13, 2015 1:02 AM
> To: Wu, Feng
> Cc: xen-devel@xxxxxxxxxxxxx; Keir Fraser; Tian, Kevin; Jan Beulich; Andrew Cooper
> Subject: Re: [Xen-devel] [PATCH v5 14/17] vmx: Properly handle notification event when vCPU is running
>
> On Wed, Aug 12, 2015 at 10:35:35AM +0800, Feng Wu wrote:
> > When a vCPU is running in root mode and a notification event
> > has been injected to it, we need to set VCPU_KICK_SOFTIRQ for
> > the current CPU, so that the pending interrupt in PIRR will be
> > synced to vIRR before VM-Exit in time.
> >
> > CC: Kevin Tian <kevin.tian@xxxxxxxxx>
> > CC: Keir Fraser <keir@xxxxxxx>
> > CC: Jan Beulich <jbeulich@xxxxxxxx>
> > CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Signed-off-by: Feng Wu <feng.wu@xxxxxxxxx>
> > Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
> > ---
> > v4:
> > - Coding style.
> >
> > v3:
> > - Make pi_notification_interrupt() static
> >
> >  xen/arch/x86/hvm/vmx/vmx.c | 47 +++++++++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 46 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> > index e80d888..c8a4371 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -2033,6 +2033,51 @@ static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
> >      this_cpu(irq_count)++;
> >  }
> >
> > +/* Handle VT-d posted-interrupt when VCPU is running. */
> > +static void pi_notification_interrupt(struct cpu_user_regs *regs)
> > +{
> > +    /*
> > +     * We get here when a vCPU is running in root mode (such as via a
> > +     * hypercall, or any other reason which can result in a VM-Exit), and
> > +     * before the vCPU is back to non-root mode, an external interrupt
> > +     * from an assigned device happens and a notification event is
> > +     * delivered to this logical CPU.
> > +     *
> > +     * We need to set VCPU_KICK_SOFTIRQ for the current CPU, just like
> > +     * __vmx_deliver_posted_interrupt(), so the pending interrupt in PIRR
> > +     * will be synced to vIRR before VM-Exit in time.
> > +     *
> > +     * Please refer to the following code fragments from
> > +     * xen/arch/x86/hvm/vmx/entry.S:
> > +     *
> > +     * .Lvmx_do_vmentry
> > +     *
> > +     *     ......
> > +     *     point 1
> > +     *
> > +     *     cmp  %ecx,(%rdx,%rax,1)
> > +     *     jnz  .Lvmx_process_softirqs
> > +     *
> > +     *     ......
> > +     *
> > +     *     je   .Lvmx_launch
> > +     *
> > +     *     ......
> > +     *
> > +     * .Lvmx_process_softirqs:
> > +     *     sti
> > +     *     call do_softirq
> > +     *     jmp  .Lvmx_do_vmentry
> > +     *
> > +     * If the VT-d engine issues a notification event at point 1 above, it
> > +     * cannot be delivered to the guest during this VM-entry without
> > +     * raising the softirq in this notification handler.
> > +     */
> > +    raise_softirq(VCPU_KICK_SOFTIRQ);
> > +
> > +    ack_APIC_irq();
> > +    this_cpu(irq_count)++;
>
> Most (except the AMD?) have ack_APIC_irq() and such done at the start
> of the functions. Is there a particular need to diverge?

Nothing special here, I can move it to the beginning of this function.
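i.e. something like the below -- an untested sketch of just the reordering,
with the big comment above raise_softirq() kept as in the patch:

static void pi_notification_interrupt(struct cpu_user_regs *regs)
{
    /* Ack first, matching the other direct APIC vector handlers. */
    ack_APIC_irq();
    this_cpu(irq_count)++;

    /*
     * Raise VCPU_KICK_SOFTIRQ for the current CPU, so the pending
     * interrupt in PIRR is synced to vIRR before the next VM-entry.
     */
    raise_softirq(VCPU_KICK_SOFTIRQ);
}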
Thanks,
Feng

> > +}
> > +
> >  const struct hvm_function_table * __init start_vmx(void)
> >  {
> >      set_in_cr4(X86_CR4_VMXE);
> > @@ -2071,7 +2116,7 @@ const struct hvm_function_table * __init start_vmx(void)
> >
> >      if ( cpu_has_vmx_posted_intr_processing )
> >      {
> > -        alloc_direct_apic_vector(&posted_intr_vector, event_check_interrupt);
> > +        alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
> >
> >          if ( iommu_intpost )
> >              alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
> > --
> > 2.1.0
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel