
Re: [Xen-devel] [PATCH v6 15/18] vmx: Properly handle notification event when vCPU is running



>>> On 25.08.15 at 03:57, <feng.wu@xxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2035,6 +2035,52 @@ static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
>      this_cpu(irq_count)++;
>  }
>  
> +/* Handle VT-d posted-interrupt when VCPU is running. */
> +static void pi_notification_interrupt(struct cpu_user_regs *regs)
> +{
> +    ack_APIC_irq();
> +
> +    /*
> +     * We get here when a vCPU is running in root-mode (such as via hypercall,
> +     * or for any other reason which can result in a VM-Exit), and before the
> +     * vCPU is back to non-root mode, external interrupts from an assigned
> +     * device arrive and a notification event is delivered to this logical CPU.
> +     *
> +     * We need to set VCPU_KICK_SOFTIRQ for the current cpu, just like
> +     * __vmx_deliver_posted_interrupt() does, so the pending interrupt in PIRR
> +     * will be synced to vIRR before VM-Exit in time.
> +     *
> +     * Please refer to the following code fragments from
> +     * xen/arch/x86/hvm/vmx/entry.S:
> +     *
> +     *     .Lvmx_do_vmentry
> +     *
> +     *      ......
> +     *      point 1
> +     *
> +     *      cmp  %ecx,(%rdx,%rax,1)

I think retaining the "cli" right above this is quite critical for
understanding why code past this check isn't susceptible to the
issue anymore.
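I.e. something along these lines (just a sketch, combining the quoted
fragment with the "cli" in question):

     *      cli
     *      cmp  %ecx,(%rdx,%rax,1)
     *      jnz  .Lvmx_process_softirqs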

> +     *      jnz  .Lvmx_process_softirqs
> +     *
> +     *      ......
> +     *
> +     *      je   .Lvmx_launch
> +     *
> +     *      ......
> +     *
> +     *     .Lvmx_process_softirqs:
> +     *      sti
> +     *      call do_softirq
> +     *      jmp  .Lvmx_do_vmentry
> +     *
> +     * If the VT-d engine issues a notification event at point 1 above, it
> +     * cannot be delivered to the guest during this VM-entry without raising
> +     * the softirq in this notification handler.
> +     */
> +    raise_softirq(VCPU_KICK_SOFTIRQ);
> +
> +    this_cpu(irq_count)++;
> +}
> +
>  const struct hvm_function_table * __init start_vmx(void)
>  {
>      set_in_cr4(X86_CR4_VMXE);
> @@ -2073,7 +2119,7 @@ const struct hvm_function_table * __init start_vmx(void)
>  
>      if ( cpu_has_vmx_posted_intr_processing )
>      {
> -        alloc_direct_apic_vector(&posted_intr_vector, event_check_interrupt);
> +        alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
>  
>          if ( iommu_intpost )
>              alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);

Considering that you do this setup independently of iommu_intpost,
won't this (namely, but not only, for the !iommu_intpost case) result
in a whole lot of useless softirqs being raised? IOW - shouldn't this
setup be conditional, and shouldn't the handler also only
conditionally raise the softirq (to limit their amount as much as
possible)?
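I.e. something along these lines (just a sketch of what I mean, keeping
event_check_interrupt as the fallback for the !iommu_intpost case):

    if ( cpu_has_vmx_posted_intr_processing )
    {
        if ( iommu_intpost )
        {
            /* VT-d PI in use: the handler needs to raise VCPU_KICK_SOFTIRQ. */
            alloc_direct_apic_vector(&posted_intr_vector,
                                     pi_notification_interrupt);
            alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
        }
        else
            /* No VT-d PI: the plain event-check handler suffices. */
            alloc_direct_apic_vector(&posted_intr_vector,
                                     event_check_interrupt);
    }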

Yang, in this context: Why does __vmx_deliver_posted_interrupt()
not use cpu_raise_softirq(), instead of kind of open-coding it (see your
d7dafa375b ["VMX: Add posted interrupt supporting"])?
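For reference, the contrast I mean is roughly the following (a sketch
only, not the exact code in the tree):

    /* Open-coded variant: set the softirq bit, then send the PI vector. */
    if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu)) &&
         (cpu != smp_processor_id()) )
        send_IPI_mask(cpumask_of(cpu), posted_intr_vector);

    /* Generic helper: raises the softirq and kicks the remote CPU. */
    cpu_raise_softirq(cpu, VCPU_KICK_SOFTIRQ);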

Jan

