
Re: [Xen-devel] Fwd: [v3 12/15] vmx: posted-interrupt handling when vCPU is blocked



So, first of all, thanks Andrew for drawing my attention to this...

On Mon, 2015-06-29 at 18:35 +0100, Andrew Cooper wrote:
> This patch includes the following aspects:
> - Add a global vector to wake up the blocked vCPU
>   when an interrupt is being posted to it (this
>   part was suggested by Yang Zhang <yang.z.zhang@xxxxxxxxx>).
> - Add a new per-vCPU tasklet to wake up the blocked
>   vCPU. It can be used when vcpu_unblock cannot
>   be called directly.
> - Define two per-cpu variables:
>       * pi_blocked_vcpu:
>       A list storing the vCPUs which were blocked on this pCPU.
> 
>       * pi_blocked_vcpu_lock:
>       The spinlock to protect pi_blocked_vcpu.
> 
> Signed-off-by: Feng Wu <feng.wu@xxxxxxxxx>
> ---
>
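For reference, this is how I picture the two per-cpu variables described
in the changelog above (a sketch of mine, assuming plain DEFINE_PER_CPU
declarations; the actual patch may well differ in placement):

    /* My sketch, not the patch's code: one wakeup list per pCPU,
     * plus the lock protecting it, as per the changelog above. */
    static DEFINE_PER_CPU(struct list_head, pi_blocked_vcpu);
    static DEFINE_PER_CPU(spinlock_t, pi_blocked_vcpu_lock);
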
I do have a question. I don't have much expertise with this part of the
code base (HVM stuff, low-level details of event/interrupt delivery), so
it may be a stupid one; if that's the case, sorry in advance for the
noise.

> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index b94ef6a..7db6009 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>  
> +/*
> + * Handle VT-d posted-interrupts when the vCPU is blocked.
> + */
> +static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
> +{
> +    struct arch_vmx_struct *vmx;
> +    unsigned int cpu = smp_processor_id();
> +
> +    spin_lock(&per_cpu(pi_blocked_vcpu_lock, cpu));
> +
> +    /*
> +     * FIXME: The length of the list depends on how many
> +     * vCPUs are currently blocked on this specific pCPU.
> +     * This may hurt the interrupt latency if the list
> +     * grows too long.
> +     */
> +    list_for_each_entry(vmx, &per_cpu(pi_blocked_vcpu, cpu),
> +                        pi_blocked_vcpu_list)
> +        if ( vmx->pi_desc.on )
> +            tasklet_schedule(&vmx->pi_vcpu_wakeup_tasklet);
> +
> +    spin_unlock(&per_cpu(pi_blocked_vcpu_lock, cpu));
> +
> +    ack_APIC_irq();
> +    this_cpu(irq_count)++;
> +}
> +
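For context, I'm assuming the per-vCPU wakeup tasklet mentioned in the
changelog is bound to a handler along these lines (again, my own sketch,
using Xen's unsigned-long tasklet calling convention, not necessarily the
actual patch code):

    /* Sketch: the tasklet just unblocks the vCPU it was armed
     * for, deferring the wakeup out of interrupt context. */
    static void pi_vcpu_wakeup_tasklet_handler(unsigned long arg)
    {
        struct vcpu *v = (struct vcpu *)arg;

        vcpu_unblock(v);
    }
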
Quoting the design document in patch 1:

+Here is the scenario for the usage of the new global vector:
+
+1. vCPU0 is running on pCPU0
+2. vCPU0 is blocked and vCPU1 is currently running on pCPU0
+3. An external interrupt from an assigned device occurs for vCPU0. If we
+still use 'posted_intr_vector' as the notification vector for vCPU0, the
+notification event for vCPU0 (the event will go to pCPU0) will be consumed
+by vCPU1 incorrectly (remember this is a special vector to the CPU). The
+worst case is that vCPU0 will never be woken up again, since the wakeup
+event for it is always consumed by other vCPUs incorrectly. So we need to
+introduce another global vector, named 'pi_wakeup_vector', to wake up the
+blocked vCPU.
+
+After using 'pi_wakeup_vector' for vCPU0, the VT-d engine will issue the
+notification event using this new vector. Since this new vector is not a
+special one to the CPU, it is just a normal vector: the CPU simply receives
+a normal external interrupt, and we then get control in the handler of this
+new vector. There, the hypervisor can do something, such as waking up the
+blocked vCPU.
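If I read this correctly, arming the wakeup path when a vCPU blocks boils
down to rewriting the notification vector in its posted-interrupt
descriptor, roughly like this (my sketch only; I'm assuming the descriptor
exposes an 'nv' notification-vector field as per the VT-d spec, and I'm
leaving atomicity concerns aside):

    /* Sketch: have the VT-d engine notify with the wakeup vector,
     * instead of 'posted_intr_vector', while the vCPU is blocked. */
    v->arch.hvm_vmx.pi_desc.nv = pi_wakeup_vector;
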

Let's assume that two vCPUs, vCPU2 and vCPU4, are blocked on pCPU0,
waiting for a (posted) interrupt, while vCPU12 is currently running
there.

AFAIU the code above, when an interrupt arrives on pCPU0, you scan the
list, find both vCPU2 and vCPU4 (both of which have pi_desc.on set),
and hence kick both of them via the tasklet, even though the interrupt
was destined for only one of them?

Again, if this can't happen due to some PI details I'm not aware of,
sorry... :-)

Regards,
Dario

-- 
Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Citrix Inc.
