
Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending virq to guest



On 09.11.2020 10:50, Juergen Gross wrote:
> @@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
>               model->free_msr(v);
>  }
>  
> +bool nmi_oprofile_send_virq(void)
> +{
> +     struct vcpu *v = this_cpu(nmi_cont_vcpu);
> +
> +     if ( v )
> +             send_guest_vcpu_virq(v, VIRQ_XENOPROF);
> +
> +     this_cpu(nmi_cont_vcpu) = NULL;

What if, by the time we make it here, a 2nd NMI has arrived? I
agree the next overflow interrupt shouldn't arrive this
quickly, but I also think you want to zap the per-CPU variable
first here, and ...

> +
> +     return v;
> +}
> +
>  static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>  {
>       int xen_mode, ovf;
>  
>       ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>       xen_mode = ring_0(regs);
> -     if ( ovf && is_active(current->domain) && !xen_mode )
> -             send_guest_vcpu_virq(current, VIRQ_XENOPROF);
> +     if ( ovf && is_active(current->domain) && !xen_mode ) {
> +             this_cpu(nmi_cont_vcpu) = current;

... avoid overwriting any non-NULL value here. That's then of
course still not closing the window, but has (imo) overall
better behavior.
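
Roughly something along these lines (an untested sketch only, to
illustrate the ordering I mean; names taken from the hunk above, and
intentionally not covering the rest of nmi_callback()):

bool nmi_oprofile_send_virq(void)
{
        struct vcpu *v = this_cpu(nmi_cont_vcpu);

        /* Clear the slot before raising the virq, so a vCPU stored by a
         * subsequent NMI isn't wiped out by this store. */
        this_cpu(nmi_cont_vcpu) = NULL;

        if (v)
                send_guest_vcpu_virq(v, VIRQ_XENOPROF);

        return v;
}

and in nmi_callback():

        if (ovf && is_active(current->domain) && !xen_mode) {
                /* Don't overwrite a vCPU whose virq wasn't sent yet. */
                if (!this_cpu(nmi_cont_vcpu))
                        this_cpu(nmi_cont_vcpu) = current;
                ...
        }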

Also, style-wise, going through the file it looks to be mainly
Linux style, so may I suggest that your additions / changes be
done that way, rather than extending the use of this funny mixed
style?
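
For example, just to illustrate what I mean (the hunk above combines
Xen-style blanks inside the parentheses with Linux-style brace
placement and tab indentation):

        /* mixed, as in the hunk: */
        if ( ovf && is_active(current->domain) && !xen_mode ) {

        /* plain Linux style, matching the bulk of the file: */
        if (ovf && is_active(current->domain) && !xen_mode) {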

Jan