
Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending virq to guest



On 11.11.20 16:45, Jan Beulich wrote:
On 09.11.2020 10:50, Juergen Gross wrote:
@@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
                model->free_msr(v);
  }
+bool nmi_oprofile_send_virq(void)
+{
+       struct vcpu *v = this_cpu(nmi_cont_vcpu);
+
+       if ( v )
+               send_guest_vcpu_virq(v, VIRQ_XENOPROF);
+
+       this_cpu(nmi_cont_vcpu) = NULL;

What if, by the time we make it here, a 2nd NMI has arrived? I
agree the next overflow interrupt shouldn't arrive this
quickly, but I also think you want to zap the per-CPU variable
first here, and ...

How could that happen? This function is only activated from NMI
context, and only in case the NMI happened in guest mode. And it will
be executed with higher priority than any guest, so there is zero
chance another NMI in guest mode can happen in between.

I can add a comment in this regard if you want.
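
For illustration only, the reordering being asked about would amount to
something like the following sketch (same function as in the hunk above,
just latching the vcpu pointer into a local variable and clearing the
per-CPU slot before the virq is sent):

    bool nmi_oprofile_send_virq(void)
    {
            struct vcpu *v = this_cpu(nmi_cont_vcpu);

            /*
             * Clear the per-CPU slot before sending, so a second NMI
             * hitting during the send does not have its stored vcpu
             * wiped out right afterwards.
             */
            this_cpu(nmi_cont_vcpu) = NULL;

            if ( v )
                    send_guest_vcpu_virq(v, VIRQ_XENOPROF);

            return v;
    }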


+
+       return v;
+}
+
  static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
  {
        int xen_mode, ovf;
        ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
        xen_mode = ring_0(regs);
-       if ( ovf && is_active(current->domain) && !xen_mode )
-               send_guest_vcpu_virq(current, VIRQ_XENOPROF);
+       if ( ovf && is_active(current->domain) && !xen_mode ) {
+               this_cpu(nmi_cont_vcpu) = current;

... avoid overwriting any non-NULL value here. That's then of
course still not closing the window, but has (imo) overall
better behavior.
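
As a rough sketch of that second point, the assignment in nmi_callback()
could be guarded so that a still-pending value is left alone (the code
that actually raises the NMI continuation is not part of the quoted hunk
and is omitted here):

            if ( ovf && is_active(current->domain) && !xen_mode &&
                 !this_cpu(nmi_cont_vcpu) )
                    this_cpu(nmi_cont_vcpu) = current;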

Also, style-wise, going through the file it looks to be mainly
Linux style, so may I suggest that your additions / changes be
done that way, rather than extending the use of this funny mixed
style?

Works for me.
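
For reference, the difference is mainly indentation and brace/space
placement: Xen style uses 4-space indentation, spaces inside the
parentheses and braces on their own line, e.g.

    if ( ovf && is_active(current->domain) && !xen_mode )
    {
        this_cpu(nmi_cont_vcpu) = current;
    }

while Linux style uses tab indentation, no spaces inside the parentheses
and the opening brace on the same line:

    if (ovf && is_active(current->domain) && !xen_mode) {
            this_cpu(nmi_cont_vcpu) = current;
    }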


Juergen



 

