Re: [Xen-devel] [PATCH] x86/vpmu: Add get/put_vpmu() and VPMU_ENABLED

On 02/16/2017 11:59 AM, Jan Beulich wrote:
> On 16.02.17 at 15:59, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> vpmu_enabled() (used by hvm/pv_cpuid() to properly report the 0xa leaf
>> for Intel processors) is based on the value of the VPMU_CONTEXT_ALLOCATED
>> bit. This is problematic:
>> * For HVM guests the VPMU context is allocated lazily, during the first
>>   access to the VPMU MSRs. Since the leaf is typically queried before the
>>   guest attempts to read or write the MSRs, it is likely that CPUID will
>>   report no PMU support.
>> * For PV guests the context is allocated eagerly, but only in response
>>   to the guest's XENPMU_init hypercall. There is a chance that the guest
>>   will try to read CPUID before making this hypercall.
>>
>> This patch introduces the VPMU_ENABLED flag, which is set (subject to
>> vpmu_mode constraints) during VCPU initialization for both PV and HVM
>> guests. Since this flag is expected to be managed together with
>> vpmu_count, get/put_vpmu() are added to simplify the code.

> I think VPMU_ENABLED is misleading, as it may as well mean the state
> after the guest has enabled it. How about VPMU_AVAILABLE?


OK. And then vpmu_enabled() should be renamed accordingly.
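
For illustration, a minimal sketch of how the renamed flag and predicate
could look (the bit value and the vpmu_available() spelling below are my
assumptions, not taken from the patch):

/* xen/include/asm-x86/vpmu.h (sketch) */
#define VPMU_AVAILABLE    0x20    /* illustrative bit value */

/* Replaces vpmu_enabled(); set in get_vpmu(), cleared in put_vpmu(). */
#define vpmu_available(vcpu) \
    vpmu_is_set(vcpu_vpmu(vcpu), VPMU_AVAILABLE)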


>> @@ -509,15 +498,63 @@ void vpmu_initialise(struct vcpu *v)
>>      if ( ret )
>>          printk(XENLOG_G_WARNING "VPMU: Initialization failed for %pv\n", v);
>>
>> -    /* Intel needs to initialize VPMU ops even if VPMU is not in use */
>> -    if ( !is_priv_vpmu &&
>> -         (ret || (vpmu_mode == XENPMU_MODE_OFF) ||
>> -          (vpmu_mode == XENPMU_MODE_ALL)) )
>> +    return ret;
>> +}
>> +
>> +static void get_vpmu(struct vcpu *v)
>> +{
>> +    spin_lock(&vpmu_lock);
>> +
>> +    /*
>> +     * Count active VPMUs so that we won't try to change vpmu_mode while
>> +     * they are in use.
>> +     * vpmu_mode can be safely updated while dom0's VPMUs are active and
>> +     * so we don't need to include it in the count.
>> +     */
>> +    if ( !is_hardware_domain(v->domain) &&
>> +        (vpmu_mode & (XENPMU_MODE_SELF | XENPMU_MODE_HV)) )
>> +    {
>> +        vpmu_count++;
>> +        vpmu_set(vcpu_vpmu(v), VPMU_ENABLED);
>> +    }
>> +    else if ( is_hardware_domain(v->domain) &&
>> +              (vpmu_mode != XENPMU_MODE_OFF) )
>> +        vpmu_set(vcpu_vpmu(v), VPMU_ENABLED);
>> +
>> +    spin_unlock(&vpmu_lock);
>> +}
>> +
>> +static void put_vpmu(struct vcpu *v)
>> +{
>> +    if ( !vpmu_is_set(vcpu_vpmu(v), VPMU_ENABLED) )
>> +        return;
>> +
>> +    spin_lock(&vpmu_lock);
>> +
>> +    if ( !is_hardware_domain(v->domain) &&
>> +         (vpmu_mode & (XENPMU_MODE_SELF | XENPMU_MODE_HV)) )
>>      {
>> -        spin_lock(&vpmu_lock);
>>          vpmu_count--;
>> -        spin_unlock(&vpmu_lock);
>> +        vpmu_reset(vcpu_vpmu(v), VPMU_ENABLED);

> I think you need to re-check VPMU_ENABLED after acquiring the lock,
> in order to avoid decrementing vpmu_count twice in case of a race.

I can just move the check under the lock. This is never on a critical path
(and we will rarely come here with the bit off), so taking the lock
unnecessarily shouldn't be a problem.
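
Something like this, with the re-check done under the lock (using the
VPMU_AVAILABLE name from above; a sketch, not necessarily the final
version of the patch):

static void put_vpmu(struct vcpu *v)
{
    spin_lock(&vpmu_lock);

    /*
     * Re-checking under the lock prevents two racing callers from both
     * seeing the flag set and decrementing vpmu_count twice.
     */
    if ( !vpmu_is_set(vcpu_vpmu(v), VPMU_AVAILABLE) )
        goto out;

    if ( !is_hardware_domain(v->domain) &&
         (vpmu_mode & (XENPMU_MODE_SELF | XENPMU_MODE_HV)) )
    {
        vpmu_count--;
        vpmu_reset(vcpu_vpmu(v), VPMU_AVAILABLE);
    }
    else if ( is_hardware_domain(v->domain) &&
              (vpmu_mode != XENPMU_MODE_OFF) )
        vpmu_reset(vcpu_vpmu(v), VPMU_AVAILABLE);

 out:
    spin_unlock(&vpmu_lock);
}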


> Also this new model basically limits the opportunity to change the
> mode to the case where no guest at all is running, iiuc. Previously
> this would have been possible with any number of guests running,
> as long as none of them actually used the vPMU.

I don't think much has changed. The only difference is that for PV guests
we bump vpmu_count at VCPU creation as opposed to during the hypercall.

And HVM guests have always incremented the count during vcpu_initialise().

With this patch we can still change between SELF and HV at any time,
whether or not any guests are running.
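
To make that concrete, the mode-change path could check vpmu_count along
these lines (set_vpmu_mode() and new_mode are illustrative names, not the
actual XENPMU_mode_set plumbing):

static int set_vpmu_mode(uint64_t new_mode)
{
    int ret = -EBUSY;

    spin_lock(&vpmu_lock);

    /*
     * Any change is allowed while no non-dom0 VPMU is active.  Flipping
     * between SELF and HV is additionally allowed at any time, since
     * get_vpmu() counts a VPMU under either mode, so vpmu_count stays
     * valid across such a switch.
     */
    if ( (vpmu_count == 0) ||
         ((vpmu_mode ^ new_mode) == (XENPMU_MODE_SELF | XENPMU_MODE_HV)) )
    {
        vpmu_mode = new_mode;
        ret = 0;
    }

    spin_unlock(&vpmu_lock);

    return ret;
}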

-boris

