
[Xen-devel] [PATCH] x86/hvm: Extend HVM cpuid leaf with vcpu id



To perform certain hypercalls, HVM guests need to use Xen's idea of
vcpu id, which may well not match the guest OS's idea of CPU id.
This patch adds the vcpu id to the HVM-specific cpuid leaf, allowing
the guest to build a mapping between the two.
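
As an illustration only (not part of this patch), a guest could retrieve
its vcpu id roughly as follows. The helper names below are hypothetical,
and it is assumed the Xen CPUID base leaf has already been located at
0x40000000 (it may sit at a higher base, in multiples of 0x100, when
viridian leaves are enabled):

#include <stdint.h>

#define XEN_HVM_CPUID_VCPU_ID_PRESENT  (1u << 3)

static inline void xen_cpuid(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                             uint32_t *ecx, uint32_t *edx)
{
    asm volatile ( "cpuid"
                   : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                   : "0" (leaf) );
}

/* Returns 0 and fills *vcpu_id on success, -1 if the bit is not advertised. */
static int get_xen_vcpu_id(uint32_t xen_cpuid_base, uint32_t *vcpu_id)
{
    uint32_t eax, ebx, ecx, edx;

    /* Leaf 4 relative to the Xen base is the HVM-specific leaf. */
    xen_cpuid(xen_cpuid_base + 4, &eax, &ebx, &ecx, &edx);

    if ( !(eax & XEN_HVM_CPUID_VCPU_ID_PRESENT) )
        return -1;  /* hypervisor does not expose a vcpu id here */

    *vcpu_id = ebx;
    return 0;
}

A guest OS would typically run this once per CPU during bring-up and
cache the result for later hypercall use.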

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
---
 xen/arch/x86/hvm/hvm.c              |    4 ++++
 xen/include/public/arch-x86/cpuid.h |    5 +++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 78f519d..d9a5706 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4189,6 +4189,10 @@ void hvm_hypervisor_cpuid_leaf(uint32_t sub_idx,
          * foreign pages) has valid IOMMU entries.
          */
         *eax |= XEN_HVM_CPUID_IOMMU_MAPPINGS;
+
+        /* Indicate presence of vcpu id and set it in ebx */
+        *eax |= XEN_HVM_CPUID_VCPU_ID_PRESENT;
+        *ebx = current->vcpu_id;
     }
 }
 
diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 6005dfe..8ccb6e1 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -76,13 +76,14 @@
 /*
  * Leaf 5 (0x40000x04)
  * HVM-specific features
+ * EAX: Features
+ * EBX: VCPU ID
  */
-
-/* EAX Features */
 #define XEN_HVM_CPUID_APIC_ACCESS_VIRT (1u << 0) /* Virtualized APIC registers */
 #define XEN_HVM_CPUID_X2APIC_VIRT      (1u << 1) /* Virtualized x2APIC accesses */
 /* Memory mapped from other domains has valid IOMMU entries */
 #define XEN_HVM_CPUID_IOMMU_MAPPINGS   (1u << 2)
+#define XEN_HVM_CPUID_VCPU_ID_PRESENT  (1u << 3) /* vcpu id is present in EBX */
 
 #define XEN_CPUID_MAX_NUM_LEAVES 4
 
-- 
1.7.10.4




 

