
Re: [PATCH v3 09/28] xen/vm_event: consolidate CONFIG_VM_EVENT


  • To: Penny Zheng <Penny.Zheng@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
  • Date: Tue, 21 Oct 2025 16:24:32 +0300
  • Cc: ray.huang@xxxxxxx, oleksii.kurochko@xxxxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>, Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 21 Oct 2025 13:31:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi

On 13.10.25 13:15, Penny Zheng wrote:
Files hvm/vm_event.c and x86/vm_event.c extend the vm_event handling
routines, so their compilation shall be guarded by CONFIG_VM_EVENT too.
Furthermore, the monitor_op and memory-access features are both built on
the vm event subsystem, so monitor.o/mem_access.o shall be wrapped under
CONFIG_VM_EVENT as well.

Although CONFIG_VM_EVENT is currently forcibly enabled on x86 via
MEM_ACCESS_ALWAYS_ON, we will later be able to disable it by disabling
CONFIG_MGMT_HYPERCALLS. So remove MEM_ACCESS_ALWAYS_ON and default
VM_EVENT to y on x86 only, to retain the current behaviour.

Consequently, a few switch blocks in do_altp2m_op() need in-place stubs
to pass compilation when ALTP2M=y and VM_EVENT=n (and hence MEM_ACCESS=n),
e.g. for HVMOP_altp2m_set_mem_access.
And the following functions still require stubs to pass compilation:
- vm_event_check_ring()
- p2m_mem_access_check()
- xenmem_access_to_p2m_access()
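
A sketch of what such VM_EVENT=n stubs might look like. The type and enum
definitions below are simplified placeholders, not Xen's real ones; only the
fallback behaviour follows the patch (vm_event_check_ring() trivially fails,
and access conversion returns the default p2m_access_rwx, per the v3
changelog):

```c
#include <assert.h>
#include <stdbool.h>

/* Placeholder stand-ins for Xen's types; simplified for illustration. */
struct vm_event_domain;            /* opaque: never instantiated when n */

typedef enum { p2m_access_rwx } p2m_access_t;
typedef enum { XENMEM_access_rwx } xenmem_access_t;

/* No vm_event core means no ring, so the check trivially fails. */
static inline bool vm_event_check_ring(const struct vm_event_domain *ved)
{
    return false;
}

/* Conversion falls back to the most permissive default when VM_EVENT=n. */
static inline int xenmem_access_to_p2m_access(xenmem_access_t xaccess,
                                              p2m_access_t *paccess)
{
    *paccess = p2m_access_rwx;
    return 0;
}
```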

The following functions are built on top of the vm event framework, or are
only invoked from vm_event.c/monitor.c/mem_access.c, so they shall all be
wrapped with CONFIG_VM_EVENT (otherwise they become unreachable and
violate MISRA rule 2.1 when VM_EVENT=n):
- hvm_toggle_singlestep
- hvm_fast_singlestep
- hvm_enable_msr_interception
   - hvm_function_table.enable_msr_interception
- hvm_has_set_descriptor_access_exiting
   - hvm_function_table.set_descriptor_access_exiting
- arch_monitor_domctl_op
- arch_monitor_allow_userspace
- arch_monitor_get_capabilities
- hvm_emulate_one_vm_event
- hvmemul_write{,cmpxchg,rep_ins,rep_outs,rep_movs,rep_stos,read_io,write_io}_discard
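
The wrapping itself is the usual Kconfig gating pattern; a toy,
self-contained illustration (CONFIG_VM_EVENT is defined by hand here, as a
stand-in for the real Kconfig symbol, and the function body is illustrative
only):

```c
#include <assert.h>
#include <stdbool.h>

#define CONFIG_VM_EVENT 1          /* stand-in for the Kconfig symbol */

#ifdef CONFIG_VM_EVENT
/* Reachable only from vm_event code, so it is compiled only when that
 * code is; with VM_EVENT=n the definition would be dead code and would
 * trip MISRA C:2012 Rule 2.1. */
static bool single_step;

static void toggle_singlestep(void)
{
    single_step = !single_step;
}
#endif
```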

Signed-off-by: Penny Zheng <Penny.Zheng@xxxxxxx>
---
v1 -> v2:
- split out XSM changes
- remove unnecessary stubs
- move "struct p2m_domain" declaration ahead of the #ifdef
---
v2 -> v3:
- move .enable_msr_interception and .set_descriptor_access_exiting together
- with the introduction of "vm_event_is_enabled()", all hvm_monitor_xxx()
stubs are no longer needed
- change to use in-place stubs in do_altp2m_op()
- no need to add stub for monitor_traps(), __vm_event_claim_slot(),
vm_event_put_request() and vm_event_vcpu_pause()
- remove MEM_ACCESS_ALWAYS_ON
- return default p2m_access_rwx for xenmem_access_to_p2m_access() when
VM_EVENT=n
- add wrapping for hvm_emulate_one_vm_event/
hvmemul_write{,cmpxchg,rep_ins,rep_outs,rep_movs,rep_stos,read_io,write_io}_discard
---
  xen/arch/x86/Makefile                 |  2 +-
  xen/arch/x86/hvm/Kconfig              |  1 -
  xen/arch/x86/hvm/Makefile             |  4 +-
  xen/arch/x86/hvm/emulate.c            | 58 ++++++++++++++-------------
  xen/arch/x86/hvm/hvm.c                | 21 ++++++++++
  xen/arch/x86/hvm/svm/svm.c            |  8 +++-
  xen/arch/x86/hvm/vmx/vmx.c            | 10 +++++
  xen/arch/x86/include/asm/hvm/hvm.h    |  9 ++++-
  xen/arch/x86/include/asm/mem_access.h |  9 +++++
  xen/arch/x86/include/asm/monitor.h    |  9 +++++
  xen/common/Kconfig                    |  7 +---
  xen/include/xen/mem_access.h          | 10 +++++
  xen/include/xen/vm_event.h            |  7 ++++
  13 files changed, 116 insertions(+), 39 deletions(-)

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 0f91ffcb9d..615cd101b8 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -76,7 +76,7 @@ obj-y += usercopy.o
  obj-y += x86_emulate.o
  obj-$(CONFIG_TBOOT) += tboot.o
  obj-y += hpet.o
-obj-y += vm_event.o
+obj-$(CONFIG_VM_EVENT) += vm_event.o
  obj-y += xstate.o
ifneq ($(CONFIG_PV_SHIM_EXCLUSIVE),y)
diff --git a/xen/arch/x86/hvm/Kconfig b/xen/arch/x86/hvm/Kconfig
index 5cb9f29042..e6b388dd0e 100644
--- a/xen/arch/x86/hvm/Kconfig
+++ b/xen/arch/x86/hvm/Kconfig
@@ -3,7 +3,6 @@ menuconfig HVM
        default !PV_SHIM
        select COMPAT
        select IOREQ_SERVER
-       select MEM_ACCESS_ALWAYS_ON
        help
          Interfaces to support HVM domains.  HVM domains require hardware
          virtualisation extensions (e.g. Intel VT-x, AMD SVM), but can boot
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 6ec2c8f2db..952db00dd7 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -16,7 +16,7 @@ obj-y += io.o
  obj-y += ioreq.o
  obj-y += irq.o
  obj-y += mmio.o
-obj-y += monitor.o
+obj-$(CONFIG_VM_EVENT) += monitor.o
  obj-y += mtrr.o
  obj-y += nestedhvm.o
  obj-y += pmtimer.o
@@ -26,7 +26,7 @@ obj-y += save.o
  obj-y += stdvga.o
  obj-y += vioapic.o
  obj-y += vlapic.o
-obj-y += vm_event.o
+obj-$(CONFIG_VM_EVENT) += vm_event.o
  obj-y += vmsi.o
  obj-y += vpic.o
  obj-y += vpt.o
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index fe75b0516d..d56ef02baf 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1615,6 +1615,7 @@ static int cf_check hvmemul_blk(
      return rc;
  }
+#ifdef CONFIG_VM_EVENT
  static int cf_check hvmemul_write_discard(
      enum x86_segment seg,
      unsigned long offset,
@@ -1717,6 +1718,7 @@ static int cf_check hvmemul_cache_op_discard(
  {
      return X86EMUL_OKAY;
  }
+#endif /* CONFIG_VM_EVENT */
static int cf_check hvmemul_cmpxchg(
      enum x86_segment seg,
@@ -2750,33 +2752,6 @@ static const struct x86_emulate_ops hvm_emulate_ops = {
      .vmfunc        = hvmemul_vmfunc,
  };
-static const struct x86_emulate_ops hvm_emulate_ops_no_write = {
-    .read          = hvmemul_read,
-    .insn_fetch    = hvmemul_insn_fetch,
-    .write         = hvmemul_write_discard,
-    .cmpxchg       = hvmemul_cmpxchg_discard,
-    .rep_ins       = hvmemul_rep_ins_discard,
-    .rep_outs      = hvmemul_rep_outs_discard,
-    .rep_movs      = hvmemul_rep_movs_discard,
-    .rep_stos      = hvmemul_rep_stos_discard,
-    .read_segment  = hvmemul_read_segment,
-    .write_segment = hvmemul_write_segment,
-    .read_io       = hvmemul_read_io_discard,
-    .write_io      = hvmemul_write_io_discard,
-    .read_cr       = hvmemul_read_cr,
-    .write_cr      = hvmemul_write_cr,
-    .read_xcr      = hvmemul_read_xcr,
-    .write_xcr     = hvmemul_write_xcr,
-    .read_msr      = hvmemul_read_msr,
-    .write_msr     = hvmemul_write_msr_discard,
-    .cache_op      = hvmemul_cache_op_discard,
-    .tlb_op        = hvmemul_tlb_op,
-    .cpuid         = x86emul_cpuid,
-    .get_fpu       = hvmemul_get_fpu,
-    .put_fpu       = hvmemul_put_fpu,
-    .vmfunc        = hvmemul_vmfunc,
-};
-
  /*
   * Note that passing VIO_no_completion into this function serves as kind
   * of (but not fully) an "auto select completion" indicator.  When there's
@@ -2887,6 +2862,34 @@ int hvm_emulate_one(
      return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion);
  }
+#ifdef CONFIG_VM_EVENT
+static const struct x86_emulate_ops hvm_emulate_ops_no_write = {
+    .read          = hvmemul_read,
+    .insn_fetch    = hvmemul_insn_fetch,
+    .write         = hvmemul_write_discard,
+    .cmpxchg       = hvmemul_cmpxchg_discard,
+    .rep_ins       = hvmemul_rep_ins_discard,
+    .rep_outs      = hvmemul_rep_outs_discard,
+    .rep_movs      = hvmemul_rep_movs_discard,
+    .rep_stos      = hvmemul_rep_stos_discard,
+    .read_segment  = hvmemul_read_segment,
+    .write_segment = hvmemul_write_segment,
+    .read_io       = hvmemul_read_io_discard,
+    .write_io      = hvmemul_write_io_discard,
+    .read_cr       = hvmemul_read_cr,
+    .write_cr      = hvmemul_write_cr,
+    .read_xcr      = hvmemul_read_xcr,
+    .write_xcr     = hvmemul_write_xcr,
+    .read_msr      = hvmemul_read_msr,
+    .write_msr     = hvmemul_write_msr_discard,
+    .cache_op      = hvmemul_cache_op_discard,
+    .tlb_op        = hvmemul_tlb_op,
+    .cpuid         = x86emul_cpuid,
+    .get_fpu       = hvmemul_get_fpu,
+    .put_fpu       = hvmemul_put_fpu,
+    .vmfunc        = hvmemul_vmfunc,
+};
+
  void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
      unsigned int errcode)
  {
@@ -2949,6 +2952,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
hvm_emulate_writeback(&ctx);
  }
+#endif /* CONFIG_VM_EVENT */
void hvm_emulate_init_once(
      struct hvm_emulate_ctxt *hvmemul_ctxt,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 48a293069b..e3dacc909b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -50,6 +50,7 @@
  #include <asm/hvm/vm_event.h>
  #include <asm/hvm/vpt.h>
  #include <asm/i387.h>
+#include <asm/mem_access.h>
  #include <asm/mc146818rtc.h>
  #include <asm/mce.h>
  #include <asm/monitor.h>
@@ -4861,15 +4862,20 @@ static int do_altp2m_op(
          break;
case HVMOP_altp2m_set_mem_access:
+#ifdef CONFIG_VM_EVENT
          if ( a.u.mem_access.pad )
              rc = -EINVAL;
          else
              rc = p2m_set_mem_access(d, _gfn(a.u.mem_access.gfn), 1, 0, 0,
                                      a.u.mem_access.access,
                                      a.u.mem_access.view);
+#else
+        rc = -EOPNOTSUPP;
+#endif
          break;
case HVMOP_altp2m_set_mem_access_multi:
+#ifdef CONFIG_VM_EVENT
          if ( a.u.set_mem_access_multi.pad ||
               a.u.set_mem_access_multi.opaque > a.u.set_mem_access_multi.nr )
          {
@@ -4898,9 +4904,13 @@ static int do_altp2m_op(
                                         &a, u.set_mem_access_multi.opaque) )
                  rc = -EFAULT;
          }
+#else
+        rc = -EOPNOTSUPP;
+#endif
          break;
case HVMOP_altp2m_get_mem_access:
+#ifdef CONFIG_VM_EVENT
          if ( a.u.mem_access.pad )
              rc = -EINVAL;
          else
@@ -4915,6 +4925,9 @@ static int do_altp2m_op(
                  rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
              }
          }
+#else
+        rc = -EOPNOTSUPP;
+#endif
          break;
case HVMOP_altp2m_change_gfn:
@@ -5030,6 +5043,7 @@ static int compat_altp2m_op(
      switch ( a.cmd )
      {
      case HVMOP_altp2m_set_mem_access_multi:
+#ifdef CONFIG_VM_EVENT
  #define XLAT_hvm_altp2m_set_mem_access_multi_HNDL_pfn_list(_d_, _s_); \
          guest_from_compat_handle((_d_)->pfn_list, (_s_)->pfn_list)
  #define XLAT_hvm_altp2m_set_mem_access_multi_HNDL_access_list(_d_, _s_); \
@@ -5038,6 +5052,7 @@ static int compat_altp2m_op(
                                               &a.u.set_mem_access_multi);
  #undef XLAT_hvm_altp2m_set_mem_access_multi_HNDL_pfn_list
  #undef XLAT_hvm_altp2m_set_mem_access_multi_HNDL_access_list
+#endif
          break;
default:
@@ -5056,6 +5071,7 @@ static int compat_altp2m_op(
      switch ( a.cmd )
      {
      case HVMOP_altp2m_set_mem_access_multi:
+#ifdef CONFIG_VM_EVENT
          if ( rc == -ERESTART )
          {
              a.u.set_mem_access_multi.opaque =
@@ -5065,6 +5081,9 @@ static int compat_altp2m_op(
                                         &a, u.set_mem_access_multi.opaque) )
                  rc = -EFAULT;
          }
+#else
+        rc = -EOPNOTSUPP;
+#endif
          break;
default:
@@ -5283,6 +5302,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
      return rc;
  }
+#ifdef CONFIG_VM_EVENT
  void hvm_toggle_singlestep(struct vcpu *v)
  {
      ASSERT(atomic_read(&v->pause_count));
@@ -5292,6 +5312,7 @@ void hvm_toggle_singlestep(struct vcpu *v)
v->arch.hvm.single_step = !v->arch.hvm.single_step;
  }
+#endif /* CONFIG_VM_EVENT */
#ifdef CONFIG_ALTP2M
  void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 679ca3dacd..c8506c25c4 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -299,6 +299,7 @@ void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
          __clear_bit(msr * 2 + 1, msr_bit);
  }
+#ifdef CONFIG_VM_EVENT
static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
  {
      struct vcpu *v;
@@ -306,6 +307,7 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
      for_each_vcpu ( d, v )
          svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
  }
+#endif /* CONFIG_VM_EVENT */
static void svm_save_dr(struct vcpu *v)
  {
@@ -826,6 +828,7 @@ static void cf_check svm_set_rdtsc_exiting(struct vcpu *v, bool enable)
      vmcb_set_general2_intercepts(vmcb, general2_intercepts);
  }
+#ifdef CONFIG_VM_EVENT
  static void cf_check svm_set_descriptor_access_exiting(
      struct vcpu *v, bool enable)
  {
@@ -843,6 +846,7 @@ static void cf_check svm_set_descriptor_access_exiting(
vmcb_set_general1_intercepts(vmcb, general1_intercepts);
  }
+#endif /* CONFIG_VM_EVENT */
static unsigned int cf_check svm_get_insn_bytes(struct vcpu *v, uint8_t *buf)
  {
@@ -2457,9 +2461,11 @@ static struct hvm_function_table __initdata_cf_clobber svm_function_table = {
      .fpu_dirty_intercept  = svm_fpu_dirty_intercept,
      .msr_read_intercept   = svm_msr_read_intercept,
      .msr_write_intercept  = svm_msr_write_intercept,
+#ifdef CONFIG_VM_EVENT
      .enable_msr_interception = svm_enable_msr_interception,
-    .set_rdtsc_exiting    = svm_set_rdtsc_exiting,
      .set_descriptor_access_exiting = svm_set_descriptor_access_exiting,
+#endif
+    .set_rdtsc_exiting    = svm_set_rdtsc_exiting,
      .get_insn_bytes       = svm_get_insn_bytes,
.nhvm_vcpu_initialise = nsvm_vcpu_initialise,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a40af1db66..1996e139a0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1520,6 +1520,7 @@ static void cf_check vmx_set_rdtsc_exiting(struct vcpu *v, bool enable)
      vmx_vmcs_exit(v);
  }
+#ifdef CONFIG_VM_EVENT
  static void cf_check vmx_set_descriptor_access_exiting(
      struct vcpu *v, bool enable)
  {
@@ -1534,6 +1535,7 @@ static void cf_check vmx_set_descriptor_access_exiting(
      vmx_update_secondary_exec_control(v);
      vmx_vmcs_exit(v);
  }
+#endif /* CONFIG_VM_EVENT */
static void cf_check vmx_init_hypercall_page(void *p)
  {
@@ -2413,6 +2415,7 @@ static void cf_check vmx_handle_eoi(uint8_t vector, int isr)
          printk_once(XENLOG_WARNING "EOI for %02x but SVI=%02x\n", vector, old_svi);
  }
+#ifdef CONFIG_VM_EVENT
static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
  {
      struct vcpu *v;
@@ -2420,6 +2423,7 @@ static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
      for_each_vcpu ( d, v )
          vmx_set_msr_intercept(v, msr, VMX_MSR_W);
  }
+#endif /* CONFIG_VM_EVENT */
#ifdef CONFIG_ALTP2M

@@ -2871,7 +2875,9 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
      .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
      .update_vlapic_mode = vmx_vlapic_msr_changed,
      .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
+#ifdef CONFIG_VM_EVENT
      .enable_msr_interception = vmx_enable_msr_interception,
+#endif
  #ifdef CONFIG_ALTP2M
      .altp2m_vcpu_update_p2m = vmx_vcpu_update_eptp,
      .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
@@ -3079,9 +3085,11 @@ const struct hvm_function_table * __init start_vmx(void)
vmx_function_table.caps.singlestep = cpu_has_monitor_trap_flag;

+#ifdef CONFIG_VM_EVENT
      if ( cpu_has_vmx_dt_exiting )
          vmx_function_table.set_descriptor_access_exiting =
              vmx_set_descriptor_access_exiting;
+#endif
/*
       * Do not enable EPT when (!cpu_has_vmx_pat), to prevent security hole
@@ -3152,8 +3160,10 @@ void __init vmx_fill_funcs(void)
      if ( !cpu_has_xen_ibt )
          return;
+#ifdef CONFIG_VM_EVENT
      vmx_function_table.set_descriptor_access_exiting =
          vmx_set_descriptor_access_exiting;
+#endif
vmx_function_table.update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap;
      vmx_function_table.process_isr            = vmx_process_isr;
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index f02183691e..473cf24b83 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -192,7 +192,10 @@ struct hvm_function_table {
      void (*handle_cd)(struct vcpu *v, unsigned long value);
      void (*set_info_guest)(struct vcpu *v);
      void (*set_rdtsc_exiting)(struct vcpu *v, bool enable);
+#ifdef CONFIG_VM_EVENT
      void (*set_descriptor_access_exiting)(struct vcpu *v, bool enable);
+    void (*enable_msr_interception)(struct domain *d, uint32_t msr);
+#endif
/* Nested HVM */
      int (*nhvm_vcpu_initialise)(struct vcpu *v);
@@ -224,8 +227,6 @@ struct hvm_function_table {
                                  paddr_t *L1_gpa, unsigned int *page_order,
                                  uint8_t *p2m_acc, struct npfec npfec);
- void (*enable_msr_interception)(struct domain *d, uint32_t msr);
-
  #ifdef CONFIG_ALTP2M
      /* Alternate p2m */
      void (*altp2m_vcpu_update_p2m)(struct vcpu *v);
@@ -433,10 +434,12 @@ static inline bool using_svm(void)
#define hvm_long_mode_active(v) (!!((v)->arch.hvm.guest_efer & EFER_LMA))

+#ifdef CONFIG_VM_EVENT
  static inline bool hvm_has_set_descriptor_access_exiting(void)
  {
      return hvm_funcs.set_descriptor_access_exiting;
  }
+#endif
static inline void hvm_domain_creation_finished(struct domain *d)
  {
@@ -679,10 +682,12 @@ static inline int nhvm_hap_walk_L1_p2m(
          v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec);
  }
+#ifdef CONFIG_VM_EVENT
  static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
  {
      alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
  }
+#endif
static inline bool hvm_is_singlestep_supported(void)
  {
diff --git a/xen/arch/x86/include/asm/mem_access.h b/xen/arch/x86/include/asm/mem_access.h
index 1a52a10322..c786116310 100644
--- a/xen/arch/x86/include/asm/mem_access.h
+++ b/xen/arch/x86/include/asm/mem_access.h
@@ -14,6 +14,7 @@
  #ifndef __ASM_X86_MEM_ACCESS_H__
  #define __ASM_X86_MEM_ACCESS_H__
+#ifdef CONFIG_VM_EVENT
  /*
   * Setup vm_event request based on the access (gla is -1ull if not available).
   * Handles the rw2rx conversion. Boolean return value indicates if event type
@@ -25,6 +26,14 @@
  bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                            struct npfec npfec,
                            struct vm_event_st **req_ptr);
+#else
+static inline bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
+                                        struct npfec npfec,
+                                        struct vm_event_st **req_ptr)
+{
+    return false;
+}
+#endif /* CONFIG_VM_EVENT */
/* Check for emulation and mark vcpu for skipping one instruction
   * upon rescheduling if required. */
diff --git a/xen/arch/x86/include/asm/monitor.h b/xen/arch/x86/include/asm/monitor.h
index 3c64d8258f..1cd169f8f0 100644
--- a/xen/arch/x86/include/asm/monitor.h
+++ b/xen/arch/x86/include/asm/monitor.h
@@ -32,6 +32,7 @@ struct monitor_msr_bitmap {
      DECLARE_BITMAP(high, 8192);
  };
+#ifdef COMFIG_VM_EVENT

This typo causes the build to fail.

With the above fixed, patches 3 and 9 applied, and VM_EVENT=n, there are
still build failures, like:

  xen/arch/x86/hvm/svm/svm.c:2757: undefined reference to `hvm_monitor_debug'
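
For reference, a link error of this class typically gets resolved either by
a runtime gate (the series' vm_event_is_enabled()) or by an inline header
stub when the object file is no longer built. A hypothetical sketch; the
name and simplified signature below are placeholders, not Xen's real
hvm_monitor_debug():

```c
#include <assert.h>
#include <stdbool.h>

/* CONFIG_VM_EVENT intentionally left undefined: simulate VM_EVENT=n. */

#ifdef CONFIG_VM_EVENT
bool monitor_debug_event(void);    /* real definition lives in monitor.o */
#else
/* With monitor.o compiled out, callers link against this inline stub:
 * no monitor subsystem means the event is never delivered. */
static inline bool monitor_debug_event(void)
{
    return false;
}
#endif
```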

In my opinion, it might be reasonable to proceed with this patch (and the
related patches [3,?]) first, as a standalone series, to make VM_EVENT=n
work and do MGMT_HYPERCALLS on top of it.

This patch, by itself, is big and is included in an even bigger series,
which makes it hard to review and track changes.

[..]

--
Best regards,
-grygorii

