
Re: [PATCH v3 09/28] xen/vm_event: consolidate CONFIG_VM_EVENT


  • To: Penny Zheng <Penny.Zheng@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 29 Oct 2025 16:57:30 +0100
  • Cc: ray.huang@xxxxxxx, oleksii.kurochko@xxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>, Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 29 Oct 2025 15:57:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 13.10.2025 12:15, Penny Zheng wrote:
> Files hvm/vm_event.c and x86/vm_event.c extend the vm_event handling
> routines, so their compilation shall be guarded by CONFIG_VM_EVENT too.
> Furthermore, the monitor_op and memory access features are both based on
> the vm event subsystem, so monitor.o/mem_access.o shall be wrapped under
> CONFIG_VM_EVENT.
> 
> Although CONFIG_VM_EVENT is right now forcibly enabled on x86 via
> MEM_ACCESS_ALWAYS_ON, it could later be disabled by disabling
> CONFIG_MGMT_HYPERCALLS. So we remove MEM_ACCESS_ALWAYS_ON and default
> VM_EVENT to y only on x86, to retain the current behaviour.
> 
> As a consequence, a few switch blocks in do_altp2m_op(), such as
> HVMOP_altp2m_set_mem_access, need in-place stubs to pass compilation when
> ALTP2M=y and VM_EVENT=n (and hence MEM_ACCESS=n).
> And the following functions still require stubs to pass compilation:
> - vm_event_check_ring()
> - p2m_mem_access_check()
> - xenmem_access_to_p2m_access()
> 
> The following functions are built on top of the vm event framework, or are
> only invoked from vm_event.c/monitor.c/mem_access.c, so they shall all be
> wrapped with CONFIG_VM_EVENT (otherwise they become unreachable and violate
> MISRA rule 2.1 when VM_EVENT=n):
> - hvm_toggle_singlestep
> - hvm_fast_singlestep
> - hvm_enable_msr_interception
>   - hvm_function_table.enable_msr_interception
> - hvm_has_set_descriptor_access_exiting
>   - hvm_function_table.set_descriptor_access_exiting
> - arch_monitor_domctl_op
> - arch_monitor_allow_userspace
> - arch_monitor_get_capabilities
> - hvm_emulate_one_vm_event
> - hvmemul_write{,cmpxchg,rep_ins,rep_outs,rep_movs,rep_stos,read_io,write_io}_discard
> 
> Signed-off-by: Penny Zheng <Penny.Zheng@xxxxxxx>

Overall I agree with Grygorii's remark that this would preferably want (a)
splitting off and perhaps (b) also splitting up some. If at all possible, of
course.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -50,6 +50,7 @@
>  #include <asm/hvm/vm_event.h>
>  #include <asm/hvm/vpt.h>
>  #include <asm/i387.h>
> +#include <asm/mem_access.h>
>  #include <asm/mc146818rtc.h>
>  #include <asm/mce.h>
>  #include <asm/monitor.h>
> @@ -4861,15 +4862,20 @@ static int do_altp2m_op(
>          break;
>  
>      case HVMOP_altp2m_set_mem_access:
> +#ifdef CONFIG_VM_EVENT
>          if ( a.u.mem_access.pad )
>              rc = -EINVAL;
>          else
>              rc = p2m_set_mem_access(d, _gfn(a.u.mem_access.gfn), 1, 0, 0,
>                                      a.u.mem_access.access,
>                                      a.u.mem_access.view);
> +#else
> +        rc = -EOPNOTSUPP;
> +#endif
>          break;

I think this (and if possible the others below here) would better use
IS_ENABLED(). (Would also shrink the diff.)
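FWIW, a rough sketch of the IS_ENABLED() shape I have in mind for this first
case (this assumes a declaration of p2m_set_mem_access() remains visible when
VM_EVENT=n, so the then-unreachable call still compiles before being
eliminated):

    case HVMOP_altp2m_set_mem_access:
        /* With VM_EVENT=n the call below is dead code, dropped at build time. */
        if ( !IS_ENABLED(CONFIG_VM_EVENT) )
            rc = -EOPNOTSUPP;
        else if ( a.u.mem_access.pad )
            rc = -EINVAL;
        else
            rc = p2m_set_mem_access(d, _gfn(a.u.mem_access.gfn), 1, 0, 0,
                                    a.u.mem_access.access,
                                    a.u.mem_access.view);
        break;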

> @@ -5030,6 +5043,7 @@ static int compat_altp2m_op(
>      switch ( a.cmd )
>      {
>      case HVMOP_altp2m_set_mem_access_multi:
> +#ifdef CONFIG_VM_EVENT
>  #define XLAT_hvm_altp2m_set_mem_access_multi_HNDL_pfn_list(_d_, _s_); \
>          guest_from_compat_handle((_d_)->pfn_list, (_s_)->pfn_list)
>  #define XLAT_hvm_altp2m_set_mem_access_multi_HNDL_access_list(_d_, _s_); \
> @@ -5038,6 +5052,7 @@ static int compat_altp2m_op(
>                                               &a.u.set_mem_access_multi);
>  #undef XLAT_hvm_altp2m_set_mem_access_multi_HNDL_pfn_list
>  #undef XLAT_hvm_altp2m_set_mem_access_multi_HNDL_access_list
> +#endif
>          break;
>  
>      default:
> @@ -5056,6 +5071,7 @@ static int compat_altp2m_op(
>      switch ( a.cmd )
>      {
>      case HVMOP_altp2m_set_mem_access_multi:
> +#ifdef CONFIG_VM_EVENT
>          if ( rc == -ERESTART )
>          {
>              a.u.set_mem_access_multi.opaque =
> @@ -5065,6 +5081,9 @@ static int compat_altp2m_op(
>                                         &a, u.set_mem_access_multi.opaque) )
>                  rc = -EFAULT;
>          }
> +#else
> +        rc = -EOPNOTSUPP;
> +#endif
>          break;
>  
>      default:

Are these changes really needed?

> --- a/xen/arch/x86/include/asm/hvm/hvm.h
> +++ b/xen/arch/x86/include/asm/hvm/hvm.h
> @@ -192,7 +192,10 @@ struct hvm_function_table {
>      void (*handle_cd)(struct vcpu *v, unsigned long value);
>      void (*set_info_guest)(struct vcpu *v);
>      void (*set_rdtsc_exiting)(struct vcpu *v, bool enable);
> +#ifdef CONFIG_VM_EVENT
>      void (*set_descriptor_access_exiting)(struct vcpu *v, bool enable);
> +    void (*enable_msr_interception)(struct domain *d, uint32_t msr);
> +#endif
>  
>      /* Nested HVM */
>      int (*nhvm_vcpu_initialise)(struct vcpu *v);

Another blank line ahead of the #ifdef?

> @@ -433,10 +434,12 @@ static inline bool using_svm(void)
>  
>  #define hvm_long_mode_active(v) (!!((v)->arch.hvm.guest_efer & EFER_LMA))
>  
> +#ifdef CONFIG_VM_EVENT
>  static inline bool hvm_has_set_descriptor_access_exiting(void)
>  {
>      return hvm_funcs.set_descriptor_access_exiting;
>  }
> +#endif
>  
>  static inline void hvm_domain_creation_finished(struct domain *d)
>  {
> @@ -679,10 +682,12 @@ static inline int nhvm_hap_walk_L1_p2m(
>          v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec);
>  }
>  
> +#ifdef CONFIG_VM_EVENT
>  static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
>  {
>      alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
>  }
> +#endif

Move this up into the earlier #ifdef?
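I.e. (just a sketch of what I mean) grouping both VM_EVENT-only wrappers under
the one conditional:

    #ifdef CONFIG_VM_EVENT
    static inline bool hvm_has_set_descriptor_access_exiting(void)
    {
        return hvm_funcs.set_descriptor_access_exiting;
    }

    /* Moved up next to the other VM_EVENT-only inline. */
    static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
    {
        alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
    }
    #endif /* CONFIG_VM_EVENT */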

> --- a/xen/arch/x86/include/asm/mem_access.h
> +++ b/xen/arch/x86/include/asm/mem_access.h
> @@ -14,6 +14,7 @@
>  #ifndef __ASM_X86_MEM_ACCESS_H__
>  #define __ASM_X86_MEM_ACCESS_H__
>  
> +#ifdef CONFIG_VM_EVENT
>  /*
>   * Setup vm_event request based on the access (gla is -1ull if not available).
>   * Handles the rw2rx conversion. Boolean return value indicates if event type
> @@ -25,6 +26,14 @@
>  bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>                            struct npfec npfec,
>                            struct vm_event_st **req_ptr);
> +#else
> +static inline bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> +                                        struct npfec npfec,
> +                                        struct vm_event_st **req_ptr)
> +{
> +    return false;

Leaving *req_ptr untouched feels dangerous; the fact that the sole caller has
what it uses set to NULL up front is secondary.

From looking at the function it's also not quite clear to me whether "false" is
the correct return value here. Tamas?
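
If the stub stays, perhaps something along these lines (only a sketch; whether
"false" is the right value is the question above):

    static inline bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                                            struct npfec npfec,
                                            struct vm_event_st **req_ptr)
    {
        /* Don't leave the caller's pointer untouched. */
        if ( req_ptr )
            *req_ptr = NULL;

        /* Return value to be confirmed (see question above). */
        return false;
    }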

> --- a/xen/arch/x86/include/asm/monitor.h
> +++ b/xen/arch/x86/include/asm/monitor.h
> @@ -32,6 +32,7 @@ struct monitor_msr_bitmap {
>      DECLARE_BITMAP(high, 8192);
>  };
>  
> +#ifdef COMFIG_VM_EVENT

Typo aside, isn't the entire file (perhaps minus some stubs) useful only when
VM_EVENT=y?

> --- a/xen/include/xen/mem_access.h
> +++ b/xen/include/xen/mem_access.h
> @@ -74,9 +74,19 @@ typedef enum {
>  } p2m_access_t;
>  
>  struct p2m_domain;
> +#ifdef CONFIG_VM_EVENT
>  bool xenmem_access_to_p2m_access(const struct p2m_domain *p2m,
>                                   xenmem_access_t xaccess,
>                                   p2m_access_t *paccess);
> +#else
> +static inline bool xenmem_access_to_p2m_access(const struct p2m_domain *p2m,
> +                                               xenmem_access_t xaccess,
> +                                               p2m_access_t *paccess)
> +{
> +    *paccess = p2m_access_rwx;

Why not p2m->default_access, as the full function has it? And should xaccess
other than XENMEM_access_default be rejected, by returning false? (In turn I
wonder whether the real function may not want to move elsewhere, so that a
stub open-coding part of it wouldn't be necessary.)
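
For illustration only, the kind of stub I'd find less surprising (it open-codes
just the default case, and presumes struct p2m_domain is complete where the
stub lives, which is part of why moving the real function may be preferable):

    static inline bool xenmem_access_to_p2m_access(const struct p2m_domain *p2m,
                                                   xenmem_access_t xaccess,
                                                   p2m_access_t *paccess)
    {
        /* Without VM_EVENT only the default access type can make sense. */
        if ( xaccess != XENMEM_access_default )
            return false;

        *paccess = p2m->default_access;
        return true;
    }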

Jan
