
Re: [Xen-devel] [PATCH 14/16] x86/monitor: clarify separation between monitor subsys and vm-event as a whole



On 7/9/2016 9:26 PM, Tamas K Lengyel wrote:
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ae1dcb4..7663da2 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -9,6 +9,7 @@
  #include <asm/e820.h>
  #include <asm/mce.h>
  #include <public/vcpu.h>
+#include <public/vm_event.h>
  #include <public/hvm/hvm_info_table.h>

  #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
@@ -503,6 +504,20 @@ typedef enum __packed {
      SMAP_CHECK_DISABLED,        /* disable the check */
  } smap_check_policy_t;

+/*
+ * Should we emulate the next matching instruction on VCPU resume
+ * after a vm_event?
+ */
+struct arch_vm_event_monitor {
This should be named struct arch_vcpu_monitor.

Good idea.


+    uint32_t emulate_flags;
+    struct vm_event_emul_read_data emul_read_data;
This should probably get renamed as well at some point to struct
monitor_emul_read_data.

Ack.

+    struct monitor_write_data write_data;
+};
+
+struct arch_vm_event {
+    struct arch_vm_event_monitor *monitor;
+};
IMHO there is not much point in defining struct arch_vm_event this way; we could just as well store the pointer to the arch_monitor directly in arch_vcpu, as we do right now.


I stated the reason for that in the commit message (see the 3rd '*'): Jan insists it would be preferable to occupy just one pointer in arch_vcpu. That would still hold if we did as you suggest, but I was wondering how probable it is that one of the paging/sharing vm-event subsystems will need per-vCPU resources in the future. Personally though, yeah, I would have preferred what you suggest as well. A rough sketch of both layouts follows below.
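
Just so we're looking at the same thing, here is a standalone sketch (not a real patch) of the renames plus the two placements side by side. The field contents of the read/write data structures are placeholders, arch_vcpu_a/arch_vcpu_b are made-up names for the two arch_vcpu variants, and plain stdint types are used so the snippet compiles on its own:

#include <stdint.h>

/* Stand-in for today's vm_event_emul_read_data, just renamed; the real
 * field layout is unchanged and elided here. */
struct monitor_emul_read_data {
    uint32_t size;
    uint8_t  data[1];   /* placeholder; real size stays as it is today */
};

/* Contents unchanged, elided in this sketch. */
struct monitor_write_data {
    uint8_t opaque[1];
};

/* was struct arch_vm_event_monitor */
struct arch_vcpu_monitor {
    uint32_t emulate_flags;
    struct monitor_emul_read_data emul_read_data;
    struct monitor_write_data write_data;
};

/* (a) what this patch does: a wrapper that could later grow paging/sharing
 *     members, so arch_vcpu only ever carries one vm-event pointer */
struct arch_vm_event {
    struct arch_vcpu_monitor *monitor;
    /* possibly *paging / *sharing per-vCPU state later */
};

struct arch_vcpu_a {                 /* placeholder name for variant (a) */
    /* ... existing members ... */
    struct arch_vm_event *vm_event;  /* NULL until some subsystem allocates it */
};

/* (b) what you suggest: no wrapper, arch_vcpu points at the monitor state
 *     directly; still a single pointer today, but paging/sharing would each
 *     need their own pointer if they ever grow per-vCPU state */
struct arch_vcpu_b {                 /* placeholder name for variant (b) */
    /* ... existing members ... */
    struct arch_vcpu_monitor *monitor;
};
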

Zuzu.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
