
Re: [Xen-devel] [PATCH 2/3] domctl: perform initial post-XSA-77 auditing



On 30/04/14 15:23, Jan Beulich wrote:
In a number of cases, loops over each vCPU in a domain are involved
here. For large numbers of vCPU-s these may still take some time to
complete, but we're limiting them to a couple of thousand at most, so I
would think this should not by itself be an issue. I wonder though
whether it shouldn't be possible to have XSM restrict the vCPU count
that can be set through XEN_DOMCTL_max_vcpus.
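
(For illustration only, a value-aware check might look like the sketch
below. xsm_domctl_max_vcpus is a hypothetical hook name; no such hook
exists today - the current xsm_domctl() hook sees only the command
number, not the requested count, which is why this isn't possible at
present.)

    case XEN_DOMCTL_max_vcpus:
        /* Hypothetical sketch only: let XSM see (and veto) the value. */
        ret = xsm_domctl_max_vcpus(XSM_PRIV, d, op->u.max_vcpus.max);
        if ( ret )
            break;
        /* ... existing vCPU structure allocation would follow ... */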

XEN_DOMCTL_pausedomain:

 A loop over vcpu_sleep_sync() for each vCPU in the domain. That
 function itself has a loop waiting for the subject vCPU to become non-
 runnable, which ought to complete quickly (involving an IPI to be sent
 and acted on). No other unbounded resource usage.
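
 For reference, the pattern being audited here is roughly the following
 (a sketch, not the exact code in common/domain.c):

    void domain_pause(struct domain *d)
    {
        struct vcpu *v;

        ASSERT(d != current->domain);

        atomic_inc(&d->pause_count);

        for_each_vcpu ( d, v )
            vcpu_sleep_sync(v); /* returns once the vCPU is descheduled */
    }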

XEN_DOMCTL_unpausedomain:

 Simply a loop calling vcpu_wake() (which itself has no loops or other
 resource usage) for each vCPU in the domain.
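
 The corresponding unpause path, again roughly (sketch):

    void domain_unpause(struct domain *d)
    {
        struct vcpu *v;

        if ( atomic_dec_and_test(&d->pause_count) )
            for_each_vcpu ( d, v )
                vcpu_wake(v); /* no further loops or allocations */
    }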

XEN_DOMCTL_getdomaininfo:

 Two loops (one over all domains, i.e. bounded by the limit of 32k
 domains, and another over all vCPU-s in the domain); no other
 unbounded resource usage.
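
 The two loops are approximately the following (sketch, simplified from
 common/domctl.c; not the exact code):

    rcu_read_lock(&domlist_read_lock);

    for_each_domain ( d )              /* bounded by the number of domains */
        if ( d->domain_id >= op->domain )
            break;

    /* getdomaininfo() then iterates over the domain's vCPU-s: */
    for_each_vcpu ( d, v )             /* bounded by the domain's vCPU count */
    {
        vcpu_runstate_get(v, &runstate);
        cpu_time += runstate.time[RUNSTATE_running];
        if ( !(v->pause_flags & VPF_down) )
            nr_online_vcpus++;
    }

    rcu_read_unlock(&domlist_read_lock);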

XEN_DOMCTL_getpageframeinfo:

 Inquiring just a single MFN, i.e. no loops and no other unbounded
 resource usage.

XEN_DOMCTL_getpageframeinfo{2,3}:

 The number of inquired MFNs is limited to 1024. Beyond that, just like
 XEN_DOMCTL_getpageframeinfo.
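
 That is, the handlers start with a bound check along these lines
 (sketch, not the exact code):

    if ( unlikely(num > 1024) )
        return -E2BIG;             /* at most 1024 MFNs examined per call */

    for ( i = 0; i < num; i++ )
    {
        /* copy one GFN in, look up and classify the page, copy the
         * type information back out */
    }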

XEN_DOMCTL_getvcpuinfo:

 Only obtaining information on the vCPU, no loops or other resource
 usage.

XEN_DOMCTL_setdomainhandle:

 Simply a memcpy() of a very limited amount of data.

XEN_DOMCTL_setdebugging:

 A domain_{,un}pause() pair (see XEN_DOMCTL_{,un}pausedomain) framing
 the setting of a flag.
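
 That is, essentially (sketch):

    domain_pause(d);
    d->debugger_attached = !!op->u.setdebugging.enable;
    domain_unpause(d);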

XEN_DOMCTL_hypercall_init:

 Initializing a guest-provided page with hypercall stubs. No other
 resource consumption.
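
 Roughly the following shape (sketch, simplified from the x86 handler;
 error paths trimmed):

    static int hypercall_init_sketch(struct domain *d, unsigned long gmfn)
    {
        struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
        void *hypercall_page;

        if ( !page || !get_page_type(page, PGT_writable_page) )
        {
            if ( page )
                put_page(page);
            return -EACCES;
        }

        hypercall_page = __map_domain_page(page);
        hypercall_page_initialise(d, hypercall_page); /* writes the stubs */
        unmap_domain_page(hypercall_page);

        put_page_and_type(page);
        return 0;
    }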

XEN_DOMCTL_arch_setup:

 IA64 leftover, interface structure being removed from the public
 header.

XEN_DOMCTL_settimeoffset:

 Setting a couple of guest state fields. No other resource consumption.

XEN_DOMCTL_getvcpuaffinity:
XEN_DOMCTL_getnodeaffinity:

 Involve temporary memory allocations (approximately) bounded by the
 number of CPUs in the system / number of nodes built for, which is
 okay. Beyond that, a trivial operation.
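
 The allocation in question follows the shape of the
 cpumask_to_xenctl_bitmap() helper (sketch, not the exact code); the
 scratch buffer is sized by the host CPU count, not by guest input:

    unsigned int copy_bytes = (nr_cpu_ids + 7) / 8;
    uint8_t *bytemap = xzalloc_array(uint8_t, copy_bytes);

    if ( !bytemap )
        return -ENOMEM;

    bitmap_long_to_byte(bytemap, cpumask_bits(v->cpu_affinity), nr_cpu_ids);

    if ( copy_to_guest(xcb->bitmap, bytemap, copy_bytes) ) /* xcb: caller's xenctl_bitmap */
        ret = -EFAULT;

    xfree(bytemap);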

XEN_DOMCTL_real_mode_area:

 PPC leftover, interface structure being removed from the public
 header.

XEN_DOMCTL_resumedomain:

 A domain_{,un}pause() pair framing an operation very similar to
 XEN_DOMCTL_unpausedomain (see above).

XEN_DOMCTL_sendtrigger:

 Injects an interrupt (SCI or NMI) without any other resource
 consumption.

XEN_DOMCTL_subscribe:

 Updates the suspend event channel, i.e. affecting only the controlled
 domain.

XEN_DOMCTL_disable_migrate:
XEN_DOMCTL_suppress_spurious_page_faults:

 Just setting respective flags on the domain.

XEN_DOMCTL_get_address_size:

 Simply reading the guest property.

XEN_DOMCTL_set_opt_feature:

 Was already tagged IA64-only.

XEN_DOMCTL_set_cpuid:

 A loop bounded by MAX_CPUID_INPUT, which is okay. No other resource
 consumption.
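
 That is, the handler just scans the fixed-size per-domain array
 (sketch, close to but not exactly the x86 code):

    for ( i = 0; i < MAX_CPUID_INPUT; i++ )
    {
        cpuid = &d->arch.cpuids[i];

        if ( cpuid->input[0] == XEN_CPUID_INPUT_UNUSED )
            break;                          /* free slot */

        if ( (cpuid->input[0] == ctl->input[0]) &&
             ((cpuid->input[1] == XEN_CPUID_INPUT_UNUSED) ||
              (cpuid->input[1] == ctl->input[1])) )
            break;                          /* existing entry to replace */
    }

    if ( i < MAX_CPUID_INPUT )
        *cpuid = *ctl;                      /* no other allocations */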

XEN_DOMCTL_get_machine_address_size:

 Simply obtaining the value set by XEN_DOMCTL_set_machine_address_size
 (or the default set at domain creation time).

XEN_DOMCTL_gettscinfo:
XEN_DOMCTL_settscinfo:

 Reading/writing of a couple of guest state fields wrapped in a
 domain_{,un}pause() pair.

XEN_DOMCTL_audit_p2m:

 Enabled only in debug builds.

XEN_DOMCTL_set_max_evtchn:

 While the limit set here implies other (subsequent) resource usage,
 this is the purpose of the operation.

I also verified that none of the removed domctls' handlers leak
hypervisor memory contents.

Inspected but questionable (and hence left in place for now):

XEN_DOMCTL_max_mem:

 This only sets the field capping a domain's allocation (which implies
 potential subsequent resource usage, but that is the purpose of the
 operation). However, XSM doesn't see the value being set here, so the
 net effect could be unbounded memory use.

It is possibly a good thing for dom0/etc to set a limit which domain-builder domains can't exceed.
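
For reference, the handler itself is just a capped field update (sketch,
simplified); the point above is that the value never reaches XSM:

    new_max = op->u.max_mem.max_memkb >> (PAGE_SHIFT - 10);

    spin_lock(&d->page_alloc_lock);
    ret = -EINVAL;
    if ( new_max >= d->tot_pages )
    {
        d->max_pages = new_max;
        ret = 0;
    }
    spin_unlock(&d->page_alloc_lock);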


XEN_DOMCTL_set_virq_handler:

 This modifies a global array. While that is the purpose of the
 operation, if multiple domains are granted permission they can badly
 interfere with one another. Hence I'd appreciate a second opinion
 here.

Do you mean domains nabbing each other's ownership of global virqs?  Global virqs are all system-level things.  They are not needed for disaggregated domain-builder domains or device-model domains.

If the core toolstack has been disaggregated to the point at which virqs are going to different domains, and you can't trust those domains to play nicely together, there are far larger problems at hand than any damage this hypercall can do.

I think it is fine as-is.
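
For context, the global state in question is roughly the following
(sketch of set_global_virq_handler(), simplified from
common/event_channel.c):

    /* A single system-wide table maps each global VIRQ to the domain
     * handling it, so two domains granted this permission can overwrite
     * each other's entries. */
    static DEFINE_SPINLOCK(global_virq_handlers_lock);
    static struct domain *global_virq_handlers[NR_VIRQS];

    int set_global_virq_handler(struct domain *d, uint32_t virq)
    {
        struct domain *old;

        if ( virq >= NR_VIRQS || !virq_is_global(virq) )
            return -EINVAL;

        if ( !get_domain(d) )
            return -EINVAL;

        spin_lock(&global_virq_handlers_lock);
        old = global_virq_handlers[virq];
        global_virq_handlers[virq] = d;
        spin_unlock(&global_virq_handlers_lock);

        if ( old != NULL )
            put_domain(old);

        return 0;
    }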


Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>


--- a/docs/misc/xsm-flask.txt
+++ b/docs/misc/xsm-flask.txt
@@ -64,67 +64,39 @@ __HYPERVISOR_domctl (xen/include/public/
 
  * XEN_DOMCTL_createdomain
  * XEN_DOMCTL_destroydomain
- * XEN_DOMCTL_pausedomain
- * XEN_DOMCTL_unpausedomain
- * XEN_DOMCTL_getdomaininfo
  * XEN_DOMCTL_getmemlist
- * XEN_DOMCTL_getpageframeinfo
- * XEN_DOMCTL_getpageframeinfo2
  * XEN_DOMCTL_setvcpuaffinity
  * XEN_DOMCTL_shadow_op
  * XEN_DOMCTL_max_mem
  * XEN_DOMCTL_setvcpucontext
  * XEN_DOMCTL_getvcpucontext
- * XEN_DOMCTL_getvcpuinfo
  * XEN_DOMCTL_max_vcpus
  * XEN_DOMCTL_scheduler_op
- * XEN_DOMCTL_setdomainhandle
- * XEN_DOMCTL_setdebugging
  * XEN_DOMCTL_irq_permission
  * XEN_DOMCTL_iomem_permission
  * XEN_DOMCTL_ioport_permission
- * XEN_DOMCTL_hypercall_init
- * XEN_DOMCTL_arch_setup
- * XEN_DOMCTL_settimeoffset
- * XEN_DOMCTL_getvcpuaffinity
- * XEN_DOMCTL_real_mode_area
- * XEN_DOMCTL_resumedomain
- * XEN_DOMCTL_sendtrigger
- * XEN_DOMCTL_subscribe
  * XEN_DOMCTL_gethvmcontext
  * XEN_DOMCTL_sethvmcontext
  * XEN_DOMCTL_set_address_size
- * XEN_DOMCTL_get_address_size
  * XEN_DOMCTL_assign_device
  * XEN_DOMCTL_pin_mem_cacheattr
  * XEN_DOMCTL_set_ext_vcpucontext
  * XEN_DOMCTL_get_ext_vcpucontext
- * XEN_DOMCTL_set_opt_feature
  * XEN_DOMCTL_test_assign_device
  * XEN_DOMCTL_set_target
  * XEN_DOMCTL_deassign_device
- * XEN_DOMCTL_set_cpuid
  * XEN_DOMCTL_get_device_group
  * XEN_DOMCTL_set_machine_address_size
- * XEN_DOMCTL_get_machine_address_size
- * XEN_DOMCTL_suppress_spurious_page_faults
  * XEN_DOMCTL_debug_op
  * XEN_DOMCTL_gethvmcontext_partial
  * XEN_DOMCTL_mem_event_op
  * XEN_DOMCTL_mem_sharing_op
- * XEN_DOMCTL_disable_migrate
- * XEN_DOMCTL_gettscinfo
- * XEN_DOMCTL_settscinfo
- * XEN_DOMCTL_getpageframeinfo3
  * XEN_DOMCTL_setvcpuextstate
  * XEN_DOMCTL_getvcpuextstate
  * XEN_DOMCTL_set_access_required
- * XEN_DOMCTL_audit_p2m
  * XEN_DOMCTL_set_virq_handler
  * XEN_DOMCTL_set_broken_page_p2m
  * XEN_DOMCTL_setnodeaffinity
- * XEN_DOMCTL_getnodeaffinity
- * XEN_DOMCTL_set_max_evtchn
  * XEN_DOMCTL_gdbsx_guestmemio
  * XEN_DOMCTL_gdbsx_pausevcpu
  * XEN_DOMCTL_gdbsx_unpausevcpu
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -401,19 +401,6 @@ typedef struct xen_domctl_hypercall_init
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_hypercall_init_t);
 
 
-/* XEN_DOMCTL_arch_setup */
-#define _XEN_DOMAINSETUP_hvm_guest 0
-#define XEN_DOMAINSETUP_hvm_guest  (1UL<<_XEN_DOMAINSETUP_hvm_guest)
-#define _XEN_DOMAINSETUP_query 1 /* Get parameters (for save)  */
-#define XEN_DOMAINSETUP_query  (1UL<<_XEN_DOMAINSETUP_query)
-#define _XEN_DOMAINSETUP_sioemu_guest 2
-#define XEN_DOMAINSETUP_sioemu_guest  (1UL<<_XEN_DOMAINSETUP_sioemu_guest)
-typedef struct xen_domctl_arch_setup {
-    uint64_aligned_t flags;  /* XEN_DOMAINSETUP_* */
-} xen_domctl_arch_setup_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_arch_setup_t);
-
-
 /* XEN_DOMCTL_settimeoffset */
 struct xen_domctl_settimeoffset {
     int32_t  time_offset_seconds; /* applied to domain wallclock time */
@@ -440,14 +427,6 @@ typedef struct xen_domctl_address_size {
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_address_size_t);
 
 
-/* XEN_DOMCTL_real_mode_area */
-struct xen_domctl_real_mode_area {
-    uint32_t log; /* log2 of Real Mode Area size */
-};
-typedef struct xen_domctl_real_mode_area xen_domctl_real_mode_area_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_real_mode_area_t);
-
-
 /* XEN_DOMCTL_sendtrigger */
 #define XEN_DOMCTL_SENDTRIGGER_NMI    0
 #define XEN_DOMCTL_SENDTRIGGER_RESET  1
@@ -940,10 +919,10 @@ struct xen_domctl {
 #define XEN_DOMCTL_iomem_permission              20
 #define XEN_DOMCTL_ioport_permission             21
 #define XEN_DOMCTL_hypercall_init                22
-#define XEN_DOMCTL_arch_setup                    23
+#define XEN_DOMCTL_arch_setup                    23 /* Obsolete IA64 only */
 #define XEN_DOMCTL_settimeoffset                 24
 #define XEN_DOMCTL_getvcpuaffinity               25
-#define XEN_DOMCTL_real_mode_area                26
+#define XEN_DOMCTL_real_mode_area                26 /* Obsolete PPC only */
 #define XEN_DOMCTL_resumedomain                  27
 #define XEN_DOMCTL_sendtrigger                   28
 #define XEN_DOMCTL_subscribe                     29
@@ -1013,11 +992,9 @@ struct xen_domctl {
         struct xen_domctl_iomem_permission  iomem_permission;
         struct xen_domctl_ioport_permission ioport_permission;
         struct xen_domctl_hypercall_init    hypercall_init;
-        struct xen_domctl_arch_setup        arch_setup;
         struct xen_domctl_settimeoffset     settimeoffset;
         struct xen_domctl_disable_migrate   disable_migrate;
         struct xen_domctl_tsc_info          tsc_info;
-        struct xen_domctl_real_mode_area    real_mode_area;
         struct xen_domctl_hvmcontext        hvmcontext;
         struct xen_domctl_hvmcontext_partial hvmcontext_partial;
         struct xen_domctl_address_size      address_size;




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
