[Xen-devel] [PATCH for-4.5 v8 4/7] xen: Add vmware_port support
Attempt to send in text. Also attached.
-Don Slutz
--------------------------------------------------------
From 4db1093d0b420cc54258c0db03d991fa3b3acd7f Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@xxxxxxxxxxx>
Date: Thu, 21 Nov 2013 15:01:08 -0500
Subject: [PATCH for-4.5 v8 4/7] xen: Add vmware_port support
This includes adding is_vmware_port_enabled, which is controlled by a
new domain_create() flag, DOMCRF_vmware_port, exposed through the
domctl interface as XEN_DOMCTL_CDF_vmware_port.
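For illustration only (not part of this patch), a toolstack could
request the new flag at domain-creation time roughly as follows; the
sketch assumes the Xen 4.5-era libxc xc_domain_create() signature and
uses a made-up helper name:

    #include <xenctrl.h>

    /* Sketch: create an HVM, HAP domain with the VMware backdoor port
     * enabled via the new createdomain flag. */
    int create_domain_with_vmware_port(xc_interface *xch, uint32_t ssidref,
                                       xen_domain_handle_t handle,
                                       uint32_t *domid)
    {
        uint32_t flags = XEN_DOMCTL_CDF_hvm_guest | XEN_DOMCTL_CDF_hap |
                         XEN_DOMCTL_CDF_vmware_port;  /* new in this patch */

        return xc_domain_create(xch, ssidref, handle, flags, domid);
    }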
This enables limited support of VMware's hyper-call, also known as the
VMware Backdoor I/O Port. The support is both more complete than what
is currently provided by QEMU and/or KVM, and less: the missing part
requires QEMU changes and has been left out until the QEMU patches are
accepted upstream.
Note: this support does not depend on vmware_hw being non-zero.
The summary is that VMware treats "in (%dx),%eax" (or "out %eax,(%dx)")
to port 0x5658 specially. Note: since many operations return data in
EAX, "in (%dx),%eax" is the one to use. The other lengths, like
"in (%dx),%al", still perform the operation, but only the AL part of
EAX is changed. For "out %eax,(%dx)" of all lengths, EAX remains
unchanged.
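For illustration only (not part of this patch), a minimal guest-side
probe of the backdoor looks like the sketch below. It assumes it runs
inside a guest that provides the backdoor (otherwise the ring-3 "in"
simply faults) and uses the usual open-vm-tools constants (magic
0x564D5868, command 10 = GETVERSION):

    #include <stdint.h>
    #include <stdio.h>

    #define BDOOR_MAGIC          0x564D5868u  /* "VMXh" */
    #define BDOOR_PORT           0x5658u
    #define BDOOR_CMD_GETVERSION 10u

    int main(void)
    {
        uint32_t eax = BDOOR_MAGIC, ebx = ~0u,
                 ecx = BDOOR_CMD_GETVERSION, edx = BDOOR_PORT;

        /* "in (%dx),%eax": magic in EAX, command in ECX, port in DX;
         * results come back in EAX/EBX/ECX. */
        asm volatile ( "inl %%dx, %%eax"
                       : "+a" (eax), "+b" (ebx), "+c" (ecx), "+d" (edx) );

        if ( ebx == BDOOR_MAGIC )
            printf("backdoor present: version %u, product type %u\n",
                   eax, ecx);
        else
            printf("backdoor not detected\n");
        return 0;
    }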
This instruction is also allowed from ring 3. To support that, the
vmexit for #GP needs to be enabled. I have not fully tested that
nested HVM does the right thing in this case.
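Roughly, the new #GP intercept only emulates the access when it looks
exactly like a backdoor request, and reflects the fault back into the
guest otherwise. A condensed sketch of that decision (the real checks
live in vmport_gp_check() below; the helper name here is made up):

    #include <stdbool.h>
    #include <stdint.h>

    #define BDOOR_PORT  0x5658u
    #define BDOOR_MAGIC 0x564D5868u

    /* Emulate only a ring-3 in/out aimed at the backdoor port, with the
     * magic value in EAX and a zero #GP error code; anything else is
     * re-injected into the guest unchanged. */
    static bool backdoor_gp_should_emulate(bool vmware_port_enabled,
                                           uint32_t eax, uint16_t dx,
                                           unsigned long error_code)
    {
        return vmware_port_enabled && error_code == 0 &&
               dx == BDOOR_PORT && eax == BDOOR_MAGIC;
    }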
An open source example of using this is:
http://open-vm-tools.sourceforge.net/
which only uses "inl (%dx)". See also:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009458
The support included is enough to allow VMware tools to install in an
HVM domU.
For AMD (SVM) the maximum instruction length of 15 is hard coded.
This is because __get_instruction_length_from_list() has issues: when
called from the #GP handler, NRIP is not available, or NRIP may not be
available at all on particular hardware, which would lead to reading
the instruction twice --- once in __get_instruction_length_from_list()
and then again in vmport_gp_check(). That is bad, because guest memory
may change between the two reads.
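As a rough illustration (again, not the patch code itself), the
single-fetch decode done in vmport_gp_check() boils down to skipping
operand-size prefixes within the 15-byte architectural limit and then
classifying the one-byte in/out opcode:

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_INST_LEN 15  /* architectural x86 instruction-length limit */

    /* Returns the in/out opcode byte (0xec/0xed/0xee/0xef) and sets *len
     * to the number of bytes consumed, or returns -1 if the fetched bytes
     * do not look like a port in/out instruction. */
    static int classify_inout(const uint8_t *bytes, size_t avail,
                              size_t *len)
    {
        size_t i = 0;

        while ( i < avail && i < MAX_INST_LEN && bytes[i] == 0x66 )
            i++;  /* skip operand-size prefix */

        if ( i < avail &&
             (bytes[i] == 0xec || bytes[i] == 0xed ||  /* in  %dx -> al/eax */
              bytes[i] == 0xee || bytes[i] == 0xef) )  /* out al/eax -> %dx */
        {
            *len = i + 1;
            return bytes[i];
        }
        return -1;
    }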
Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
---
v8:
  Switch to _ebx etc.
v7:
  More on AMD in the commit message.
  Switch to only changing the 32-bit part of the registers, which is
  what VMware does.
  "Too much logging and tracing."
    Dropped a lot of it. This includes vmport_debug=
v6:
  Dropped the attempt to use svm_nextrip_insn_length via
  __get_instruction_length (added in v2). Just always look at up to
  15 bytes on AMD.
v5:
  "we should make sure that svm_vmexit_gp_intercept is not executed
   for any other guest."
    Added an ASSERT on is_vmware_port_enabled.
  "magic integers?"
    Added #define for them.
  "I am fairly certain that you need some brackets here."
    Added brackets.
 xen/arch/x86/domain.c                 |   2 +
 xen/arch/x86/hvm/hvm.c                |   4 +
 xen/arch/x86/hvm/svm/emulate.c        |   2 +-
 xen/arch/x86/hvm/svm/svm.c            |  30 ++++
 xen/arch/x86/hvm/svm/vmcb.c           |   2 +
 xen/arch/x86/hvm/vmware/Makefile      |   1 +
 xen/arch/x86/hvm/vmware/vmport.c      | 262 ++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vmcs.c           |   2 +
 xen/arch/x86/hvm/vmx/vmx.c            |  63 +++++++-
 xen/arch/x86/hvm/vmx/vvmx.c           |   3 +
 xen/common/domctl.c                   |   3 +
 xen/include/asm-x86/hvm/domain.h      |   3 +
 xen/include/asm-x86/hvm/io.h          |   2 +-
 xen/include/asm-x86/hvm/svm/emulate.h |   1 +
 xen/include/asm-x86/hvm/vmport.h      |  52 +++++++
 xen/include/public/domctl.h           |   3 +
 xen/include/xen/sched.h               |   3 +
 17 files changed, 433 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/x86/hvm/vmware/vmport.c
 create mode 100644 xen/include/asm-x86/hvm/vmport.h

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 8cfd1ca..a71da52 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -524,6 +524,8 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
d->arch.hvm_domain.mem_sharing_enabled = 0;
d->arch.s3_integrity = !!(domcr_flags & DOMCRF_s3_integrity);
+ d->arch.hvm_domain.is_vmware_port_enabled =
+ !!(domcr_flags & DOMCRF_vmware_port);
INIT_LIST_HEAD(&d->arch.pdev_list);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4039061..1357079 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -61,6 +61,7 @@
#include <asm/hvm/trace.h>
#include <asm/hvm/nestedhvm.h>
#include <asm/hvm/vmware.h>
+#include <asm/hvm/vmport.h>
#include <asm/mtrr.h>
#include <asm/apic.h>
#include <public/sched.h>
@@ -1444,6 +1445,9 @@ int hvm_domain_initialise(struct domain *d)
goto fail1;
d->arch.hvm_domain.io_handler->num_slot = 0;
+ if ( d->arch.hvm_domain.is_vmware_port_enabled )
+ vmport_register(d);
+
if ( is_pvh_domain(d) )
{
register_portio_handler(d, 0, 0x10003, handle_pvh_io);
diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
index 37a1ece..cfad9ab 100644
--- a/xen/arch/x86/hvm/svm/emulate.c
+++ b/xen/arch/x86/hvm/svm/emulate.c
@@ -50,7 +50,7 @@ static unsigned int is_prefix(u8 opc)
return 0;
}
-static unsigned long svm_rip2pointer(struct vcpu *v)
+unsigned long svm_rip2pointer(struct vcpu *v)
{
struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
unsigned long p = vmcb->cs.base + guest_cpu_user_regs()->eip;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index e3e1565..d7f13d9 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -59,6 +59,7 @@
#include <public/sched.h>
#include <asm/hvm/vpt.h>
#include <asm/hvm/trace.h>
+#include <asm/hvm/vmport.h>
#include <asm/hap.h>
#include <asm/apic.h>
#include <asm/debugger.h>
@@ -2111,6 +2112,31 @@ svm_vmexit_do_vmsave(struct vmcb_struct *vmcb,
return;
}
+static void svm_vmexit_gp_intercept(struct cpu_user_regs *regs,
+ struct vcpu *v)
+{
+ struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+ /*
+ * Just use 15 for the instruction length; vmport_gp_check will
+ * adjust it. This is because
+ * __get_instruction_length_from_list() has issues, and may
+ * require a double read of the instruction bytes. At some
+ * point a new routine could be added that is based on the code
+ * in vmport_gp_check with extensions to make it more general.
+ * Since that routine is the only user of this code this can be
+ * done later.
+ */
+ unsigned long inst_len = 15;
+ unsigned long inst_addr = svm_rip2pointer(v);
+ int rc = vmport_gp_check(regs, v, &inst_len, inst_addr,
+ vmcb->exitinfo1, vmcb->exitinfo2);
+
+ if ( !rc )
+ __update_guest_eip(regs, inst_len);
+ else
+ hvm_inject_hw_exception(TRAP_gp_fault, vmcb->exitinfo1);
+}
+
static void svm_vmexit_ud_intercept(struct cpu_user_regs *regs)
{
struct hvm_emulate_ctxt ctxt;
@@ -2471,6 +2497,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
break;
}
+ case VMEXIT_EXCEPTION_GP:
+ svm_vmexit_gp_intercept(regs, v);
+ break;
+
case VMEXIT_EXCEPTION_UD:
svm_vmexit_ud_intercept(regs);
break;
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 21292bb..45ead61 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -195,6 +195,8 @@ static int construct_vmcb(struct vcpu *v)
HVM_TRAP_MASK
| (1U << TRAP_no_device);
+ if ( v->domain->arch.hvm_domain.is_vmware_port_enabled )
+ vmcb->_exception_intercepts |= 1U << TRAP_gp_fault;
if ( paging_mode_hap(v->domain) )
{
vmcb->_np_enable = 1; /* enable nested paging */
diff --git a/xen/arch/x86/hvm/vmware/Makefile b/xen/arch/x86/hvm/vmware/Makefile
index 3fb2e0b..cd8815b 100644
--- a/xen/arch/x86/hvm/vmware/Makefile
+++ b/xen/arch/x86/hvm/vmware/Makefile
@@ -1 +1,2 @@
 obj-y += cpuid.o
+obj-y += vmport.o
diff --git a/xen/arch/x86/hvm/vmware/vmport.c b/xen/arch/x86/hvm/vmware/vmport.c
new file mode 100644
index 0000000..183bb7e
--- /dev/null
+++ b/xen/arch/x86/hvm/vmware/vmport.c
@@ -0,0 +1,262 @@
+/*
+ * HVM VMPORT emulation
+ *
+ * Copyright (C) 2012 Verizon Corporation
+ *
+ * This file is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License Version 2 (GPLv2)
+ * as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details. <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <asm/hvm/hvm.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmport.h>
+
+#include "backdoor_def.h"
+
+#define MAX_INST_LEN 15
+
+#ifndef NDEBUG
+unsigned int opt_vmport_debug __read_mostly;
+integer_param("vmport_debug", opt_vmport_debug);
+#endif
+
+/* More VMware defines */
+
+#define VMWARE_GUI_AUTO_GRAB              0x001
+#define VMWARE_GUI_AUTO_UNGRAB            0x002
+#define VMWARE_GUI_AUTO_SCROLL            0x004
+#define VMWARE_GUI_AUTO_RAISE             0x008
+#define VMWARE_GUI_EXCHANGE_SELECTIONS    0x010
+#define VMWARE_GUI_WARP_CURSOR_ON_UNGRAB  0x020
+#define VMWARE_GUI_FULL_SCREEN            0x040
+
+#define VMWARE_GUI_TO_FULL_SCREEN         0x080
+#define VMWARE_GUI_TO_WINDOW              0x100
+
+#define VMWARE_GUI_AUTO_RAISE_DISABLED    0x200
+
+#define VMWARE_GUI_SYNC_TIME              0x400
+
+/* When set, toolboxes should not show the cursor options page. */
+#define VMWARE_DISABLE_CURSOR_OPTIONS     0x800
+
+void vmport_register(struct domain *d)
+{
+    register_portio_handler(d, BDOOR_PORT, 4, vmport_ioport);
+}
+
+int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    uint16_t cmd = regs->rcx;
+    int rc = X86EMUL_OKAY;
+
+    if ( regs->_eax == BDOOR_MAGIC )
+    {
+        uint64_t saved_rax = regs->rax;
+        uint64_t value;
+        struct vcpu *curr = current;
+        struct domain *d = curr->domain;
+        struct segment_register sreg;
+
+        switch ( cmd )
+        {
+        case BDOOR_CMD_GETMHZ:
+            regs->_eax = d->arch.tsc_khz / 1000;
+            break;
+        case BDOOR_CMD_GETVERSION:
+            /* MAGIC */
+            regs->_ebx = BDOOR_MAGIC;
+            /* VERSION_MAGIC */
+            regs->_eax = 6;
+            /* Claim we are an ESX. VMX_TYPE_SCALABLE_SERVER */
+            regs->_ecx = 2;
+            break;
+        case BDOOR_CMD_GETSCREENSIZE:
+            /* We have no screen size */
+            regs->_eax = ~0u;
+            break;
+        case BDOOR_CMD_GETHWVERSION:
+            /* vmware_hw */
+            regs->_eax = 0;
+            if ( is_hvm_vcpu(curr) )
+            {
+                struct hvm_domain *hd = &d->arch.hvm_domain;
+
+                regs->_eax = hd->params[HVM_PARAM_VMWARE_HW];
+            }
+            if ( !regs->_eax )
+                regs->_eax = 4;  /* Act like version 4 */
+            break;
+        case BDOOR_CMD_GETHZ:
+            hvm_get_segment_register(curr, x86_seg_ss, &sreg);
+            if ( sreg.attr.fields.dpl == 0 )
+            {
+                value = d->arch.tsc_khz * 1000;
+                /* apic-frequency (bus speed) */
+                regs->_ecx = 1000000000ULL / APIC_BUS_CYCLE_NS;
+                /* High part of tsc-frequency */
+                regs->_ebx = value >> 32;
+                /* Low part of tsc-frequency */
+                regs->_eax = value;
+            }
+            break;
+        case BDOOR_CMD_GETTIME:
+            value = get_localtime_us(d) - d->time_offset_seconds * 1000000ULL;
+            /* hostUsecs */
+            regs->_ebx = value % 1000000UL;
+            /* hostSecs */
+            regs->_eax = value / 1000000ULL;
+            /* maxTimeLag */
+            regs->_ecx = 1000000;
+            /* offset to GMT in minutes */
+            regs->_edx = d->time_offset_seconds / 60;
+            break;
+        case BDOOR_CMD_GETTIMEFULL:
+            value = get_localtime_us(d) - d->time_offset_seconds * 1000000ULL;
+            /* ... */
+            regs->_eax = BDOOR_MAGIC;
+            /* hostUsecs */
+            regs->_ebx = value / 1000000ULL;
+            /* maxTimeLag */
+            regs->_ecx = 1000000;
+            break;
+        case BDOOR_CMD_GETGUIOPTIONS:
+            regs->_eax = VMWARE_GUI_AUTO_GRAB | VMWARE_GUI_AUTO_UNGRAB |
+                         VMWARE_GUI_AUTO_RAISE_DISABLED | VMWARE_GUI_SYNC_TIME |
+                         VMWARE_DISABLE_CURSOR_OPTIONS;
+            break;
+        case BDOOR_CMD_SETGUIOPTIONS:
+            regs->_eax = 0x0;
+            break;
+        default:
+            regs->_eax = ~0u;
+            break;
+        }
+        if ( dir == IOREQ_READ )
+        {
+            switch ( bytes )
+            {
+            case 1:
+                regs->rax = (saved_rax & 0xffffff00) | (regs->rax & 0xff);
+                break;
+            case 2:
+                regs->rax = (saved_rax & 0xffff0000) | (regs->rax & 0xffff);
+                break;
+            case 4:
+                regs->rax = regs->_eax;
+                break;
+            }
+            *val = regs->rax;
+        }
+        else
+            regs->rax = saved_rax;
+    }
+    else
+        rc = X86EMUL_UNHANDLEABLE;
+
+    return rc;
+}
+
+int vmport_gp_check(struct cpu_user_regs *regs, struct vcpu *v,
+                    unsigned long *inst_len, unsigned long inst_addr,
+                    unsigned long ei1, unsigned long ei2)
+{
+    if ( !v->domain->arch.hvm_domain.is_vmware_port_enabled )
+        return X86EMUL_VMPORT_NOT_ENABLED;
+
+    if ( *inst_len && *inst_len <= MAX_INST_LEN &&
+         (regs->rdx & 0xffff) == BDOOR_PORT && ei1 == 0 && ei2 == 0 &&
+         regs->_eax == BDOOR_MAGIC )
+    {
+        int i = 0;
+        uint32_t val;
+        uint32_t byte_cnt = hvm_guest_x86_mode(v);
+        unsigned char bytes[MAX_INST_LEN];
+        unsigned int fetch_len;
+        int frc;
+
+        /* in or out are limited to 32bits */
+        if ( byte_cnt > 4 )
+            byte_cnt = 4;
+
+        /*
+         * Fetch up to the next page break; we'll fetch from the
+         * next page later if we have to.
+         */
+        fetch_len = min_t(unsigned int, *inst_len,
+                          PAGE_SIZE - (inst_addr & ~PAGE_MASK));
+        frc = hvm_fetch_from_guest_virt_nofault(bytes, inst_addr, fetch_len,
+                                                PFEC_page_present);
+        if ( frc != HVMCOPY_okay )
+        {
+            gdprintk(XENLOG_WARNING,
+                     "Bad instruction fetch at %#lx (frc=%d il=%lu fl=%u)\n",
+                     (unsigned long) inst_addr, frc, *inst_len, fetch_len);
+            return X86EMUL_VMPORT_FETCH_ERROR_BYTE1;
+        }
+
+        /* Check for operand size prefix */
+        while ( (i < MAX_INST_LEN) && (bytes[i] == 0x66) )
+        {
+            i++;
+            if ( i >= fetch_len )
+            {
+                frc = hvm_fetch_from_guest_virt_nofault(
+                    &bytes[fetch_len], inst_addr + fetch_len,
+                    MAX_INST_LEN - fetch_len, PFEC_page_present);
+                if ( frc != HVMCOPY_okay )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "Bad instruction fetch at %#lx + %#x (frc=%d)\n",
+                             inst_addr, fetch_len, frc);
+                    return X86EMUL_VMPORT_FETCH_ERROR_BYTE2;
+                }
+                fetch_len = MAX_INST_LEN;
+            }
+        }
+        *inst_len = i + 1;
+
+        /* Only adjust byte_cnt 1 time */
+        if ( bytes[0] == 0x66 )  /* operand size prefix */
+        {
+            if ( byte_cnt == 4 )
+                byte_cnt = 2;
+            else
+                byte_cnt = 4;
+        }
+        if ( bytes[i] == 0xed )  /* in (%dx),%eax or in (%dx),%ax */
+            return vmport_ioport(IOREQ_READ, BDOOR_PORT, byte_cnt, &val);
+        else if ( bytes[i] == 0xec )  /* in (%dx),%al */
+            return vmport_ioport(IOREQ_READ, BDOOR_PORT, 1, &val);
+        else if ( bytes[i] == 0xef )  /* out %eax,(%dx) or out %ax,(%dx) */
+            return vmport_ioport(IOREQ_WRITE, BDOOR_PORT, byte_cnt, &val);
+        else if ( bytes[i] == 0xee )  /* out %al,(%dx) */
+            return vmport_ioport(IOREQ_WRITE, BDOOR_PORT, 1, &val);
+        else
+        {
+            *inst_len = 0;  /* This is unknown. */
+            return X86EMUL_VMPORT_BAD_OPCODE;
+        }
+    }
+    *inst_len = 0;  /* This is unknown. */
+    return X86EMUL_VMPORT_BAD_STATE;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9d8033e..1bab216 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1102,6 +1102,8 @@ static int construct_vmcs(struct vcpu *v)
v->arch.hvm_vmx.exception_bitmap = HVM_TRAP_MASK
| (paging_mode_hap(d) ? 0 : (1U << TRAP_page_fault))
+ | (v->domain->arch.hvm_domain.is_vmware_port_enabled ?
+ (1U << TRAP_gp_fault) : 0)
| (1U << TRAP_no_device);
vmx_update_exception_bitmap(v);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 304aeea..300d804 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -44,6 +44,7 @@
#include <asm/hvm/support.h>
#include <asm/hvm/vmx/vmx.h>
#include <asm/hvm/vmx/vmcs.h>
+#include <asm/hvm/vmport.h>
#include <public/sched.h>
#include <public/hvm/ioreq.h>
#include <asm/hvm/vpic.h>
@@ -1276,9 +1277,11 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
vmx_set_segment_register(
v, s, &v->arch.hvm_vmx.vm86_saved_seg[s]);
v->arch.hvm_vmx.exception_bitmap = HVM_TRAP_MASK
- | (paging_mode_hap(v->domain) ?
- 0 : (1U << TRAP_page_fault))
- | (1U << TRAP_no_device);
+ | (paging_mode_hap(v->domain) ?
+ 0 : (1U << TRAP_page_fault))
+ | (v->domain->arch.hvm_domain.is_vmware_port_enabled ?
+ (1U << TRAP_gp_fault) : 0)
+ | (1U << TRAP_no_device);
vmx_update_exception_bitmap(v);
vmx_update_debug_state(v);
}
@@ -2589,6 +2592,57 @@ static void vmx_idtv_reinject(unsigned long idtv_info)
}
}
+static unsigned long vmx_rip2pointer(struct cpu_user_regs *regs,
+ struct vcpu *v)
+{
+ struct segment_register cs;
+ unsigned long p;
+
+ vmx_get_segment_register(v, x86_seg_cs, &cs);
+ p = cs.base + regs->rip;
+ if ( !(cs.attr.fields.l && hvm_long_mode_enabled(v)) )
+ return (uint32_t)p; /* mask to 32 bits */
+ return p;
+}
+
+static void vmx_vmexit_gp_intercept(struct cpu_user_regs *regs,
+ struct vcpu *v)
+{
+ unsigned long exit_qualification;
+ unsigned long inst_len;
+ unsigned long inst_addr = vmx_rip2pointer(regs, v);
+ unsigned long ecode;
+ int rc;
+#ifndef NDEBUG
+ unsigned long orig_inst_len;
+ unsigned long vector;
+
+ __vmread(VM_EXIT_INTR_INFO, &vector);
+ BUG_ON(!(vector & INTR_INFO_VALID_MASK));
+ BUG_ON(!(vector & INTR_INFO_DELIVER_CODE_MASK));
+#endif
+
+ __vmread(EXIT_QUALIFICATION, &exit_qualification);
+ __vmread(VM_EXIT_INSTRUCTION_LEN, &inst_len);
+ __vmread(VM_EXIT_INTR_ERROR_CODE, &ecode);
+
+#ifndef NDEBUG
+ orig_inst_len = inst_len;
+#endif
+ rc = vmport_gp_check(regs, v, &inst_len, inst_addr,
+ ecode, exit_qualification);
+#ifndef NDEBUG
+ if ( inst_len && orig_inst_len != inst_len )
+ gdprintk(XENLOG_WARNING,
+ "Unexpected instruction length difference: %lu vs %lu\n",
+ orig_inst_len, inst_len);
+#endif
+ if ( !rc )
+ update_guest_eip();
+ else
+ hvm_inject_hw_exception(TRAP_gp_fault, ecode);
+}
+
static int vmx_handle_apic_write(void)
{
unsigned long exit_qualification;
@@ -2814,6 +2868,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
HVMTRACE_1D(TRAP, vector);
vmx_fpu_dirty_intercept();
break;
+ case TRAP_gp_fault:
+ vmx_vmexit_gp_intercept(regs, v);
+ break;
case TRAP_page_fault:
__vmread(EXIT_QUALIFICATION, &exit_qualification);
__vmread(VM_EXIT_INTR_ERROR_CODE, &ecode);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 9ccc03f..8e07f92 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -24,6 +24,7 @@
#include <asm/types.h>
#include <asm/mtrr.h>
#include <asm/p2m.h>
+#include <asm/hvm/vmport.h>
#include <asm/hvm/vmx/vmx.h>
#include <asm/hvm/vmx/vvmx.h>
#include <asm/hvm/nestedhvm.h>
@@ -2182,6 +2183,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
if ( v->fpu_dirtied )
nvcpu->nv_vmexit_pending = 1;
}
+ else if ( vector == TRAP_gp_fault )
+ nvcpu->nv_vmexit_pending = 1;
else if ( (intr_info & valid_mask) == valid_mask )
{
exec_bitmap =__get_vvmcs(nvcpu->nv_vvmcx, EXCEPTION_BITMAP);
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 30c9e50..fad55a2 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -543,6 +543,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
~(XEN_DOMCTL_CDF_hvm_guest
| XEN_DOMCTL_CDF_pvh_guest
| XEN_DOMCTL_CDF_hap
+ | XEN_DOMCTL_CDF_vmware_port
| XEN_DOMCTL_CDF_s3_integrity
| XEN_DOMCTL_CDF_oos_off)) )
break;
@@ -586,6 +587,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
domcr_flags |= DOMCRF_s3_integrity;
if ( op->u.createdomain.flags & XEN_DOMCTL_CDF_oos_off )
domcr_flags |= DOMCRF_oos_off;
+ if ( op->u.createdomain.flags & XEN_DOMCTL_CDF_vmware_port )
+ domcr_flags |= DOMCRF_vmware_port;
d = domain_create(dom, domcr_flags, op->u.createdomain.ssidref);
if ( IS_ERR(d) )
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 2757c7f..d4718df 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -121,6 +121,9 @@ struct hvm_domain {
spinlock_t uc_lock;
bool_t is_in_uc_mode;
+ /* VMware backdoor port available */
+ bool_t is_vmware_port_enabled;
+
/* Pass-through */
struct hvm_iommu hvm_iommu;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 886a9d6..d257161 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -25,7 +25,7 @@
#include <public/hvm/ioreq.h>
#include <public/event_channel.h>
-#define MAX_IO_HANDLER 16
+#define MAX_IO_HANDLER 17
#define HVM_PORTIO 0
#define HVM_BUFFERED_IO 2
diff --git a/xen/include/asm-x86/hvm/svm/emulate.h b/xen/include/asm-x86/hvm/svm/emulate.h
index ccc2d3c..d9a9dc5 100644
--- a/xen/include/asm-x86/hvm/svm/emulate.h
+++ b/xen/include/asm-x86/hvm/svm/emulate.h
@@ -44,6 +44,7 @@ enum instruction_index {
struct vcpu;
+unsigned long svm_rip2pointer(struct vcpu *v);
int __get_instruction_length_from_list(
    struct vcpu *, const enum instruction_index *, unsigned int list_count);
diff --git a/xen/include/asm-x86/hvm/vmport.h b/xen/include/asm-x86/hvm/vmport.h
new file mode 100644
index 0000000..d037d55
--- /dev/null
+++ b/xen/include/asm-x86/hvm/vmport.h
@@ -0,0 +1,52 @@
+/*
+ * asm/hvm/vmport.h: HVM VMPORT emulation
+ *
+ *
+ * Copyright (C) 2012 Verizon Corporation
+ *
+ * This file is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License Version 2 (GPLv2)
+ * as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details. <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef ASM_X86_HVM_VMPORT_H__
+#define ASM_X86_HVM_VMPORT_H__
+
+void vmport_register(struct domain *d);
+int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val);
+int vmport_gp_check(struct cpu_user_regs *regs, struct vcpu *v,
+                    unsigned long *inst_len, unsigned long inst_addr,
+                    unsigned long ei1, unsigned long ei2);
+
+/*
+ * Additional return values from vmport_gp_check.
+ *
+ * Note: return values include:
+ *   X86EMUL_OKAY
+ *   X86EMUL_UNHANDLEABLE
+ *   X86EMUL_EXCEPTION
+ *   X86EMUL_RETRY
+ *   X86EMUL_CMPXCHG_FAILED
+ *
+ * The additional do not overlap any of the above.
+ */
+#define X86EMUL_VMPORT_NOT_ENABLED       10
+#define X86EMUL_VMPORT_FETCH_ERROR_BYTE1 11
+#define X86EMUL_VMPORT_FETCH_ERROR_BYTE2 12
+#define X86EMUL_VMPORT_BAD_OPCODE        13
+#define X86EMUL_VMPORT_BAD_STATE         14
+
+#endif /* ASM_X86_HVM_VMPORT_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 61f7555..2b38515 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -63,6 +63,9 @@ struct xen_domctl_createdomain {
  /* Is this a PVH guest (as opposed to an HVM or PV guest)? */
 #define _XEN_DOMCTL_CDF_pvh_guest 4
 #define XEN_DOMCTL_CDF_pvh_guest (1U<<_XEN_DOMCTL_CDF_pvh_guest)
+ /* Is VMware backdoor port available? */
+#define _XEN_DOMCTL_CDF_vmware_port 5
+#define XEN_DOMCTL_CDF_vmware_port (1U<<_XEN_DOMCTL_CDF_vmware_port)
     uint32_t flags;
 };
 typedef struct xen_domctl_createdomain xen_domctl_createdomain_t;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index c5157e6..d741978 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -546,6 +546,9 @@ struct domain *domain_create(
  /* DOMCRF_pvh: Create PV domain in HVM container. */
 #define _DOMCRF_pvh 5
 #define DOMCRF_pvh (1U<<_DOMCRF_pvh)
+ /* DOMCRF_vmware_port: Enable use of vmware backdoor port. */
+#define _DOMCRF_vmware_port 6
+#define DOMCRF_vmware_port (1U<<_DOMCRF_vmware_port)

 /*
  * rcu_lock_domain_by_id() is more efficient than get_domain_by_id().
-- 
1.8.4

Attachment: 0004-xen-Add-vmware_port-support.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel