
Re: [Xen-devel] [PATCH v5] x86/emulate: Send vm_event from emulate



> -----Original Message-----
> From: Alexandru Stefan ISAILA [mailto:aisaila@xxxxxxxxxxxxxxx]
> Sent: 04 June 2019 12:50
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; jbeulich@xxxxxxxx;
> Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; wl@xxxxxxx;
> Roger Pau Monne <roger.pau@xxxxxxxxxx>; boris.ostrovsky@xxxxxxxxxx;
> suravee.suthikulpanit@xxxxxxx; brian.woods@xxxxxxx;
> rcojocaru@xxxxxxxxxxxxxxx; tamas@xxxxxxxxxxxxx; jun.nakajima@xxxxxxxxx;
> Kevin Tian <kevin.tian@xxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Tim (Xen.org) <tim@xxxxxxx>;
> Alexandru Stefan ISAILA <aisaila@xxxxxxxxxxxxxxx>
> Subject: [PATCH v5] x86/emulate: Send vm_event from emulate
> 
> This patch aims to have mem access vm events sent from the emulator.
> This is useful in cases where we want to emulate only a page walk
> without checking the EPT, but still want to check the EPT when
> emulating the instruction that caused the walk. Here the original
> EPT fault is caused by the walk trying to set the accessed or dirty
> bits, but executing the instruction itself might also cause an EPT
> fault if permitted to run, and this second fault should not be lost.
> 
> We use hvmemul_map_linear_addr() to intercept r/w access and
> __hvm_copy() to intercept exec access.
> 
> First we try to send a vm event; if the event is sent, emulation
> returns X86EMUL_RETRY in order to stop emulating instructions that
> touch access-protected pages. If no event is sent, emulation
> continues as expected.
> 
> Signed-off-by: Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>

Emulation parts...

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

...with one nit, inline below...

> 
> ---
> Changes since V4:
>       - Move the exec interception to __hvm_copy()
>       - Remove the page-walk in hvm_emulate_send_vm_event() and get
>         the needed address from the existing page walk
>       - Add send_event param to __hvm_copy() and
>         hvm_copy_from_guest_linear()
>       - Drop X86EMUL_ACCESS_EXCEPTION and use X86EMUL_RETRY instead.
> ---
>  xen/arch/x86/hvm/emulate.c        | 71 +++++++++++++++++++++++++++++--
>  xen/arch/x86/hvm/hvm.c            | 27 +++++++-----
>  xen/arch/x86/hvm/svm/svm.c        |  2 +-
>  xen/arch/x86/hvm/vm_event.c       |  2 +-
>  xen/arch/x86/hvm/vmx/vvmx.c       |  2 +-
>  xen/arch/x86/mm/mem_access.c      |  3 +-
>  xen/arch/x86/mm/shadow/common.c   |  4 +-
>  xen/arch/x86/mm/shadow/hvm.c      |  2 +-
>  xen/include/asm-x86/hvm/emulate.h |  9 +++-
>  xen/include/asm-x86/hvm/support.h |  2 +-
>  10 files changed, 101 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 8659c89862..9b2d8c2014 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -12,9 +12,11 @@
>  #include <xen/init.h>
>  #include <xen/lib.h>
>  #include <xen/sched.h>
> +#include <xen/monitor.h>
>  #include <xen/paging.h>
>  #include <xen/trace.h>
>  #include <xen/vm_event.h>
> +#include <asm/altp2m.h>
>  #include <asm/event.h>
>  #include <asm/i387.h>
>  #include <asm/xstate.h>
> @@ -530,6 +532,57 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
>      return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
>  }
> 
> +bool hvm_emulate_send_vm_event(unsigned long gla, gfn_t gfn,
> +                               uint32_t pfec, bool send_event)
> +{
> +    xenmem_access_t access;
> +    vm_event_request_t req = {};
> +    paddr_t gpa = ((gfn_x(gfn) << PAGE_SHIFT) | (gla & ~PAGE_MASK));
> +
> +    if ( !send_event || !pfec )
> +        return false;
> +
> +    if ( p2m_get_mem_access(current->domain, gfn, &access,
> +                            altp2m_vcpu_idx(current)) != 0 )
> +        return false;
> +
> +    switch ( access ) {
> +    case XENMEM_access_x:
> +    case XENMEM_access_rx:
> +        if ( pfec & PFEC_write_access )
> +            req.u.mem_access.flags = MEM_ACCESS_R | MEM_ACCESS_W;
> +        break;
> +
> +    case XENMEM_access_w:
> +    case XENMEM_access_rw:
> +        if ( pfec & PFEC_insn_fetch )
> +            req.u.mem_access.flags = MEM_ACCESS_X;
> +        break;
> +
> +    case XENMEM_access_r:
> +    case XENMEM_access_n:
> +        if ( pfec & PFEC_write_access )
> +            req.u.mem_access.flags |= MEM_ACCESS_R | MEM_ACCESS_W;
> +        if ( pfec & PFEC_insn_fetch )
> +            req.u.mem_access.flags |= MEM_ACCESS_X;
> +        break;
> +
> +    default:
> +        return false;
> +    }
> +
> +    if ( !req.u.mem_access.flags )
> +        return false; /* no violation */
> +
> +    req.reason = VM_EVENT_REASON_MEM_ACCESS;
> +    req.u.mem_access.gfn = gfn_x(gfn);
> +    req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA | MEM_ACCESS_GLA_VALID;
> +    req.u.mem_access.gla = gla;
> +    req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);

& ~PAGE_MASK?

> +
> +    return monitor_traps(current, true, &req) >= 0;
> +}
> +

 

