
Re: [Xen-devel] [PATCH 1/2] x86/HVM: suppress I/O completion for port output



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 09 April 2018 14:24
> To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant
> <Paul.Durrant@xxxxxxxxxx>; Juergen Gross <jgross@xxxxxxxx>
> Subject: [PATCH 1/2] x86/HVM: suppress I/O completion for port output
> 
> We don't break up port requests when they cross emulation entity
> boundaries, and a write to an I/O port is necessarily the last
> operation of an instruction instance, so there's no need to re-invoke
> the full emulation path upon receiving the result from an external
> emulator.
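
As an aside (purely illustrative, with made-up names rather than the actual
hvmemul_do_io() / hvm_vcpu_io_need_completion() code): the effect of the
change is that only requests which still have data to hand back to the
instruction emulator need a completion pass, so a port write can be treated
as finished as soon as its ioreq has been sent out:

#include <stdbool.h>

enum io_dir { IOREQ_READ, IOREQ_WRITE };

/* Simplified stand-in for the relevant ioreq fields. */
struct io_request {
    bool is_mmio;      /* MMIO access (true) vs. port I/O (false) */
    enum io_dir dir;   /* read or write */
    bool data_is_addr; /* data field holds a guest address, not a value */
};

/* Must the emulation path be re-entered once the ioreq has been handled? */
static bool needs_completion(const struct io_request *req)
{
    if ( req->data_is_addr )
        return false;          /* nothing to copy back in either direction */

    if ( !req->is_mmio && req->dir == IOREQ_WRITE )
        return false;          /* port output: last action of the insn */

    return true;               /* reads and MMIO writes still re-enter */
}
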
> 
> In case we want to properly split port accesses in the future, this
> change will need to be reverted, as it would prevent things from
> working correctly when e.g. the first part needs to go to an external
> emulator, while the second part is to be handled internally.
> 
> While this addresses the reported problem of Windows paging out the
> buffer underneath an in-progress REP OUTS, it does not address the
> wider problem that the insn re-issued to the insn emulator may raise
> an exception (#PF) during a replayed, previously successful memory
> access (we only record prior MMIO accesses).
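
To make that hazard concrete (again just an illustration with invented
names, not the Xen code): once the external emulator replies, the insn is
decoded and executed afresh rather than resumed, so a guest-memory read
that succeeded on the first pass is issued a second time and can fault if
the page was paged out in between:

#include <stdbool.h>
#include <stdio.h>

/* Toy model of re-executing a REP OUTS after the external emulator replies. */
static bool buffer_page_present = true;

static bool read_guest_buffer(void)
{
    return buffer_page_present;   /* false stands in for a #PF */
}

static const char *emulate_rep_outs(bool replay)
{
    if ( !read_guest_buffer() )
        return "#PF on the replayed read";
    return replay ? "completed" : "ioreq sent to external emulator";
}

int main(void)
{
    puts(emulate_rep_outs(false)); /* pass 1: read succeeds, ioreq goes out */
    buffer_page_present = false;   /* guest pages the buffer out meanwhile */
    puts(emulate_rep_outs(true));  /* pass 2: the same read now faults */
    return 0;
}
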
> 
> Leaving aside the problem being worked around here, I think the
> performance aspect alone is a good reason to change the behavior.
> 
> Also take the opportunity to change hvm_vcpu_io_need_completion()'s
> return type from bool_t to bool.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -282,7 +282,11 @@ static int hvmemul_do_io(
>              rc = hvm_send_ioreq(s, &p, 0);
>              if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
>                  vio->io_req.state = STATE_IOREQ_NONE;
> -            else if ( data_is_addr )
> +            /*
> +             * This effectively is !hvm_vcpu_io_need_completion(vio), slightly
> +             * optimized and using local variables we have available.
> +             */
> +            else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
>                  rc = X86EMUL_OKAY;
>          }
>          break;
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -91,10 +91,12 @@ struct hvm_vcpu_io {
>      const struct g2m_ioport *g2m_ioport;
>  };
> 
> -static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
> +static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
>  {
>      return (vio->io_req.state == STATE_IOREQ_READY) &&
> -           !vio->io_req.data_is_ptr;
> +           !vio->io_req.data_is_ptr &&
> +           (vio->io_req.type != IOREQ_TYPE_PIO ||
> +            vio->io_req.dir != IOREQ_WRITE);
>  }
> 
>  struct nestedvcpu {
> 
> 

