
Re: [Xen-devel] [PATCH] x86/hvm: finish IOREQ correctly on completion path



> -----Original Message-----
> From: Igor Druzhinin [mailto:igor.druzhinin@xxxxxxxxxx]
> Sent: 08 March 2019 21:31
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; jbeulich@xxxxxxxx; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Roger Pau Monne 
> <roger.pau@xxxxxxxxxx>;
> Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
> Subject: [PATCH] x86/hvm: finish IOREQ correctly on completion path
> 
> Since the introduction of the linear_{read,write}() helpers in 3bdec530a5
> (x86/HVM: split page straddling emulated accesses in more cases) the
> completion path for IOREQs has been broken: if there is an IOREQ in
> progress but hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay
> (e.g. because the P2M type of the source/destination has been changed by
> the IOREQ handler), execution will never re-enter hvmemul_do_io(), where
> IOREQs are completed. This usually results in a domain crash when the
> next IOREQ enters hvmemul_do_io() and finds the remnants of the previous
> IOREQ in the state machine.
> 
> This particular issue was discovered in relation to the p2m_ioreq_server
> type, where an emulator changed the memory type between p2m_ioreq_server
> and p2m_ram_rw while responding to an IOREQ, which made hvm_copy_..()
> behave differently on the way back. But the same applies to cases where
> e.g. an emulator balloons memory to/from the guest in response to an
> MMIO read/write, etc.
> 
> Fix it by checking whether IOREQ completion is required before trying to
> finish the memory access immediately through hvm_copy_..(), and by
> re-entering hvmemul_do_io() otherwise.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/emulate.c | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 41aac28..36f8fee 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
>                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      pagefault_info_t pfinfo;
> -    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> +
> +    /*
> +     * If the memory access can be handled immediately - do it,
> +     * otherwise re-enter ioreq completion path to properly consume it.
> +     */
> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> +        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);

I think this is the right thing to do, but can we change the comment text to
something like:

"If there is a pending ioreq then we must be re-issuing an access that was
previously handled as MMIO. Thus it is imperative that we handle this access
in the same way, to guarantee completion and hence clean up any interim
state."
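
I.e., applied to your linear_read() hunk above (just a sketch re-using your
code with the reworded comment; linear_write() would mirror it):

    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
    int rc = HVMTRANS_bad_gfn_to_mfn;

    /*
     * If there is a pending ioreq then we must be re-issuing an access
     * that was previously handled as MMIO. Thus it is imperative that we
     * handle this access in the same way, to guarantee completion and
     * hence clean up any interim state.
     */
    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);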

  Paul

> 
>      switch ( rc )
>      {
> @@ -1123,7 +1131,15 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>                          uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      pagefault_info_t pfinfo;
> -    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> +
> +    /*
> +     * If the memory access can be handled immediately - do it,
> +     * otherwise re-enter ioreq completion path to properly consume it.
> +     */
> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> +        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> 
>      switch ( rc )
>      {
> --
> 2.7.4



 

