
Re: [Xen-devel] [PATCH] x86/domain: don't destroy IOREQ servers on soft reset



> -----Original Message-----
> From: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
> Sent: 30 August 2019 12:57
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: jbeulich@xxxxxxxx; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant
> <Paul.Durrant@xxxxxxxxxx>; wl@xxxxxxx; Roger Pau Monne 
> <roger.pau@xxxxxxxxxx>; Igor Druzhinin
> <igor.druzhinin@xxxxxxxxxx>
> Subject: [PATCH] x86/domain: don't destroy IOREQ servers on soft reset
> 
> Performing a soft reset should not opportunistically kill IOREQ servers
> for device emulators that might currently be running for a domain.
> Every emulator is supposed to clean up its own IOREQ servers on exit.
> This allows a toolstack to elect whether or not a particular device
> model should be restarted.
> 
> The original code was introduced in 3235cbfe ("arch-specific hooks for
> domain_soft_reset()") likely because the 'default' IOREQ server, which
> existed in Xen at the time and was used by QEMU, had no API call to
> destroy it. Since the removal of the 'default' IOREQ server from Xen,
> this reason has gone away.
> 
> Since commit ba7fdd64b ("xen: cleanup IOREQ server on exit") QEMU now
> destroys its IOREQ server itself, as every other device emulator is
> supposed to do. It's now safe to remove this code from the soft reset
> path - existing systems with an old QEMU should keep working: even if
> IOREQ servers are left behind, a new QEMU instance will override their
> ranges anyway.
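
To illustrate the last point, a fresh emulator instance creates its own
IOREQ server and registers the ranges it emulates at startup, along the
lines of this minimal sketch against libxendevicemodel (the VGA port
range is just an assumed example, and HVM_IOREQSRV_BUFIOREQ_OFF comes
from the public dm_op.h definitions):

    #include <xendevicemodel.h>

    /* Create a new IOREQ server, register an emulated port range and
     * enable the server. */
    static int emulator_start(xendevicemodel_handle *dmod, domid_t domid,
                              ioservid_t *id)
    {
        int rc;

        rc = xendevicemodel_create_ioreq_server(
            dmod, domid, HVM_IOREQSRV_BUFIOREQ_OFF, id);
        if ( rc )
            return rc;

        /* Claim the legacy VGA port range (is_mmio == 0). */
        rc = xendevicemodel_map_io_range_to_ioreq_server(
            dmod, domid, *id, 0, 0x3c0, 0x3df);
        if ( rc )
            return rc;

        /* Start servicing I/O requests. */
        return xendevicemodel_set_ioreq_server_state(dmod, domid, *id, 1);
    }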
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> ---
>  xen/arch/x86/domain.c         | 2 --
>  xen/arch/x86/hvm/hvm.c        | 5 -----
>  xen/include/asm-x86/hvm/hvm.h | 1 -
>  3 files changed, 8 deletions(-)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 2df3123..957f059 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -713,8 +713,6 @@ int arch_domain_soft_reset(struct domain *d)
>      if ( !is_hvm_domain(d) )
>          return -EINVAL;
> 
> -    hvm_domain_soft_reset(d);
> -
>      spin_lock(&d->event_lock);
>      for ( i = 0; i < d->nr_pirqs ; i++ )
>      {
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 029eea3..2b81899 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5045,11 +5045,6 @@ void hvm_toggle_singlestep(struct vcpu *v)
>      v->arch.hvm.single_step = !v->arch.hvm.single_step;
>  }
> 
> -void hvm_domain_soft_reset(struct domain *d)
> -{
> -    hvm_destroy_all_ioreq_servers(d);
> -}
> -
>  /*
>   * Segment caches in VMCB/VMCS are inconsistent about which bits are checked,
>   * important, and preserved across vmentry/exit.  Cook the values to make them
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index b327bd2..4e72d07 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -240,7 +240,6 @@ extern const struct hvm_function_table *start_vmx(void);
>  int hvm_domain_initialise(struct domain *d);
>  void hvm_domain_relinquish_resources(struct domain *d);
>  void hvm_domain_destroy(struct domain *d);
> -void hvm_domain_soft_reset(struct domain *d);
> 
>  int hvm_vcpu_initialise(struct vcpu *v);
>  void hvm_vcpu_destroy(struct vcpu *v);
> --
> 2.7.4

