
Re: [Xen-devel] [PATCH] x86/HVM: relinquish resources also from hvm_domain_destroy()



On Tue, Jan 28, 2020 at 02:16:53PM +0100, Jan Beulich wrote:
> Domain creation failure paths don't call domain_relinquish_resources(),
> yet allocations and the like done from hvm_domain_initialise() need to be
> undone nevertheless. Call the function also from hvm_domain_destroy(),
> after making sure all descendants are idempotent.
> 
> Note that while viridian_{domain,vcpu}_deinit() were already used in
> ways suggesting they're idempotent, viridian_time_vcpu_deinit() actually
> wasn't: One can't kill a timer that was never initialized.
> 
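
FWIW the guard needed is the same shape as the rtc_deinit() hunk further
down: check the timer's init state before killing it. Purely as an
illustration (not necessarily the exact viridian change; t stands for
whichever struct timer is being torn down):

    /* A timer never passed to init_timer() still has the all-zeroes
     * status, i.e. TIMER_STATUS_invalid, so there is nothing to kill. */
    if ( t->status == TIMER_STATUS_invalid )
        return;

    kill_timer(t);
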
> For hvm_destroy_all_ioreq_servers()'s purposes make
> relocate_portio_handler() return whether the to be relocated port range
> was actually found. This seems cheaper than introducing a flag into
> struct hvm_domain's ioreq_server sub-structure.
> 
> In hvm_domain_initialise() additionally
> - use XFREE() also to replace adjacent xfree(),
> - use hvm_domain_relinquish_resources() as being idempotent now.
> 
> Fixes: e7a9b5e72f26 ("viridian: separately allocate domain and vcpu structures")
> Fixes: 26fba3c85571 ("viridian: add implementation of synthetic timers")
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -751,7 +751,7 @@ void hpet_deinit(struct domain *d)
>      int i;
>      HPETState *h = domain_vhpet(d);
>  
> -    if ( !has_vhpet(d) )
> +    if ( !has_vhpet(d) || !d->arch.hvm.pl_time || !h->stime_freq )
>          return;
>  
>      write_lock(&h->lock);
> @@ -763,6 +763,8 @@ void hpet_deinit(struct domain *d)
>          for ( i = 0; i < HPET_TIMER_NUM; i++ )
>              if ( timer_enabled(h, i) )
>                  hpet_stop_timer(h, i, guest_time);
> +
> +        h->hpet.config = 0;
>      }
>  
>      write_unlock(&h->lock);
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -696,24 +696,24 @@ int hvm_domain_initialise(struct domain
>      return 0;
>  
>   fail2:
> -    rtc_deinit(d);
>      stdvga_deinit(d);
>      vioapic_deinit(d);
>   fail1:
>      if ( is_hardware_domain(d) )
>          xfree(d->arch.hvm.io_bitmap);
> -    xfree(d->arch.hvm.io_handler);
> -    xfree(d->arch.hvm.params);
> -    xfree(d->arch.hvm.pl_time);
> -    xfree(d->arch.hvm.irq);
> +    XFREE(d->arch.hvm.io_handler);
> +    XFREE(d->arch.hvm.params);
> +    XFREE(d->arch.hvm.pl_time);
> +    XFREE(d->arch.hvm.irq);
>   fail0:
>      hvm_destroy_cacheattr_region_list(d);
>      destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
>   fail:
> -    viridian_domain_deinit(d);
> +    hvm_domain_relinquish_resources(d);

Would be nice to unify all those labels into a single fail label, but
I'm not sure if all functions are prepared to handle this.
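
For illustration only, roughly what that could look like, assuming every
helper below copes with being called before its matching init ran (which
is exactly the part I'm not sure about):

 fail:
    /* Hypothetical unified error path, not part of the patch. */
    hvm_domain_relinquish_resources(d);
    stdvga_deinit(d);
    vioapic_deinit(d);
    if ( is_hardware_domain(d) )
        xfree(d->arch.hvm.io_bitmap);
    XFREE(d->arch.hvm.io_handler);
    XFREE(d->arch.hvm.params);
    XFREE(d->arch.hvm.pl_time);
    XFREE(d->arch.hvm.irq);
    hvm_destroy_cacheattr_region_list(d);
    destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
    return rc;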

>      return rc;
>  }
>  
> +/* This function and all its descendants need to be idempotent. */
>  void hvm_domain_relinquish_resources(struct domain *d)
>  {
>      if ( hvm_funcs.domain_relinquish_resources )
> @@ -742,6 +742,13 @@ void hvm_domain_destroy(struct domain *d
>      struct list_head *ioport_list, *tmp;
>      struct g2m_ioport *ioport;
>  
> +    /*
> +     * hvm_domain_relinquish_resources() would not be called when domain
> +     * initialization fails (late enough), so call it here. This requires
> +     * the function and all its descendants to be idempotent.
> +     */
> +    hvm_domain_relinquish_resources(d);
> +
>      XFREE(d->arch.hvm.io_handler);
>      XFREE(d->arch.hvm.params);
>  
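
As an aside, this is also what the xfree() -> XFREE() switch in the
earlier hunk buys: the XFREE() flavour also clears the pointer after
freeing it, so a repeated teardown pass sees NULL and does nothing. In
plain C terms (standalone sketch, nothing Xen-specific):

    #include <stdlib.h>

    struct state { void *buf; };

    /* Idempotent teardown: safe to run any number of times, even if the
     * matching init never completed. */
    static void state_teardown(struct state *s)
    {
        free(s->buf);   /* free(NULL) is a defined no-op */
        s->buf = NULL;  /* a repeated call then has nothing left to do */
    }
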
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1228,6 +1228,9 @@ void hvm_destroy_all_ioreq_servers(struc
>      struct hvm_ioreq_server *s;
>      unsigned int id;
>  
> +    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
> +        return;

Could you add a note here that the 'relocation' is just used as a means
to figure out whether ioreq servers have been set up (ie: the lock has
been initialized)?

I find this kind of dirty, and it would need rework if other arches gain
ioreq support.
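
Something along these lines (wording to taste) would do for me:

    /*
     * The 'relocation' below is only a probe: if no handler covers port
     * 0xcf8, HVM init never got far enough to set up ioreq servers (in
     * particular the lock is not initialized), so there is nothing to
     * tear down.
     */
    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
        return;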

> +
>      spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>  
>      /* No need to domain_pause() as the domain is being torn down */
> --- a/xen/arch/x86/hvm/intercept.c
> +++ b/xen/arch/x86/hvm/intercept.c
> @@ -300,7 +300,7 @@ void register_portio_handler(struct doma
>      handler->portio.action = action;
>  }
>  
> -void relocate_portio_handler(struct domain *d, unsigned int old_port,
> +bool relocate_portio_handler(struct domain *d, unsigned int old_port,
>                               unsigned int new_port, unsigned int size)
>  {
>      unsigned int i;
> @@ -317,9 +317,11 @@ void relocate_portio_handler(struct doma
>               (handler->portio.size = size) )
>          {
>              handler->portio.port = new_port;
> -            break;
> +            return true;
>          }
>      }
> +
> +    return false;
>  }
>  
>  bool_t hvm_mmio_internal(paddr_t gpa)
> --- a/xen/arch/x86/hvm/pmtimer.c
> +++ b/xen/arch/x86/hvm/pmtimer.c
> @@ -373,7 +373,7 @@ void pmtimer_deinit(struct domain *d)
>  {
>      PMTState *s = &d->arch.hvm.pl_time->vpmt;
>  
> -    if ( !has_vpm(d) )
> +    if ( !has_vpm(d) || !d->arch.hvm.pl_time || !s->vcpu )
>          return;
>  
>      kill_timer(&s->timer);
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -844,7 +844,8 @@ void rtc_deinit(struct domain *d)
>  {
>      RTCState *s = domain_vrtc(d);
>  
> -    if ( !has_vrtc(d) )
> +    if ( !has_vrtc(d) || !d->arch.hvm.pl_time ||
> +         s->update_timer.status == TIMER_STATUS_invalid )

You could also check for the port registration AFAICT, which is maybe
more obvious?

I also wonder whether all those time-related emulations could be
grouped into a single helper that could do the d->arch.hvm.pl_time check
once, instead of having to sprinkle such checks over each device?
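
Rough sketch of what I mean (pl_time_deinit() is a made-up name, and the
per-device checks on their own state would of course still be wanted):

    /* Hypothetical helper: hoist the common pl_time check, then let each
     * time device deal with its own "was I ever initialised?" state. */
    void pl_time_deinit(struct domain *d)
    {
        if ( !d->arch.hvm.pl_time )
            return;

        rtc_deinit(d);
        pmtimer_deinit(d);
        hpet_deinit(d);
    }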

Thanks, Roger.
