
Re: [Xen-devel] [PATCH] x86/HVM: relinquish resources also from hvm_domain_destroy()



On Tue, Jan 28, 2020 at 04:49:09PM +0100, Jan Beulich wrote:
> On 28.01.2020 15:54, Roger Pau Monné wrote:
> > On Tue, Jan 28, 2020 at 02:16:53PM +0100, Jan Beulich wrote:
> >> Domain creation failure paths don't call domain_relinquish_resources(),
> >> yet allocations and the like done from hvm_domain_initialise() need to be
> >> undone nevertheless. Call the function also from hvm_domain_destroy(),
> >> after making sure all descendants are idempotent.
> >>
> >> Note that while viridian_{domain,vcpu}_deinit() were already used in
> >> ways suggesting they're idempotent, viridian_time_vcpu_deinit() actually
> >> wasn't: One can't kill a timer that was never initialized.
> >>
> >> For hvm_destroy_all_ioreq_servers()'s purposes make
> >> relocate_portio_handler() return whether the to-be-relocated port
> >> range was actually found. This seems cheaper than introducing a flag
> >> into struct hvm_domain's ioreq_server sub-structure.
> >>
> >> In hvm_domain_initialise() additionally
> >> - use XFREE() also to replace adjacent xfree(),
> >> - use hvm_domain_relinquish_resources() as being idempotent now.
> >>
> >> Fixes: e7a9b5e72f26 ("viridian: separately allocate domain and vcpu 
> >> structures")
> >> Fixes: 26fba3c85571 ("viridian: add implementation of synthetic timers")
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>
> >> --- a/xen/arch/x86/hvm/hpet.c
> >> +++ b/xen/arch/x86/hvm/hpet.c
> >> @@ -751,7 +751,7 @@ void hpet_deinit(struct domain *d)
> >>      int i;
> >>      HPETState *h = domain_vhpet(d);
> >>  
> >> -    if ( !has_vhpet(d) )
> >> +    if ( !has_vhpet(d) || !d->arch.hvm.pl_time || !h->stime_freq )
> >>          return;
> >>  
> >>      write_lock(&h->lock);
> >> @@ -763,6 +763,8 @@ void hpet_deinit(struct domain *d)
> >>          for ( i = 0; i < HPET_TIMER_NUM; i++ )
> >>              if ( timer_enabled(h, i) )
> >>                  hpet_stop_timer(h, i, guest_time);
> >> +
> >> +        h->hpet.config = 0;
> >>      }
> >>  
> >>      write_unlock(&h->lock);
> >> --- a/xen/arch/x86/hvm/hvm.c
> >> +++ b/xen/arch/x86/hvm/hvm.c
> >> @@ -696,24 +696,24 @@ int hvm_domain_initialise(struct domain
> >>      return 0;
> >>  
> >>   fail2:
> >> -    rtc_deinit(d);
> >>      stdvga_deinit(d);
> >>      vioapic_deinit(d);
> >>   fail1:
> >>      if ( is_hardware_domain(d) )
> >>          xfree(d->arch.hvm.io_bitmap);
> >> -    xfree(d->arch.hvm.io_handler);
> >> -    xfree(d->arch.hvm.params);
> >> -    xfree(d->arch.hvm.pl_time);
> >> -    xfree(d->arch.hvm.irq);
> >> +    XFREE(d->arch.hvm.io_handler);
> >> +    XFREE(d->arch.hvm.params);
> >> +    XFREE(d->arch.hvm.pl_time);
> >> +    XFREE(d->arch.hvm.irq);
> >>   fail0:
> >>      hvm_destroy_cacheattr_region_list(d);
> >>      destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
> >>   fail:
> >> -    viridian_domain_deinit(d);
> >> +    hvm_domain_relinquish_resources(d);
> > 
> > Would be nice to unify all those labels into a single fail label, but
> > I'm not sure if all functions are prepared to handle this.
> 
> That's the mid- to long-term plan, yes. But it has taken me quite a
> bit more time than intended to at least sort out this part. I really
> don't want to extend this right now (and even less so in this patch).
> 
> >> --- a/xen/arch/x86/hvm/ioreq.c
> >> +++ b/xen/arch/x86/hvm/ioreq.c
> >> @@ -1228,6 +1228,9 @@ void hvm_destroy_all_ioreq_servers(struc
> >>      struct hvm_ioreq_server *s;
> >>      unsigned int id;
> >>  
> >> +    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
> >> +        return;
> > 
> > Could you add a note here that the 'relocation' is just used as a
> > means to figure out whether ioreq servers have been set up (ie: the
> > lock has been initialized)?
> 
> Hmm, I'm explaining this in the description, and I thought the same
> number appearing twice would make the purpose sufficiently clear
> (anyone who still wonders can go back to the commit).
> 
> > I find this kind of dirty, and it would need rework if other arches
> > gain ioreq support.
> 
> This is x86 code - how are other arches going to be affected?
> Even if they gain ioreq server support, the notion of port I/O
> in general will remain x86-specific.
> 
> I agree it's a little dirty, but I consider it better than
> putting in another bool (and probably 7 bytes of padding) into
> struct hvm_domain.

Ack, I guess it would become clear enough for readers when seeing the
init function.

> 
> >> --- a/xen/arch/x86/hvm/rtc.c
> >> +++ b/xen/arch/x86/hvm/rtc.c
> >> @@ -844,7 +844,8 @@ void rtc_deinit(struct domain *d)
> >>  {
> >>      RTCState *s = domain_vrtc(d);
> >>  
> >> -    if ( !has_vrtc(d) )
> >> +    if ( !has_vrtc(d) || !d->arch.hvm.pl_time ||
> >> +         s->update_timer.status == TIMER_STATUS_invalid )
> > 
> > You could also check for the port registration AFAICT, which is maybe
> > more obvious?
> 
> You called that approach dirty above - I'd like to restrict it
> to just where no better alternative exists.

Ack, it didn't seem that bad here because this is an x86-emulated
device that relies on I/O ports, while the ioreq code (albeit x86
specific ATM) could be used by other arches, which would likely
prefer to avoid x86-specific details in generic functions like the
init or deinit ones.

> > I also wonder whether all those time-related emulations could be
> > grouped into a single helper, so that a single d->arch.hvm.pl_time
> > check would suffice instead of sprinkling such checks for each
> > device?
> 
> Quite possible, but not here and not now.

Sure.

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

