
Re: [PATCH v2 for-4.14 2/3] xen/vm_event: add vm_event_check_pending_op



On Tue, Jun 2, 2020 at 5:47 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Wed, May 20, 2020 at 08:31:53PM -0600, Tamas K Lengyel wrote:
> > Perform sanity checking when shutting vm_event down to determine whether
> > it is safe to do so. Error out with -EAGAIN if pending operations are
> > found for the domain.
> >
> > Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
> > ---
> >  xen/arch/x86/vm_event.c        | 23 +++++++++++++++++++++++
> >  xen/common/vm_event.c          | 17 ++++++++++++++---
> >  xen/include/asm-arm/vm_event.h |  7 +++++++
> >  xen/include/asm-x86/vm_event.h |  2 ++
> >  4 files changed, 46 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
> > index 848d69c1b0..a23aadc112 100644
> > --- a/xen/arch/x86/vm_event.c
> > +++ b/xen/arch/x86/vm_event.c
> > @@ -297,6 +297,29 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> >      };
> >  }
> >
> > +bool vm_event_check_pending_op(const struct vcpu *v)
> > +{
> > +    struct monitor_write_data *w = &v->arch.vm_event->write_data;
>
> const
>
> > +
> > +    if ( !v->arch.vm_event->sync_event )
> > +        return false;
> > +
> > +    if ( w->do_write.cr0 )
> > +        return true;
> > +    if ( w->do_write.cr3 )
> > +        return true;
> > +    if ( w->do_write.cr4 )
> > +        return true;
> > +    if ( w->do_write.msr )
> > +        return true;
> > +    if ( v->arch.vm_event->set_gprs )
> > +        return true;
> > +    if ( v->arch.vm_event->emulate_flags )
> > +        return true;
>
> Can you please group this into a single if, ie:
>
> if ( w->do_write.cr0 || w->do_write.cr3 || ... )
>     return true;
>
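> Or even fold the whole thing into a single return (untested sketch,
> using the same fields as above):
>
>     return w->do_write.cr0 || w->do_write.cr3 || w->do_write.cr4 ||
>            w->do_write.msr || v->arch.vm_event->set_gprs ||
>            v->arch.vm_event->emulate_flags;
>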
> > +
> > +    return false;
> > +}
> > +
> >  /*
> >   * Local variables:
> >   * mode: C
> > diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> > index 127f2d58f1..2df327a42c 100644
> > --- a/xen/common/vm_event.c
> > +++ b/xen/common/vm_event.c
> > @@ -183,6 +183,7 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
> >      if ( vm_event_check_ring(ved) )
> >      {
> >          struct vcpu *v;
> > +        bool pending_op = false;
> >
> >          spin_lock(&ved->lock);
> >
> > @@ -192,9 +193,6 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
> >              return -EBUSY;
> >          }
> >
> > -        /* Free domU's event channel and leave the other one unbound */
> > -        free_xen_event_channel(d, ved->xen_port);
> > -
> >          /* Unblock all vCPUs */
> >          for_each_vcpu ( d, v )
> >          {
> > @@ -203,8 +201,21 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
> >                  vcpu_unpause(v);
> >                  ved->blocked--;
> >              }
> > +
> > +            if ( vm_event_check_pending_op(v) )
> > +                pending_op = true;
>
> You could just do:
>
> pending_op |= vm_event_check_pending_op(v);
>
> which is a bit more compact (pending_op still needs its initialization
> above). Or alternatively:
>
> if ( !pending_op && vm_event_check_pending_op(v) )
>     pending_op = true;
>
> Which avoid repeated calls to vm_event_check_pending_op when at least
> one vCPU is known to be busy.
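>
> I.e. appended at the bottom of the existing unblock loop, something
> like (untested sketch, existing unblock logic elided):
>
>     for_each_vcpu ( d, v )
>     {
>         /* ... existing unblock handling ... */
>
>         if ( !pending_op && vm_event_check_pending_op(v) )
>             pending_op = true;
>     }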
>
> >          }
> >
> > +        /* vm_event ops are still pending until vCPUs get scheduled */
> > +        if ( pending_op )
> > +        {
> > +            spin_unlock(&ved->lock);
> > +            return -EAGAIN;
>
> What happens when this gets called from vm_event_cleanup?
>
> AFAICT the result there is ignored, and could leak the vm_event
> allocated memory?
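>
> For reference, the cleanup path drops the return value -- roughly,
> from memory (the exact guards may differ):
>
>     void vm_event_cleanup(struct domain *d)
>     {
>         if ( vm_event_check_ring(d->vm_event_monitor) )
>         {
>             destroy_waitqueue_head(&d->vm_event_monitor->wq);
>             /* A -EAGAIN from the new check is silently dropped here. */
>             (void)vm_event_disable(d, &d->vm_event_monitor);
>         }
>         /* likewise for the paging and sharing rings */
>     }
>
> so bailing out with -EAGAIN at this point would leave the ring and the
> vm_event structures allocated.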

Thanks for the feedback. I'm going to drop this patch and let
Bitdefender pick it up if they feel like fixing their buggy feature.
As things stand, for my use-case I only need patch 1 from this series.

Tamas