
Re: [Xen-devel] [PATCH v2 07/11] ioreq: allow registering internal ioreq server handler




> -----Original Message-----
> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: 03 September 2019 17:14
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: [PATCH v2 07/11] ioreq: allow registering internal ioreq server handler
> 
> Provide a routine to register the handler for an internal ioreq
> server.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Changes since v1:
>  - Allow providing an opaque data parameter to pass to the handler.
>  - Allow changing the handler as long as the server is not enabled.
> ---
>  xen/arch/x86/hvm/ioreq.c        | 35 +++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/hvm/ioreq.h |  4 ++++
>  2 files changed, 39 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 8331a89eae..6339e5f884 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -485,6 +485,41 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>      return rc;
>  }
> 
> +int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,

I did ask for 'hvm_set_ioreq_handler()'. I think that name makes more sense,
since there is no corresponding 'hvm_remove_ioreq_handler()'.
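
For illustration, only the name would change; the declaration would then
read:

    int hvm_set_ioreq_handler(struct domain *d, ioservid_t id,
                              int (*handler)(struct vcpu *v, ioreq_t *, void *),
                              void *data);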

> +                          int (*handler)(struct vcpu *v, ioreq_t *, void *),
> +                          void *data)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc = 0;
> +
> +    if ( !hvm_ioreq_is_internal(id) )
> +    {
> +        rc = -EINVAL;
> +        goto out;

You just want to return here, because you're not holding the lock at this
point: the 'out' label below drops a lock that was never acquired.

  Paul
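
A minimal sketch of the suggested shape, otherwise unchanged from the patch
above (only the failing hvm_ioreq_is_internal() check returns before the
lock is taken):

    int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                              int (*handler)(struct vcpu *v, ioreq_t *, void *),
                              void *data)
    {
        struct hvm_ioreq_server *s;
        int rc = 0;

        /* The lock is not yet held here, so return instead of 'goto out'. */
        if ( !hvm_ioreq_is_internal(id) )
            return -EINVAL;

        spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);

        s = get_ioreq_server(d, id);
        if ( !s )
        {
            rc = -ENOENT;
            goto out;
        }
        if ( s->enabled )
        {
            rc = -EBUSY;
            goto out;
        }

        s->handler = handler;
        s->data = data;

     out:
        spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);

        return rc;
    }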

> +    }
> +
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> +    s = get_ioreq_server(d, id);
> +    if ( !s )
> +    {
> +        rc = -ENOENT;
> +        goto out;
> +    }
> +    if ( s->enabled )
> +    {
> +        rc = -EBUSY;
> +        goto out;
> +    }
> +
> +    s->handler = handler;
> +    s->data = data;
> +
> + out:
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> +
> +    return rc;
> +}
> +
>  static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
>                                      struct hvm_ioreq_vcpu *sv)
>  {
> diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> index c3917aa74d..90cc2aa938 100644
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -54,6 +54,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
> 
>  void hvm_ioreq_init(struct domain *d);
> 
> +int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
> +                          int (*handler)(struct vcpu *v, ioreq_t *, void *),
> +                          void *data);
> +
>  static inline bool hvm_ioreq_is_internal(unsigned int id)
>  {
>      ASSERT(id < MAX_NR_IOREQ_SERVERS);
> --
> 2.22.0
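
A hypothetical caller, for reference. The handler body, its X86EMUL_*
return convention and the names 'internal_handler', 'id' and 'opaque' are
illustrative assumptions, not defined by this patch:

    static int internal_handler(struct vcpu *v, ioreq_t *p, void *data)
    {
        /* Complete the emulation of the request described by 'p'. */
        return X86EMUL_OKAY;
    }

    /* 'id' was obtained when the internal ioreq server was created. */
    rc = hvm_add_ioreq_handler(d, id, internal_handler, opaque);
    if ( rc )
        return rc;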

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
