
[Xen-devel] Re: [PATCH v3 2/3] hvc_init(): Enforce one-time initialization.



On Tue, 2011-11-08 at 13:45 -0800, Miche Baker-Harvey wrote:
> hvc_init() must only be called once, and no thread should continue with
> hvc_alloc() until after initialization is complete.  The original code
> does not enforce either of these requirements.  A new mutex limits entry
> to hvc_init() to a single thread, and blocks all later comers until it
> has completed.
> 
> This patch fixes multiple crash symptoms.

Hi Miche,

A few nit-picky comments below ..

> @@ -84,6 +85,10 @@ static LIST_HEAD(hvc_structs);
>   * list traversal.
>   */
>  static DEFINE_SPINLOCK(hvc_structs_lock);
> +/*
> + * only one task does allocation at a time.
> + */
> +static DEFINE_MUTEX(hvc_ports_mutex);

The comment is wrong, isn't it? Only one task does _init_ at a time.
Once the driver is initialised allocs can run concurrently.

So shouldn't it be called hvc_init_mutex ?
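
ie. something like this, with the comment fixed up to match (just a sketch):

/*
 * Only one task does the one-time initialisation at a time.
 */
static DEFINE_MUTEX(hvc_init_mutex);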

> @@ -825,11 +830,15 @@ struct hvc_struct *hvc_alloc(uint32_t vtermno, int data,
>       int i;
>  
>       /* We wait until a driver actually comes along */
> +     mutex_lock(&hvc_ports_mutex);
>       if (!hvc_driver) {
>               int err = hvc_init();
> -             if (err)
> +             if (err) {
> +                     mutex_unlock(&hvc_ports_mutex);
>                       return ERR_PTR(err);
> +             }
>       }
> +     mutex_unlock(&hvc_ports_mutex);
>  
>       hp = kzalloc(ALIGN(sizeof(*hp), sizeof(long)) + outbuf_size,
>                       GFP_KERNEL);

It'd be cleaner I think to do all the locking in hvc_init(). That's safe
as long as you recheck !hvc_driver in hvc_init() with the lock held.
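
Roughly something like this (untested sketch; I'm assuming the mutex is
renamed as above and the existing one-time setup body is left as it is
today):

static int hvc_init(void)
{
        int err = 0;

        mutex_lock(&hvc_init_mutex);

        /*
         * Re-check under the lock: another caller may have raced us here
         * and already completed the initialisation while we were waiting.
         */
        if (hvc_driver)
                goto out;

        /* ... existing one-time setup (tty driver alloc/register etc.),
         * assigning hvc_driver last ... */

out:
        mutex_unlock(&hvc_init_mutex);
        return err;
}

hvc_alloc() then keeps its unlocked fast-path check and just calls
hvc_init() with no locking of its own:

        /* We wait until a driver actually comes along */
        if (!hvc_driver) {
                int err = hvc_init();
                if (err)
                        return ERR_PTR(err);
        }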

cheers

