
Re: [Xen-devel] [PATCH 1/6] xen-gntdev: Fix circular locking dependency



On 12/14/2010 06:55 AM, Daniel De Graaf wrote:
> apply_to_page_range will acquire the PTE lock while priv->lock is held, and
> mn_invl_range_start tries to acquire priv->lock with the PTE lock already
> held. Fix this by not holding priv->lock across the entire map operation.

Is priv->lock needed to protect the contents of map?

    J

> Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> ---
>  drivers/xen/gntdev.c |   19 +++++++++----------
>  1 files changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index a33e443..c582804 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -581,23 +581,22 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
>       if (!(vma->vm_flags & VM_WRITE))
>               map->flags |= GNTMAP_readonly;
>  
> +     spin_unlock(&priv->lock);
> +
>       err = apply_to_page_range(vma->vm_mm, vma->vm_start,
>                                 vma->vm_end - vma->vm_start,
>                                 find_grant_ptes, map);
> -     if (err) {
> -             goto unlock_out;
> -             if (debug)
> -                     printk("%s: find_grant_ptes() failure.\n", __FUNCTION__);
> -     }
> +     if (err)
> +             return err;
>  
>       err = map_grant_pages(map);
> -     if (err) {
> -             goto unlock_out;
> -             if (debug)
> -                     printk("%s: map_grant_pages() failure.\n", __FUNCTION__);
> -     }
> +     if (err)
> +             return err;
> +
>       map->is_mapped = 1;
>  
> +     return 0;
> +
>  unlock_out:
>       spin_unlock(&priv->lock);
>       return err;


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
