
Re: [Xen-devel] [PATCH v4] pvcalls-front: Avoid get_free_pages(GFP_KERNEL) under spinlock


  • To: Wen Yang <wen.yang99@xxxxxxxxxx>, jgross@xxxxxxxx, sstabellini@xxxxxxxxxx
  • From: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  • Date: Mon, 3 Dec 2018 10:59:25 -0500
  • Autocrypt: addr=boris.ostrovsky@xxxxxxxxxx; prefer-encrypt=mutual
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, zhong.weidong@xxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, Julia Lawall <julia.lawall@xxxxxxx>
  • Delivery-date: Mon, 03 Dec 2018 15:59:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 12/1/18 1:33 AM, Wen Yang wrote:
> The problem is that the allocation is performed with a spinlock held.
> The call tree is:
> pvcalls_front_accept() holds bedata->socket_lock.
>     -> create_active()
>         -> __get_free_pages() uses GFP_KERNEL
>
> create_active() is called from pvcalls_front_accept() (and
> pvcalls_front_connect()) with the spinlock held. The allocation must
> not sleep there, so GFP_KERNEL, which may sleep, cannot be used.
>
> This issue was detected using Coccinelle.
>
> v2: Add a function that performs the allocations, call it outside
>     the lock, and pass the allocated data to create_active().
>
> v3: Use the matching deallocators, i.e. free_page() and
>     free_pages(), respectively.
>
> v4: Pre-populate map (struct sock_mapping) rather than introducing
>     another new struct.
>
> Suggested-by: Juergen Gross <jgross@xxxxxxxx>
> Suggested-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> Suggested-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> Signed-off-by: Wen Yang <wen.yang99@xxxxxxxxxx>
> CC: Julia Lawall <julia.lawall@xxxxxxx>
> CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> CC: Juergen Gross <jgross@xxxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> CC: xen-devel@xxxxxxxxxxxxxxxxxxxx
> CC: linux-kernel@xxxxxxxxxxxxxxx
> ---
>  drivers/xen/pvcalls-front.c | 57 ++++++++++++++++++++++++++++++-------
>  1 file changed, 46 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index 77224d8f3e6f..555c9abdf58f 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -335,13 +335,18 @@ int pvcalls_front_socket(struct socket *sock)
>       return ret;
>  }
>  
> -static int create_active(struct sock_mapping *map, int *evtchn)
> +static void free_active_ring(struct sock_mapping *map)
>  {
> -     void *bytes;
> -     int ret = -ENOMEM, irq = -1, i;
> +     if (!map)
> +             return;
> +     free_pages((unsigned long)map->active.data.in,
> +                     map->active.ring->ring_order);
> +     free_page((unsigned long)map->active.ring);
> +}
>  
> -     *evtchn = -1;
> -     init_waitqueue_head(&map->active.inflight_conn_req);
> +static int alloc_active_ring(struct sock_mapping *map)
> +{
> +     void *bytes;
>  
>       map->active.ring = (struct pvcalls_data_intf *)
>               __get_free_page(GFP_KERNEL | __GFP_ZERO);
> @@ -352,6 +357,26 @@ static int create_active(struct sock_mapping *map, int *evtchn)
>                                       PVCALLS_RING_ORDER);
>       if (bytes == NULL)
>               goto out_error;
> +     map->active.data.in = bytes;
> +     map->active.data.out = bytes +
> +             XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER);
> +
> +     return 0;
> +
> +out_error:
> +     free_active_ring(map);
> +     return -ENOMEM;
> +}
> +
> +static int create_active(struct sock_mapping *map, int *evtchn)
> +{
> +     void *bytes;
> +     int ret = -ENOMEM, irq = -1, i;
> +
> +     *evtchn = -1;
> +     init_waitqueue_head(&map->active.inflight_conn_req);
> +
> +     bytes = map->active.data.in;

Why is this needed?

I may not be reading the diff correctly, but your patch appears to be
whitespace-damaged and I can't apply it.

>       for (i = 0; i < (1 << PVCALLS_RING_ORDER); i++)
>               map->active.ring->ref[i] = gnttab_grant_foreign_access(
>                       pvcalls_front_dev->otherend_id,
> @@ -361,10 +386,6 @@ static int create_active(struct sock_mapping *map, int *evtchn)
>               pvcalls_front_dev->otherend_id,
>               pfn_to_gfn(virt_to_pfn((void *)map->active.ring)), 0);
>  
> -     map->active.data.in = bytes;
> -     map->active.data.out = bytes +
> -             XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER);
> -
>       ret = xenbus_alloc_evtchn(pvcalls_front_dev, evtchn);
>       if (ret)
>               goto out_error;
> @@ -385,8 +406,7 @@ static int create_active(struct sock_mapping *map, int *evtchn)
>  out_error:
>       if (*evtchn >= 0)
>               xenbus_free_evtchn(pvcalls_front_dev, *evtchn);
> -     free_pages((unsigned long)map->active.data.in, PVCALLS_RING_ORDER);
> -     free_page((unsigned long)map->active.ring);
> +     free_active_ring(map);


I think that since you are allocating the data outside of this call, it
should also be freed outside when create_active() fails.
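
Something along these lines is what I have in mind (untested, only meant
to illustrate the ownership; request-id unwinding and the rest of the
error handling are omitted):

    /*
     * The caller (e.g. pvcalls_front_connect()) owns the ring memory:
     * it allocates before taking the lock and frees it on every failure
     * path, including a failure of create_active() itself.
     */
    ret = alloc_active_ring(map);
    if (ret < 0) {
            pvcalls_exit_sock(sock);
            return ret;
    }

    spin_lock(&bedata->socket_lock);
    ret = get_request(bedata, &req_id);
    if (ret < 0) {
            spin_unlock(&bedata->socket_lock);
            free_active_ring(map);
            pvcalls_exit_sock(sock);
            return ret;
    }

    ret = create_active(map, &evtchn);
    if (ret < 0) {
            spin_unlock(&bedata->socket_lock);
            /* freed by the caller, not inside create_active() */
            free_active_ring(map);
            pvcalls_exit_sock(sock);
            return ret;
    }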

-boris


>       return ret;
>  }
>  
> @@ -406,11 +426,17 @@ int pvcalls_front_connect(struct socket *sock, struct sockaddr *addr,
>               return PTR_ERR(map);
>  
>       bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
> +     ret = alloc_active_ring(map);
> +     if (ret < 0) {
> +             pvcalls_exit_sock(sock);
> +             return ret;
> +     }
>  
>       spin_lock(&bedata->socket_lock);
>       ret = get_request(bedata, &req_id);
>       if (ret < 0) {
>               spin_unlock(&bedata->socket_lock);
> +             free_active_ring(map);
>               pvcalls_exit_sock(sock);
>               return ret;
>       }
> @@ -780,12 +806,20 @@ int pvcalls_front_accept(struct socket *sock, struct socket *newsock, int flags)
>               }
>       }
>  
> +     ret = alloc_active_ring(map);
> +     if (ret < 0) {
> +             clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
> +                             (void *)&map->passive.flags);
> +             pvcalls_exit_sock(sock);
> +             return ret;
> +     }
>       spin_lock(&bedata->socket_lock);
>       ret = get_request(bedata, &req_id);
>       if (ret < 0) {
>               clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
>                         (void *)&map->passive.flags);
>               spin_unlock(&bedata->socket_lock);
> +             free_active_ring(map);
>               pvcalls_exit_sock(sock);
>               return ret;
>       }
> @@ -794,6 +828,7 @@ int pvcalls_front_accept(struct socket *sock, struct socket *newsock, int flags)
>               clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
>                         (void *)&map->passive.flags);
>               spin_unlock(&bedata->socket_lock);
> +             free_active_ring(map);
>               pvcalls_exit_sock(sock);
>               return -ENOMEM;
>       }


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

